Search Results (30,401)

Search Parameters:
Keywords = the test information

27 pages, 8678 KB  
Review
Research on Silver-Based Wound Dressing: An Ontological Analysis
by Prabir K. Dutta, Thant Syn and Arkalgud Ramaprasad
Antibiotics 2026, 15(5), 462; https://doi.org/10.3390/antibiotics15050462 (registering DOI) - 2 May 2026
Abstract
Background/Objectives: Silver’s ability to kill pathogenic bacteria is being widely researched in environmental, consumer, and health-related applications. One topic of voluminous research is the antimicrobial properties of silver and of silver in wound dressings. This literature has been reviewed in articles using qualitative analyses, meta-analyses, systematic reviews, bibliometric analyses, and other grounded methods. We present a new strategy for analyzing the population of articles on the subject, based on an ontology of the topic. Methods: A search of the Scopus database for all peer-reviewed articles on silver in wound dressings yielded a population of 4711 relevant articles. The ontology is a logical deconstruction of the problem: “use of silver species on nanosupports deposited on a matrix with antimicrobial effectiveness assayed by methods to promote wound healing of chronic wounds as determined by recovery”. Each bolded term denotes a dimension of the ontology, and each dimension denotes a taxonomy of constituent elements. A Convolutional Neural Network (CNN) was trained on a manually mapped subset of articles and then used to map the full population. Results: Of the 4711 articles, 3079 dealt with silver and wound dressings; the others involved silver but were unrelated to wound dressings and were not considered. Overall analysis shows that three classes of silver encompass the entire field: silver nanoparticles (AgNPs) (78% of papers), inorganic silver-ion-containing species (7%), and silver associated with organic molecules (15%). AgNP papers have grown exponentially since the early 2000s; there is no clear trend for papers on inorganic silver-containing species; and papers on silver-organic species grew over the past decades but are now stabilizing. Research on AgNPs has primarily focused on in vitro testing (54%), with limited animal testing (17%) and human testing (3%). By contrast, for silver-organics, animal (30%) and human (38%) testing are prominent, and inorganic silver ion species have also been human-tested extensively (43%). Thus, in clinical applications of silver wound dressings, AgNPs lag considerably behind the other silver species, although academic research on AgNPs is robust. Conclusions: From detailed temporal visualizations of the ontological mapping, the antecedents and consequences of silver in wound dressings are presented. This first ontological analysis is a novel way of visualizing an entire research field, and the temporal characteristics of the various dimensions of the ontology indicate both the current state of research and where the field is headed.
(This article belongs to the Special Issue Metal-Based Antibiotics and Therapeutics)

26 pages, 4255 KB  
Article
Integration of Multi-Level Wavelet Decomposition and CNN for Brain Tumor MRI Classification
by Mahammad Ismayilov and Dalia Čalnerytė
Appl. Sci. 2026, 16(9), 4482; https://doi.org/10.3390/app16094482 (registering DOI) - 2 May 2026
Abstract
Magnetic resonance imaging (MRI) remains one of the most important tests for diagnosing and monitoring various diseases. In recent years, machine learning methods have been widely applied to automate MRI analysis, supporting decision-making by predicting disease and highlighting relevant regions. However, proper use of feature extraction methods can improve model performance. This paper proposes a WaveletFusion architecture that combines a two-dimensional Haar wavelet decomposition with a convolutional neural network (CNN) for classification. The approach was demonstrated on the Brain Tumor MRI dataset and further examined on the Br35H :: Brain Tumor Detection 2020 (Br35H) dataset. The model decomposes each MRI slice into approximation and directional detail subbands and fuses multi-scale wavelet features within the convolutional pipeline. To evaluate the effect of decomposition depth, WaveletFusion variants from one to eight levels were compared with a Baseline CNN model under the same training protocol. Performance improved progressively with increasing decomposition depth up to level 7, whereas the 8-level configuration consistently declined, indicating that excessive decomposition introduces information loss and over-compression in the deepest approximation pathway. The best-performing configuration, which outperformed both the Baseline CNN and the other WaveletFusion variants in five independent runs, was the 7-level model, achieving a test accuracy of 0.94 ± 0.01 and a test macro-F1 of 0.93 ± 0.02. A similar tendency was observed on the Br35H dataset, where the 7-level model achieved 0.97 ± 0.01 test accuracy and 0.97 ± 0.01 test macro-F1, while the 8-level configuration remained weaker on both datasets. These results show that multi-scale wavelet fusion can improve brain tumor MRI classification while maintaining a compact model size and a fair comparison setting, and that the decomposition depth must be selected carefully.
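The 2D Haar decomposition at the core of such an architecture is simple to state: each level averages and differences pixel pairs along both axes, yielding an approximation band (LL) and three directional detail bands (LH, HL, HH), and the next level recurses on LL. The following is an illustrative NumPy sketch, not the authors' WaveletFusion code; function names and the averaging normalization are ours:

```python
import numpy as np

def haar2d(x):
    """One level of the 2D Haar transform for an H x W image (H, W even).

    Returns the approximation band (LL) and the three directional
    detail bands (LH, HL, HH)."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # vertical pairwise average
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # vertical pairwise difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0  # horizontal average of averages
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def multilevel(x, levels):
    """Recursively decompose the approximation band `levels` times,
    collecting the detail bands produced at each level."""
    bands = []
    for _ in range(levels):
        x, lh, hl, hh = haar2d(x)
        bands.append((lh, hl, hh))
    return x, bands
```

With this averaging convention the deepest LL band of a full decomposition collapses to the image mean, which makes concrete why excessive depth over-compresses the approximation pathway.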
27 pages, 3299 KB  
Article
Neural Network Copulas for Generating Synthetic Test Data Preserving Psychometric Properties
by Juyoung Jung, Minho Lee and Won-Chan Lee
J. Intell. 2026, 14(5), 77; https://doi.org/10.3390/jintelligence14050077 (registering DOI) - 2 May 2026
Abstract
In intelligence research, the sharing of item response data from cognitive ability assessments is often restricted by privacy concerns, while traditional parametric simulation methods frequently fail to capture complex response dependencies. This study proposes a neural network copula (NNC) framework for generating synthetic dichotomous item response data that preserves essential psychometric properties without revealing sensitive examinee information. By decoupling the modeling of marginal item probabilities from the dependence structure using a deep autoencoder and kernel density estimation, the framework accommodates the discrete nature of binary item response data while minimizing distributional assumptions. Validation against large-scale empirical data demonstrated high correspondence across multiple facets. At the data consistency level, the NNC-based synthetic data reproduced total score distributions and inter-item correlations. Psychometrically, the method yielded consistent item characteristic curve parameter estimates, item fit statistics, and test information functions. Furthermore, Monte Carlo replications demonstrated algorithmic stability and inferential precision.

32 pages, 6629 KB  
Article
Risk-Aware Downlink Throughput Prediction in High-Density 5G Networks
by Najem N. Sirhan, Riyad Alrousan, Samar Al-Saqqa, Faten Hamad and Zaid Khrisat
Computation 2026, 14(5), 105; https://doi.org/10.3390/computation14050105 (registering DOI) - 2 May 2026
Abstract
Accurate short-horizon downlink throughput prediction is essential for automation in high-density 5G deployments (e.g., stadiums and events), where user load, scheduling decisions, and interference conditions change rapidly and produce highly variable user-perceived rates. This paper benchmarks lightweight regression models for per-user throughput prediction from readily available radio access network (RAN) key performance indicators (KPIs) and studies a risk-aware extension that augments point forecasts with calibrated uncertainty and an abstention (deferral) rule. Experiments use a strictly time-ordered train/calibration/test protocol on the Liverpool 5G High-Density Demand (L5GHDD) dataset. The target is strongly zero-inflated (about 62% of samples at 0 Mbps) and heavy-tailed, creating regimes where average-error optimization can mask rare but operationally important bursts. In the point-prediction benchmark, the best model is a tuned two-stage support vector regressor with a mean absolute error (MAE) of 0.452 Mbps, while the strongest single-stage model attains a weighted mean absolute percentage error (WMAPE) of 56.200%. For uncertainty quantification, we compare standard split conformal prediction against two input-adaptive alternatives. Constant-width split conformal attains 88.900% marginal coverage for a nominal 90% target with an average interval width of 2.288 Mbps, but width-based deferral is degenerate because all intervals have the same size. Variable-width conformal intervals preserve near-nominal coverage (91.100%) while producing informative width variation: normalized conformal reduces the average width to 1.344 Mbps, and conformalized quantile regression reduces it to 0.641 Mbps. At a deferral threshold of 1.5 Mbps, constant-width conformal defers all samples, whereas normalized conformal still acts on 61.200% of samples with a selective MAE of 0.219 Mbps. These results show that input-adaptive uncertainty is necessary for meaningful selective prediction in heteroscedastic 5G throughput dynamics.
(This article belongs to the Section Computational Engineering)
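Split conformal prediction, the baseline uncertainty method benchmarked above, is short enough to sketch. The constant-width variant calibrates a single residual quantile for all inputs, which is why width-based deferral degenerates; the normalized variant rescales residuals by a per-sample difficulty estimate sigma so interval widths track the input. This is an illustrative sketch under standard conformal assumptions, not the paper's implementation, and the sigma estimator is left abstract:

```python
import numpy as np

def split_conformal(cal_pred, cal_y, test_pred, alpha=0.1):
    """Constant-width split conformal intervals: one calibrated
    residual quantile q gives [pred - q, pred + q] for every input."""
    resid = np.abs(cal_y - cal_pred)
    n = len(resid)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)  # finite-sample correction
    q = np.quantile(resid, level)
    return test_pred - q, test_pred + q

def normalized_conformal(cal_pred, cal_y, cal_sigma,
                         test_pred, test_sigma, alpha=0.1):
    """Input-adaptive intervals: residuals are scaled by a per-sample
    difficulty estimate sigma, so interval width varies with the input
    and width-based deferral becomes informative."""
    scores = np.abs(cal_y - cal_pred) / cal_sigma
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level)
    return test_pred - q * test_sigma, test_pred + q * test_sigma
```

Both variants give approximately (1 - alpha) marginal coverage when calibration and test data are exchangeable; only the normalized one produces the varying widths needed for selective prediction.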
20 pages, 3674 KB  
Article
IMU-Based Time-Domain Fault Diagnosis of BLDC Motors Using an End-to-End 1D-CNN
by Ke Hao Wang, Hwi Gyu Lee, Seon Min Yoo and In Soo Lee
Modelling 2026, 7(3), 89; https://doi.org/10.3390/modelling7030089 (registering DOI) - 2 May 2026
Abstract
Reliable fault detection in brushless DC motors is challenging owing to environmental complexity and high equipment costs. To address these challenges, we propose an effective and cost-efficient approach using an optimized end-to-end one-dimensional convolutional neural network. Specifically, a real experimental platform simulating bearing and eccentricity faults was developed. Statistical t-tests indicated that three-axis accelerometer signals from a low-cost inertial measurement unit provided sufficient fault information for the present diagnosis task. Unlike traditional methods such as support vector machines, multilayer neural networks, and random forests, which rely on manual feature extraction, our model learns directly from raw waveforms and can handle signal drift. Under the present controlled experimental setting and the leave-one-day-out evaluation protocol, the model achieved 100.00% average window-level classification accuracy, considerably outperforming the traditional methods, whose performance declined to 67.95–71.37% under environmental shifts. Moreover, with an inference time of only 0.96 ms, 32 times faster than that of random forests, the approach is well suited for real-time embedded monitoring. The proposed method demonstrates strong potential for cost-efficient and robust fault diagnosis under the present experimental setting.
(This article belongs to the Special Issue Machine Learning and Artificial Intelligence in Modelling)

13 pages, 323 KB  
Article
Oculometric Function More Strongly Predicts Working Memory than Stress in Military Officers
by Mollie McGuire, Neda Bahrani, Quinn Kennedy and Dorion Liston
J. Eye Mov. Res. 2026, 19(3), 46; https://doi.org/10.3390/jemr19030046 (registering DOI) - 2 May 2026
Abstract
Working memory, the capacity to store information for near-immediate use, and visual attention, the ability to focus on task-relevant information, are integral skills for military personnel. In civilian populations, stress is associated with poorer performance in both skills. However, little is known about the relationship between stress, working memory, and visual attention in military officers, who are trained to handle acute stress and operate in high-stress environments. Thirty-three military officers completed a working memory test, the Perceived Stress Questionnaire (PSQ), and an oculometric assessment of visual tracking. The oculometric test was a modified step-ramp test that produces 10 z-scored metrics. Working memory and executive function were assessed via the n-back task. Oculometric performance and self-reported stress levels were independently associated with n-back accuracy, together explaining 67% of the variance (adjusted R2, n = 30). The association between oculometric performance and n-back accuracy was driven by directional anisotropy, directional noise, and proportion of smooth pursuit. The association between oculometric performance and stress was complicated by sex differences. These results have important implications for the assessment of cognitive readiness in military populations. The strong relationship between oculometric performance and working memory suggests that eye-tracking-based metrics may serve as candidate indicators of cognitive function under operational demands.

24 pages, 1603 KB  
Article
Deep Reinforcement Learning for Cryptocurrency Portfolio Management: A Free-Energy Framework with Geometry-Based Transaction Costs and Efficiency Bounds
by Ntebogang Dinah Moroke
Risks 2026, 14(5), 103; https://doi.org/10.3390/risks14050103 (registering DOI) - 2 May 2026
Abstract
This paper develops a deep reinforcement learning framework for cryptocurrency portfolio management in which transaction costs are derived from the Riemannian geometry of the underlying volatility model rather than assumed constant. A Proximal Policy Optimisation agent is trained on a reward function grounded in non-equilibrium thermodynamics: we use the free-energy Bellman equation, in which transaction costs are the geodesic slippage on the Fisher information manifold of a maximum-entropy Markov-switching GARCH model, and regime-transition costs are the Wasserstein-2 distance between the calm and turbulent return distributions. A thermodynamic Carnot bound on portfolio efficiency is established and empirically validated. Five hypotheses are tested across Bitcoin, Ethereum, Ripple, Litecoin, and Bitcoin Cash from January 2017 to March 2026. The geometric-cost agent achieves statistically superior Sharpe ratios relative to flat-fee baselines on four of five assets; portfolio turnover is reduced by 56 to 83 percent relative to signal-following; the thermodynamic friction point at which the agent prefers no-trade is asset-specific and ordered by turbulent half-life; a joint topological and geometric circuit breaker reduces maximum drawdown by 28 to 38 percent; and ablation confirms that every component of the observation vector contributes a statistically significant performance gain. The framework requires liquid cryptocurrency markets with validated parametric volatility models; transferability to other asset classes requires upstream recalibration.
(This article belongs to the Special Issue AI-Driven Financial Econometrics and Risk Management)
20 pages, 877 KB  
Article
Artificial Intelligence in Cancer Research: Modality Dependence and Limited Visual–Spatial Integration in Multimodal Large Language Models for Breast Cancer Histopathology
by Ibrahim Güler, Armin Kraus, Gerrit Grieb, Tevfik Satir and Henrik Stelling
Life 2026, 16(5), 763; https://doi.org/10.3390/life16050763 (registering DOI) - 2 May 2026
Abstract
Multimodal large language models (MLLMs) are increasingly considered for cancer diagnostic support, yet their suitability for histopathological image interpretation remains inadequately characterized. We evaluated six contemporary general-purpose MLLMs (Claude Opus 4.6, Claude Sonnet 4.6, Claude Haiku 4.5, ChatGPT 5.3, Grok 4.2, Gemini 3.1 Pro) on 58 paired hematoxylin and eosin (H&E)-stained breast cancer histopathology images (26 malignant, 32 benign) and corresponding nuclei segmentation masks. Each case was classified five times per model under three conditions: image only (IMAGE), mask only (MASK), and both combined (BOTH), yielding 5220 observations. Mean accuracy dropped from 69.4% (IMAGE) to 49.6% (MASK), below the majority-class baseline of 55.2%. Providing the mask together with the image did not improve classification (68.0%), and for ChatGPT 5.3 produced a net loss of 31 correct predictions. Models maintained elevated mean confidence (67.6) under MASK despite near-random accuracy, and reasoning categories shifted in 67.5% of matched case-run pairs between modalities. Under the conditions tested, current general-purpose MLLMs exhibit strong dependence on visual surface features, fail to effectively integrate spatial structural information, and maintain confidence independent of accuracy. These behavioral limitations are directly relevant to the safe deployment of MLLMs in cancer diagnostic workflows.
(This article belongs to the Section Biochemistry, Biophysics and Computational Biology)
20 pages, 3648 KB  
Article
Effective Mode Approximation for Probabilistic Verification of Collective Hamiltonians in Large Continuous-Variable Quantum Systems
by José R. Rosas-Bustos, Jesse Van Griensven Thé, Roydon Andrew Fraser, Nadeem Said, Sebastian Ratto Valderrama, Mark Pecen, Alexander Truskovsky and Andy Thanos
Entropy 2026, 28(5), 514; https://doi.org/10.3390/e28050514 (registering DOI) - 2 May 2026
Abstract
The Effective Mode Approximation (EMA) is a verification-oriented framework for characterizing collective Hamiltonian dynamics in large continuous-variable (CV) quantum systems from experimentally accessible collective measurements. Rather than reconstructing a full mode-resolved Hamiltonian, EMA maps the observed dynamics onto a canonically normalized collective mode and tests whether summed quadrature trajectories are consistent with an effective harmonic description. We validate EMA using time-resolved homodyne sampling in Gaussian simulations of ring-coupled multi-qumode optical systems with N = 8, 16, 32, and 64 modes. One-tone and two-tone sinusoidal models, selected using the Akaike Information Criterion (AIC), recover a stable dominant collective frequency across system sizes and produce residuals that remain centred near zero. The results show that EMA can verify dominant collective behaviour with a fixed number of effective parameters even when full microscopic reconstruction is impractical. EMA is therefore best understood not as a full-state ansatz, but as a low-overhead tool for validating collective dynamics under realistic measurement constraints in scalable CV hardware.
(This article belongs to the Section Quantum Information)
12 pages, 881 KB  
Article
Static and Dynamic Motor Control in Active Young Adults: Associations with Oswestry Disability Index and Functional Movement Screen Asymmetries
by Julio Martín-Ruiz and Iván Chulvi-Medrano
Healthcare 2026, 14(9), 1223; https://doi.org/10.3390/healthcare14091223 (registering DOI) - 2 May 2026
Abstract
Background: Low back pain (LBP) is a leading cause of disability, particularly in young adults. Decreased trunk endurance and altered movement patterns have been associated with lumbar symptoms and functional limitations; however, their concurrent relationships in active populations with minimal disability remain insufficiently characterized. This study was designed as an exploratory cross-sectional analytical study. Methods: The sample comprised 71 physically active university students (mean age, ~23 years; 79% men). Trunk endurance was assessed using the McGill isometric tests, and selected movement-pattern measures were obtained from four Functional Movement Screen (FMS) tasks focused on lumbopelvic control. The total FMS score was calculated, asymmetries were recorded in the Inline Lunge and Rotary Stability tasks, and lumbar-related disability was measured using the Oswestry Disability Index (ODI). Associations were analyzed using correlations and adjusted linear regression, and asymmetry-based comparisons were evaluated using non-parametric tests. Results: The average ODI was very low (approximately 4%), suggesting a floor effect. Greater trunk endurance was associated with lower ODI values, whereas the association between total FMS score and ODI was weak and did not reach statistical significance in the adjusted model. Inline Lunge asymmetry was associated with higher ODI values, but this finding should be interpreted cautiously because of the very small subgroup size. Conclusions: In this physically active young adult sample, trunk endurance and selected movement-pattern measures provided complementary descriptive information on lumbar-related function; however, the observed associations were modest and should be interpreted cautiously.

27 pages, 2474 KB  
Article
Thermal Characterization of Innovative Insulating Materials Through Different Methods: An Intra-Laboratory Study
by Giorgio Baldinelli, Francesco Asdrubali, Chiara Chiatti, Dante Maria Gandola, Stefano Fantucci, Valentina Serra, Valeria Villamil Cárdenas, Giorgia Autretto, Rossella Cottone and Cristiano Turrioni
Sustainability 2026, 18(9), 4474; https://doi.org/10.3390/su18094474 (registering DOI) - 2 May 2026
Abstract
Accurate thermal characterization of building insulation materials is essential for reliable energy performance assessment, regulatory compliance, and the development of high-performance envelopes. On the one hand, the growing adoption of innovative insulating products, such as nanoporous materials, aerogel-based composites, bio-based panels, and thin insulating coatings, helps to enhance buildings’ energy efficiency by means of sustainable raw materials. On the other hand, conventional measurement techniques encounter significant challenges with such products, owing to their heterogeneity, reduced thickness, and unconventional geometries. In this study, an intra-laboratory comparison of three widely used methods for thermal conductivity determination is presented: the Transient Plane Source (TPS, Hot Disk) method, the Guarded Hot Plate (GHP) method, and the Heat Flow Meter (HFM) method. A total of twelve insulating materials, spanning super-insulating cores, insulating renders, bio-based panels, and nanocomposite coatings, were experimentally characterized under controlled laboratory conditions. An overview of the analyzed materials’ cradle-to-grave environmental impact is also given to support informed material choices. The results highlight systematic differences between transient and steady-state approaches, with TPS measurements generally exhibiting larger deviations for materials characterized by surface roughness, limited thickness, or strong internal heterogeneity. In contrast, the GHP and HFM methods show closer agreement when specimen geometry and stabilization requirements are satisfied. The influence of contact resistance, probing depth, specimen preparation, and uncertainty propagation is critically analyzed for each technique. The study provides practical insights into the applicability limits of commonly used thermal characterization methods and emphasizes the importance of selecting measurement techniques in relation to material morphology and testing constraints. These findings support more reliable thermal property assessment of emerging insulation materials and contribute to improved consistency between laboratory measurements and energy performance evaluations for buildings.
(This article belongs to the Special Issue Built Environment and Sustainable Energy Efficiency)

25 pages, 2126 KB  
Article
Crying Wolf in Cyberspace: A Cybersecurity Dynamics Study of Alarm Fatigue Attacks
by Enrico Barbierato
Information 2026, 17(5), 434; https://doi.org/10.3390/info17050434 - 1 May 2026
Abstract
Modern cyber–physical infrastructures rely heavily on alarm and notification systems to direct human attention when abnormal conditions occur. These mechanisms support timely and safe responses by informing operators and occupants about potential hazards. At the same time, research in human factors has shown [...] Read more.
Modern cyber-physical infrastructures rely heavily on alarm and notification systems to direct human attention when abnormal conditions occur. These mechanisms support timely and safe responses by informing operators and occupants about potential hazards. At the same time, research in human factors has shown that repeated or excessive alerts can weaken vigilance, slow reactions, and reduce confidence in warning systems. This behavioral pattern is commonly described as alarm fatigue. This paper examines how that vulnerability can be exploited intentionally. We refer to this adversarial strategy as alarm poisoning: the deliberate injection of false or misleading alerts in order to increase alarm pressure, erode trust in the monitoring infrastructure, and degrade organizational responsiveness over time. To study this process, we develop a stochastic Cybersecurity Dynamics model representing the interaction among attackers, defenders, alarm infrastructure, and a population of employees. Employee behavior is modeled through evolving trust and fatigue levels, while the overall system is formulated as a continuous-time Markov chain and simulated using the Gillespie Stochastic Simulation Algorithm. A Monte Carlo campaign is used to analyze the resulting socio-technical dynamics under alternative attacker strategies. The study evaluates time-dependent trust, fatigue, and alarm-pressure trajectories, the distribution of times to behavioral collapse, and defender timing through Trust-Resilience-Agility-Mitigation (TRAM) metrics. The revised analysis also includes replication-sufficiency diagnostics, one-at-a-time sensitivity analysis, and threshold-robustness checks for the collapse criterion. The results show that false alarms with high perceived severity drive alarm pressure upward and degrade trust faster than nuisance-dominated campaigns, even when the total fake-alarm intensity is held constant across strategies. Collapse timing remains highly variable across stochastic realizations, and a non-negligible fraction of runs do not reach the collapse threshold within the simulation horizon. Sensitivity analysis indicates that the main qualitative ranking of attacker strategies is robust across most tested perturbations, with fatigue recovery and defender escalation emerging as particularly influential mechanisms. Overall, the findings support the view that alarm poisoning is a credible socio-technical attack vector and highlight the importance of rapid mitigation, robust alarm management, and human-centered defensive design in cyber-physical security systems. Full article
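The continuous-time Markov chain formulation described in the abstract is simulated with the Gillespie Stochastic Simulation Algorithm. The sketch below is a minimal, generic illustration of that algorithm applied to a toy one-variable birth-death model of pending fake alarms; the rates `LAM` and `MU` and the single-counter state are illustrative assumptions, not the paper's actual model.

```python
import random

def gillespie(state, propensities, transitions, t_max, seed=0):
    """Minimal Gillespie SSA for a continuous-time Markov chain.

    propensities: function state -> list of event rates
    transitions:  list of functions state -> next state (one per event)
    Returns the trajectory as a list of (time, state) pairs.
    """
    rng = random.Random(seed)
    t, traj = 0.0, [(0.0, state)]
    while t < t_max:
        rates = propensities(state)
        total = sum(rates)
        if total == 0.0:                  # absorbing state: nothing can fire
            break
        t += rng.expovariate(total)       # exponential waiting time
        r, acc = rng.uniform(0.0, total), 0.0
        for rate, apply in zip(rates, transitions):
            acc += rate                   # pick an event proportionally to its rate
            if r <= acc:
                state = apply(state)
                break
        traj.append((t, state))
    return traj

# Toy birth-death model of pending fake alarms: alarms arrive at rate
# LAM and are cleared at rate MU per pending alarm (illustrative values).
LAM, MU = 2.0, 1.0
traj = gillespie(
    state=0,
    propensities=lambda n: [LAM, MU * n],
    transitions=[lambda n: n + 1, lambda n: n - 1],
    t_max=50.0,
)
```

The same two-ingredient interface (a propensity function and a list of state-update functions) scales to richer states, e.g. tuples of trust, fatigue, and alarm-pressure levels.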
(This article belongs to the Special Issue Generative AI for Data Privacy and Anomaly Detection)

27 pages, 1264 KB  
Article
Synthetic Minority Oversampling for Imbalanced Time Series Classification Based on Path Signature
by Mohnad Abunada, Samir Brahim Belhaouari and Halima Bensmail
Appl. Sci. 2026, 16(9), 4451; https://doi.org/10.3390/app16094451 - 1 May 2026
Abstract
Imbalanced class distributions hinder time series classifiers by underrepresenting rare yet important events. We introduce Path Signature Synthetic Time-series Oversampling (PSSTO), a structure-preserving oversampling method that operates in path signature space to synthesize informative minority samples while pruning low-quality ones. Across 12 public datasets, PSSTO with a random forest improves classification over conventional resampling approaches on average. Pairwise Wilcoxon signed-rank tests against these approaches indicate statistically significant gains. Compared with time series-specific oversamplers, PSSTO with a random forest attains the best average F1, G-mean, and AUC, outperforming the strongest alternative. These results show that structure-preserving oversampling in signature space is an effective and broadly applicable remedy for imbalanced time-series classification. Full article
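For readers unfamiliar with the path signature, its depth-2 truncation can be computed in closed form for a piecewise-linear path from the standard iterated-integral (Chen) formula. The plain-Python sketch below is a generic illustration of that computation, not the authors' PSSTO implementation; the example L-shaped path is hypothetical.

```python
def signature_depth2(path):
    """Depth-2 path signature of a piecewise-linear path.

    path: sequence of points, each with d coordinates.
    Returns (level1, level2): level1[i] is the total increment S^i and
    level2[i][j] the iterated integral S^{ij}, via the piecewise-linear
    (Chen) formula.
    """
    d = len(path[0])
    level1 = [0.0] * d
    level2 = [[0.0] * d for _ in range(d)]
    running = [0.0] * d                      # increment accumulated so far
    for p, q in zip(path, path[1:]):
        dx = [qi - pi for pi, qi in zip(p, q)]
        for i in range(d):
            for j in range(d):
                # one segment's contribution: running_i * dx_j + dx_i * dx_j / 2
                level2[i][j] += running[i] * dx[j] + 0.5 * dx[i] * dx[j]
        for i in range(d):
            running[i] += dx[i]
            level1[i] += dx[i]
    return level1, level2

# Example: an L-shaped path in the plane. S^{12} - S^{21} equals twice the
# signed (Levy) area enclosed between the path and its chord.
sig1, sig2 = signature_depth2([(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)])
```

A useful sanity check is the shuffle identity S^{ij} + S^{ji} = S^i S^j, which holds exactly for this computation.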

16 pages, 647 KB  
Article
BMI and Prognostic Nutritional Index Are Independently and Positively Associated with Three Year Glycemic Change in Non-Diabetic Adults: A Community-Based Cohort Study
by Yuting Yu, Li Chen, Wei Zhang, Lihua Jiang, Chunmin Zhang, Xiaoying Ni, Jianguo Yu and Yonggen Jiang
Nutrients 2026, 18(9), 1459; https://doi.org/10.3390/nu18091459 - 1 May 2026
Abstract
Background/Objectives: Both adiposity and nutritional-inflammatory status influence glucose metabolism; however, their longitudinal associations with glycemic changes in non-diabetic populations remain unclear. We examined the independent, interactive, and joint associations of body mass index (BMI) and prognostic nutritional index (PNI) with the 3-year change in HbA1c (ΔHbA1c). PNI, a composite marker of serum albumin and peripheral lymphocyte count, reflects both protein nutritional status and systemic immune competence. We hypothesized that BMI and PNI would each independently predict ΔHbA1c and that their joint profiling would identify higher-risk subgroups. Methods: A total of 9414 non-diabetic adults from the Shanghai Suburban Adult Cohort were included. Participants with diabetes at baseline (defined as fasting plasma glucose ≥ 7.0 mmol/L, 2-h post-load glucose ≥ 11.1 mmol/L, HbA1c ≥ 6.5%, or self-reported physician diagnosis of diabetes or use of glucose-lowering medications) were excluded. BMI was measured, and PNI was calculated as serum albumin + 5 × lymphocyte count. ΔHbA1c was assessed over a 3-year period. Multivariable linear regression, interaction testing, and joint stratification were performed. Covariate selection was guided by prior biological plausibility, and model adequacy was evaluated using the Akaike Information Criterion (AIC). Results: Both BMI (β = 0.013% per kg/m2, 95% CI: 0.011–0.016, p < 0.001) and PNI (β = 0.002% per unit, 95% CI: 0.000–0.004, p = 0.019) were independently and positively associated with ΔHbA1c. No significant interaction was observed (p = 0.431). High BMI (≥24 kg/m2) was associated with glycemic worsening irrespective of PNI level (β ≈ 0.075%, p < 0.001). Among normal-weight individuals, higher PNI was associated with a modest increase in ΔHbA1c (β = 0.031%, p = 0.007). Conclusions: Although the absolute effect sizes were modest at the individual level, BMI was consistently and independently associated with glycemic deterioration; therefore, even small per-unit increases may translate into meaningful risk at the population level given the high prevalence of overweight and obesity. PNI showed a small positive association, suggesting that in relatively healthy populations a higher PNI may partly capture subtle pro-glycemic factors, such as low-grade inflammation or higher protein intake, rather than representing unambiguous nutritional benefit. The absence of interaction suggests that BMI and PNI act through largely independent pathways. These findings extend prior evidence by demonstrating that PNI provides modest additional glycemic information beyond BMI in non-diabetic community-dwelling adults, particularly among those of normal weight. Full article
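The PNI formula stated in the abstract (serum albumin + 5 × lymphocyte count) is straightforward to compute. The sketch below assumes the conventional units (albumin in g/L, lymphocyte count in 10^9/L), which the abstract does not state explicitly; the example values are hypothetical.

```python
def prognostic_nutritional_index(albumin_g_per_l, lymphocytes_1e9_per_l):
    """PNI as stated in the abstract: serum albumin + 5 x lymphocyte count.

    Unit convention (albumin in g/L, lymphocytes in 10^9/L) is assumed
    here; confirm against the paper's methods section before reuse.
    """
    return albumin_g_per_l + 5.0 * lymphocytes_1e9_per_l

# Hypothetical participant: albumin 42 g/L, lymphocytes 1.8 x 10^9/L
pni = prognostic_nutritional_index(42.0, 1.8)  # -> 51.0
```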

16 pages, 3675 KB  
Article
Performance of New Roche Cobas Pulse Glucose Meter Against Potential Interfering Substances and Hematocrit Variations
by Mokarrameh Pudineh Moarref, Wanda Black and Yu Chen
Diagnostics 2026, 16(9), 1383; https://doi.org/10.3390/diagnostics16091383 - 1 May 2026
Abstract
Background: Point-of-care (POC) glucometers are essential for rapid blood glucose monitoring but are subject to interference and hematocrit variations. This study evaluated the analytical performance of the new Cobas Pulse glucometer against the Accu-Chek Inform II meter in the presence of N-acetylcysteine (NAC, 0.32–2.5 mmol/L), ascorbic acid (0.28–2.84 mmol/L), D-galactose (5.5–27 mmol/L), hemolysis (0.5–5 g/L hemoglobin), icterus (200–1600 μmol/L bilirubin), lipemia (2.5–15 g/L Intralipid), and hematocrit variations (20–60%). Methods: Interference testing followed CLSI EP07 guidelines using three whole blood pools with low (2.0–2.7 mmol/L), medium (4.5–7.4 mmol/L), and high (16.3–23 mmol/L) glucose levels. Interferents were spiked into these whole blood pools. Glucose levels were measured in duplicate on two Pulse meters and two Inform II meters. The results were then assessed against international standards, e.g., the ISO 15197:2017 criteria (±15% or ±0.83 mmol/L). Results: Accu-Chek Inform II showed severe positive interference from galactose (up to 446.3%, p < 0.001), ascorbic acid (up to 98.8%, p = 0.002), and NAC (up to 61.4%, p = 0.001), exceeding ISO limits. Cobas Pulse demonstrated minimal interference (maximum biases: −3.7% for galactose, −4.4% for ascorbic acid, 7.7% for NAC, all p > 0.05). Both meters showed similar hematocrit-dependent bias (positive at 20–30%, negative at 50–60%) and acceptable performance for hemolysis, icterus (≤800 μmol/L), and lipemia. Conclusions: Compared to the Accu-Chek Inform II, the Cobas Pulse demonstrated greater resilience to interferences. Cobas Pulse meets strict accuracy standards (±10% for hospital use) with low interference, which makes it suitable for the care of critically ill patients. The Cobas Pulse is more dependable for POCT across various clinical situations, supporting its role in critical care. Full article
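The ISO 15197 accuracy criterion cited in the abstract (±15% or ±0.83 mmol/L) can be expressed as a simple per-reading check. In the sketch below, the 5.55 mmol/L cutoff separating the two branches follows the standard's usual low-glucose threshold and is an assumption here, since the abstract does not state it.

```python
def within_iso_15197(meter_mmol_l, reference_mmol_l, cutoff=5.55):
    """Check one reading against the ISO 15197 accuracy criterion cited in
    the abstract: within +/-0.83 mmol/L of reference below the low-glucose
    cutoff, or within +/-15% of reference otherwise. The 5.55 mmol/L cutoff
    is an assumption (the standard's usual threshold), not from the abstract.
    """
    error = meter_mmol_l - reference_mmol_l
    if reference_mmol_l < cutoff:
        return abs(error) <= 0.83
    return abs(error) <= 0.15 * reference_mmol_l

# A +446% galactose-driven bias at a 4.5 mmol/L reference clearly fails:
ok = within_iso_15197(4.5 * 5.463, 4.5)  # -> False
```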
(This article belongs to the Special Issue Recent Advances in Clinical Biochemistry, 2nd Edition)
