Search Results (2,105)

Search Parameters:
Keywords = false negatives

18 pages, 985 KB  
Article
Too Early to Tell? Balancing Diagnostic Accuracy of Newborn Screening for Propionic Acidemia Versus a Timely Referral
by Nils W. F. Meijer, Hidde H. Huidekoper, Klaas Koop, Sabine A. Fuchs, M. Rebecca Heiner Fokkema, Charlotte M. A. Lubout, Andrea B. Haijer-Schreuder, Wouter F. Visser, Rendelien K. Verschoof-Puite, Eugènie Dekkers, Annet M. Bosch, Rose E. Maase and Monique G. M. de Sain-van der Velden
Int. J. Neonatal Screen. 2026, 12(1), 1; https://doi.org/10.3390/ijns12010001 - 24 Dec 2025
Abstract
In the Netherlands, the newborn screening (NBS) program includes screening for propionic aciduria (PA) and methylmalonic aciduria (MMA). When initial screening reveals elevated C3 concentrations or abnormal ratios (C3/C2, C3/C16), a second-tier test measuring methylcitric acid (MCA) for PA and methylmalonic acid (MMAmb) for MMA is performed. While this two-tier approach reduces false positives effectively, it can delay referral from the NBS program and diagnosis of propionic aciduria. We describe four early-onset PA cases in which the current Dutch screening algorithm negatively impacted clinical outcomes, highlighting the need for expedited referral. We investigated different alternative screening strategies to identify the most effective approach for improving timeliness, while maintaining the high specificity of Dutch PA NBS. This revised approach prioritizes the evaluation of the C3/C2 ratio in first-tier screening. Specifically, samples with a C3/C2 ratio ≥ 0.75 should be referred directly for medical consultation and confirmatory testing. For all other samples with less pronounced biochemical abnormalities, the existing two-tier screening algorithm remains an appropriate NBS protocol. To position our approach internationally, a survey of European NBS programs was conducted to compare screening and referral protocols for PA across the region. Full article
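The revised referral strategy described in this abstract reduces to a simple threshold check on the first-tier C3/C2 ratio, with fallback to the existing second-tier MCA test. A minimal sketch; the function and parameter names are illustrative, not from the paper:

```python
def triage_pa(c3_c2_ratio, second_tier_mca_abnormal=None):
    """Sketch of the revised Dutch PA referral logic described above.

    A C3/C2 ratio >= 0.75 triggers direct referral for medical
    consultation and confirmatory testing; less pronounced
    abnormalities fall back to the existing two-tier algorithm
    (second-tier MCA measurement). Names are hypothetical.
    """
    if c3_c2_ratio >= 0.75:
        return "refer_directly"
    if second_tier_mca_abnormal is None:
        return "await_second_tier"
    return "refer" if second_tier_mca_abnormal else "negative"
```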

29 pages, 3643 KB  
Article
Optimizing Performance of Equipment Fleets Under Dynamic Operating Conditions: Generalizable Shift Detection and Multimodal LLM-Assisted State Labeling
by Bilal Chabane, Georges Abdul-Nour and Dragan Komljenovic
Sustainability 2026, 18(1), 132; https://doi.org/10.3390/su18010132 - 22 Dec 2025
Abstract
This paper presents OpS-EWMA-LLM (Operational State Shifts Detection using Exponential Weighted Moving Average and Labeling using Large Language Model), a hybrid framework that combines fleet-normalized statistical shift detection with LLM-assisted diagnostics to identify and interpret operational state changes across heterogeneous fleets. First, we introduce a residual-based EWMA control chart methodology that uses deviations of each component’s sensor reading from its fleet-wide expected value to detect anomalies. This statistical approach yields near-zero false negatives and flags incipient faults earlier than conventional methods, without requiring component-specific tuning. Second, we implement a pipeline that integrates an LLM with a retrieval-augmented generation (RAG) architecture. Through a three-phase prompting strategy, the LLM ingests time-series anomalies, domain knowledge, and contextual information to generate human-interpretable diagnostic insights. Finally, unlike existing approaches that treat anomaly detection and diagnosis as separate steps, we assign to each detected event a criticality label based on both the statistical score of the anomaly and the semantic score from the LLM analysis. These labels are stored in the OpS-Vector to extend the knowledge base of cases for future retrieval. We demonstrate the framework on SCADA data from a fleet of wind turbines: OpS-EWMA successfully identifies critical temperature deviations in various components that standard alarms missed, and the LLM (augmented with relevant documents) provides rationalized explanations for each anomaly. The framework demonstrated robust performance and outperformed baseline methods in a realistic zero-tuning deployment across thousands of heterogeneous equipment units operating under diverse conditions, without component-specific calibration. By fusing lightweight statistical process control with generative AI, the proposed solution offers a scalable, interpretable tool for condition monitoring and asset management in Industry 4.0/5.0 settings. Beyond its technical contributions, the outcome of this research is aligned with UN Sustainable Development Goals 7, 9, 12, and 13. Full article
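A residual-based EWMA control chart of the kind this abstract describes can be sketched in a few lines: smooth the deviation of each reading from the fleet-wide expectation and raise an alarm when the smoothed residual leaves its control band. The smoothing factor and control-limit multiplier below are conventional illustrative defaults, not the paper's values:

```python
import math

def ewma_residual_alarms(readings, fleet_expected, sigma, lam=0.2, L=3.0):
    """Residual-based EWMA control chart in the spirit of OpS-EWMA.

    readings: per-timestep sensor values for one component.
    fleet_expected: fleet-wide expected value at each timestep.
    sigma: standard deviation of the residuals under normal operation.
    Flags timesteps where the EWMA of the residual exceeds +/- L*sigma_z,
    using the steady-state EWMA standard deviation sigma*sqrt(lam/(2-lam)).
    """
    sigma_z = sigma * math.sqrt(lam / (2.0 - lam))
    z, alarms = 0.0, []
    for t, (x, mu) in enumerate(zip(readings, fleet_expected)):
        z = lam * (x - mu) + (1.0 - lam) * z  # exponentially weighted residual
        if abs(z) > L * sigma_z:
            alarms.append(t)
    return alarms
```

Because the chart is driven by residuals against the fleet expectation rather than raw values, the same default parameters can be reused across heterogeneous components, which is the zero-tuning property the abstract emphasizes.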
(This article belongs to the Section Energy Sustainability)

17 pages, 4244 KB  
Article
ToF-SIMS Reveals Metformin-Driven Restoration of Hepatic Lipid and Amino Acid Profiles in a Type 2 Diabetes Rat Model
by Magdalena E. Skalska, Michalina Kaźmierczak, Marcela Capcarova, Anna Kalafova, Klaudia Jaszcza and Dorota Wojtysiak
Int. J. Mol. Sci. 2026, 27(1), 105; https://doi.org/10.3390/ijms27010105 - 22 Dec 2025
Abstract
Diabetes mellitus profoundly disturbs hepatic metabolism by impairing lipid and amino acid homeostasis, yet spatially resolved molecular evidence of these alterations remains limited. This study employed Time-of-Flight Secondary Ion Mass Spectrometry (ToF-SIMS) to visualise and quantify metabolic remodelling in rat liver under diabetic conditions and following metformin treatment. Liver cryosections from lean controls (LEAN), diabetic rats (P1), and metformin-treated diabetic rats (P2) were analysed in the negative ion mode, and all spectra were normalised to total ion counts. One-way ANOVA with false discovery rate (FDR) correction identified 43 lipid-related and 20 amino acid-related ions with significant group differences. Diabetic livers exhibited a marked depletion of phospholipid- and fatty acid-related ions (e.g., m/z 241.04, 281.25, 536.38) accompanied by increased ceramide fragments (m/z 805–806), indicating lipotoxic remodelling and mitochondrial stress. Simultaneously, aromatic and neutral amino acids such as phenylalanine, tyrosine, and glutamine were reduced, while small acidic fragments were elevated, consistent with enhanced proteolysis and gluconeogenic flux. Metformin administration partially restored both lipid and amino acid profiles toward the control phenotype. Hierarchical clustering and spatial ion maps revealed distinct group separation and partial normalisation of hepatic molecular patterns. These results demonstrate that ToF-SIMS provides label-free, spatially resolved insights into diabetes-induced metabolic disturbances and metformin-driven hepatoprotection. Full article
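The Benjamini–Hochberg false discovery rate procedure used above to screen the lipid- and amino-acid-related ions can be implemented directly; a minimal sketch with illustrative p-values:

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up FDR procedure.

    Sort p-values, find the largest rank k with p_(k) <= k*alpha/m,
    and declare the k smallest p-values significant. Returns the
    indices (into the input list) of the significant tests.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank * alpha / m:
            k_max = rank
    return sorted(order[:k_max])
```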
(This article belongs to the Section Molecular Endocrinology and Metabolism)

15 pages, 761 KB  
Article
The Accuracy of Video-Assisted Thoracic Surgery Pleural Biopsy in Patients with Suspected Diffuse Pleural Mesothelioma: A Real-Life Study
by Ludovica Balsamo, Enrica Migliore, Eleonora Della Beffa, Luisa Delsedime, Paolo Olivo Lausi, Daniela Di Cuonzo, Filippo Lococo, Paraskevas Lyberis, Dario Mirabelli, Mauro Giulio Papotti, Enrico Ruffini and Francesco Guerrera
J. Clin. Med. 2026, 15(1), 42; https://doi.org/10.3390/jcm15010042 - 20 Dec 2025
Abstract
Background: The heritage of occupational and environmental asbestos exposure in Piedmont, Italy, has resulted in an enduring diffuse pleural mesothelioma (DPM) epidemic. Our study aimed to investigate the accuracy of pleural biopsy (PB) via thoracoscopy (or video-assisted thoracic surgery—VATS) and analyze the diagnostic path of patients who experienced an initial DPM misdiagnosis. Methods: Patients who underwent PB by VATS for suspected DPM from 2004 to 2013 were analyzed. The Registry of Malignant Mesothelioma (RMM) records were examined to cross-check incident cases and identify misdiagnosed DPM. The sensitivity and specificity of the initial PB assessment versus the final classification of cases by RMM were evaluated. Results: Data from 552 patients were analyzed, and DPM was diagnosed in 178 cases (32%). Sensitivity and specificity were 93% and 100%, respectively. The number of false-negative PBs was 14 (2%). Of those, 10 (71%) had an initial diagnosis of chronic pleuritis, 3 (28.5%) were initially classified as mesothelial proliferation, and 1 had reactive mesothelial proliferation. All of them reported a history of asbestos exposure, and the correct diagnosis was reached after a median of 160 days. One- and four-year survival rates were 52% and 10% in DPM PB-positive cases and 50% and 19% in false-negative cases. Conclusions: When strong clinical suspicion remains after a negative PB, repeat biopsy attempts should be considered, especially if a history of asbestos exposure is reported. In high-volume centers, the DPM misdiagnosis rate remains low, and future advancements in diagnostic technologies could further increase the accuracy and efficacy of histologic diagnosis. Full article
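The sensitivity and specificity figures reported above come directly from the 2x2 confusion table of initial biopsy result versus final RMM classification. A minimal sketch of the computation; the counts in the test below are round illustrative numbers, not the study's:

```python
def diagnostic_accuracy(tp, fn, fp, tn):
    """Sensitivity and specificity from a 2x2 confusion table.

    tp: true positives (DPM correctly diagnosed at initial biopsy)
    fn: false negatives (DPM initially missed)
    fp: false positives, tn: true negatives.
    """
    sensitivity = tp / (tp + fn)  # fraction of true cases detected
    specificity = tn / (tn + fp)  # fraction of non-cases correctly cleared
    return sensitivity, specificity
```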
(This article belongs to the Special Issue Thoracic Surgery Between Tradition and Innovations)

14 pages, 1256 KB  
Article
High-Accuracy Serodiagnosis of African Swine Fever Using P72 and P30-Based Lateral Flow Assays: A Validation Study with Field Samples in Thailand
by Nitipon Srionrod, Supphathat Wutthiwitthayaphong, Teera Nipakornpun and Sakchai Ruenphet
Vet. Sci. 2026, 13(1), 4; https://doi.org/10.3390/vetsci13010004 - 19 Dec 2025
Abstract
African Swine Fever (ASF) control is severely hampered by the reliance on slow, laboratory-bound diagnostics. While rapid, field-deployable lateral flow assays (LFAs) are urgently needed, the comparative performance of key single-antigen targets remains poorly characterized. This study aimed to develop and systematically evaluate the diagnostic performance of three in-house single-antigen LFAs targeting ASF virus P30, P54, and P72, using swine field samples from Thailand, including a panel of 143 quantitative polymerase chain reaction-negative swine serum samples. The performance of each LFA was compared against a commercial multi-antigen (P32/P62/P72) indirect ELISA, which served as the reference standard, classifying 64 samples as positive and 79 as negative. The P72-based LFA demonstrated perfect diagnostic performance (100% sensitivity, 100% specificity) and perfect agreement (κ = 1.0) with the enzyme-linked immunosorbent assay (ELISA). Similarly, the P30 LFA demonstrated high performance (100% sensitivity, 98.7% specificity) with ‘Almost Perfect’ agreement (κ = 0.9859). In contrast, the P54 LFA was unsuitable, achieving 100% sensitivity but unacceptably low specificity (88.6%) due to a high rate of false positives. Overall, the single-antigen P72 and P30 LFAs demonstrated excellent concordance with the multi-antigen ELISA, supporting their reliability for detecting antibodies against ASFV. Although these assays do not replace molecular methods for acute infection detection, they represent valuable complementary tools for serosurveillance. Full article
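The κ values quoted above are Cohen's kappa, which corrects raw agreement between two raters (here an LFA and the reference ELISA) for agreement expected by chance. A minimal sketch with illustrative labels:

```python
def cohens_kappa(a, b):
    """Cohen's kappa between two raters over the same samples.

    a, b: equal-length lists of labels (e.g. 1 = seropositive,
    0 = seronegative). kappa = (p_o - p_e) / (1 - p_e), where p_o is
    observed agreement and p_e is chance agreement from the raters'
    marginal label frequencies.
    """
    n = len(a)
    labels = sorted(set(a) | set(b), key=str)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    p_e = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1.0 - p_e)
```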
(This article belongs to the Section Veterinary Microbiology, Parasitology and Immunology)
21 pages, 782 KB  
Article
Research on Binary Decompilation Optimization Based on Fine-Tuned Large Language Models for Vulnerability Detection
by Yidan Wang, Deming Mao, Ye Han and Rui Tao
Electronics 2026, 15(1), 8; https://doi.org/10.3390/electronics15010008 - 19 Dec 2025
Abstract
The proliferation of binary vulnerabilities in the software supply chain has become a critical security challenge. Existing vulnerability detection approaches—including dynamic analysis, static analysis, and decompilation-assisted analysis—all suffer from limitations such as insufficient coverage, high false-positive and false-negative rates, or poor compatibility. Although decompilation technology can serve as a bridge connecting binary-code and source-code vulnerability detection tools, current schemes suffer from inadequate semantic restoration quality and lack of tool compatibility. To address these issues, this paper proposes LLMVulDecompiler, a binary decompilation model based on fine-tuned large language models designed to generate high-precision decompiled code that integrates directly with source-code static analysis tools. We construct a dedicated training and evaluation dataset that covers multiple compiler optimization levels (e.g., O0–O3) and a diverse set of program functionalities. We adopt a two-stage fine-tuning strategy that involves first building foundational decompilation capabilities, then enhancing vulnerability-specific features. Additionally, we design a low-cost inference pipeline and establish multi-dimensional evaluation criteria, including restoration similarity, compilation success rate, and functional correctness. Experimental results show that the model significantly outperforms baseline models in terms of average edit distance, compilation success rate, and black-box test pass rate on the HumanEval-C benchmark. In tests on 12 real-world CVE (Common Vulnerabilities and Exposures) instances, the approach achieved a detection accuracy of 91.7%, with substantially reduced false-positive and false-negative rates. This study demonstrates the effectiveness of specialized fine-tuning of large language models for binary decompilation and vulnerability detection, offering a new pathway for binary security analysis. Full article
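One of the evaluation criteria named above, average edit distance, is typically the Levenshtein distance between decompiled output and reference source. A minimal dynamic-programming sketch (the paper's exact normalization is not specified here):

```python
def edit_distance(a, b):
    """Levenshtein distance between two strings via the classic
    row-by-row dynamic program: the minimum number of single-character
    insertions, deletions, and substitutions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[len(b)]
```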

18 pages, 1515 KB  
Article
Botnet Node Detection Using Graph Learning
by Gizem Karyağdı and İlker Özçelik
Appl. Sci. 2026, 16(1), 24; https://doi.org/10.3390/app16010024 - 19 Dec 2025
Abstract
Botnets represent a persistent and significant threat to internet security. Many detection methods fail because they analyze isolated node data, neglecting the coordinated interactions of centrally managed bots. Graph-based methods, particularly Graph Neural Networks (GNNs), offer a promising solution. This study developed and compared four novel GNN models (HeteroGCN, HeteroGAT, HeteroSAGE, and HeteroGAE) for botnet detection. We constructed a heterogeneous graph from the TI-16 DNS-labeled dataset, capturing interactions between users and domains. Experimental results show our models achieve up to 95% accuracy. Specifically, HeteroSAGE and HeteroGAE significantly outperform other models, demonstrating superior F1-Scores and exceptionally high Recall. This high recall, indicating a low false-negative rate, is critical for effective anomaly detection. Conversely, the computationally expensive HeteroGAT model yielded poorer results and slower inference times, demonstrating that increased model complexity does not guarantee better performance. To our knowledge, this is the first study to successfully apply and compare heterogeneous GNNs for bot detection using DNS query data. Full article
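The intuition behind the HeteroSAGE model above is GraphSAGE-style mean aggregation: each node updates its representation by combining its own features with the mean of its neighbors' features, so coordinated bots that share neighbors end up with similar embeddings. A toy single-layer sketch in NumPy; the real models are heterogeneous (separate user and domain node types) and deeper, and all weights and shapes here are illustrative:

```python
import numpy as np

def sage_mean_layer(features, neighbors, w_self, w_neigh):
    """One GraphSAGE-style mean-aggregation layer.

    features: (n_nodes, d) node feature matrix.
    neighbors: list of neighbor-index lists, one per node.
    w_self, w_neigh: (d, d_out) weight matrices (hypothetical).
    Each node gets ReLU(x_v @ w_self + mean(x_neighbors) @ w_neigh).
    """
    out = []
    for v, nbrs in enumerate(neighbors):
        if nbrs:
            agg = features[list(nbrs)].mean(axis=0)
        else:
            agg = np.zeros(features.shape[1])
        out.append(np.maximum(features[v] @ w_self + agg @ w_neigh, 0.0))
    return np.stack(out)
```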
(This article belongs to the Section Computing and Artificial Intelligence)

29 pages, 4226 KB  
Article
Interpretable Assessment of Streetscape Quality Using Street-View Imagery and Satellite-Derived Environmental Indicators: Evidence from Tianjin, China
by Yankui Yuan, Fengliang Tang, Shengbei Zhou, Yuqiao Zhang, Xiaojuan Li, Sen Wang, Lin Wang and Qi Wang
Buildings 2026, 16(1), 1; https://doi.org/10.3390/buildings16010001 - 19 Dec 2025
Abstract
Amid accelerating climate change, intensifying urban heat island effects, and rising public demand for livable, walkable streets, there is an urgent practical need for interpretable and actionable evidence on streetscape quality. Yet, research on streetscape quality has often relied on single data sources and linear models, limiting insight into multidimensional perception; evidence from temperate monsoon cities remains scarce. Using Tianjin’s main urban area as a case study, we integrate street-view imagery with remote sensing imagery to characterize satellite-derived environmental indicators at the point scale and examine the following five perceptual outcomes: comfort, aesthetics, perceived greenness, summer heat perception, and willingness to linger. We develop a three-step interpretable assessment, as follows: Elastic Net logistic regression to establish directional and magnitude baselines; Generalized Additive Models with a logistic link to recover nonlinear patterns and threshold bands with Benjamini–Hochberg false discovery rate control and binned probability calibration; and Shapley additive explanations to provide parallel validation and global and local explanations. The results show that the Green View Index is consistently and positively associated with all five outcomes, whereas Spatial Balance is negative across the observed range. Sky View Factor and the Building Visibility Index display heterogeneous forms, including monotonic, U-shaped, and inverted-U patterns across outcomes; Normalized Difference Vegetation Index and Land Surface Temperature are likewise predominantly nonlinear with peak sensitivity in the midrange. In total, 54 of 55 smoothing terms remain significant after Benjamini–Hochberg false discovery rate correction. The summer heat perception outcome is highly imbalanced: 94.2% of samples are labeled positive. Overall calibration is good. On a standardized scale, we delineate optimal and risk intervals for key indicators and demonstrate the complementary explanatory value of street-view imagery and remote sensing imagery for people-centered perceptions. In Tianjin, a temperate monsoon megacity, the framework provides reproducible, actionable, design-relevant evidence to inform streetscape optimization and offers a template that can be adapted to other cities, subject to local calibration. Full article

11 pages, 1086 KB  
Article
An Algorithm for Rapid and Low-Cost Detection of Carbapenemases Directly from Positive Blood Cultures Using an Immunochromatographic Test
by Patricia del Carmen García, Pamela Rojas, Ana María Guzmán, Sofía Paz Torres and Aniela Wozniak
Antibiotics 2026, 15(1), 1; https://doi.org/10.3390/antibiotics15010001 - 19 Dec 2025
Abstract
Background/Objectives: Detection of carbapenemases (KPC, OXA-48, VIM, IMP, NDM) from blood cultures (BCs) by standard methods takes 48–72 h and includes BC seeding, susceptibility testing and carbapenemase detection. Automated qPCR panels provide results in 1 h but are very costly. We aimed to evaluate a low-cost and rapid immunochromatographic (IC) test directly from positive BCs using the reference method as a comparator. Methods: Ninety-one positive BCs from real-world patients and sixty-four simulated BCs were included. BC broth was treated with SDS and washed before analysis with the K.N.I.V.O. carbapenemase detection IC test. Discordant results were confirmed through the NG Carba-5 IC test and GeneXpert Carba-R qPCR test. Results: The test detected 100% of the 87 carbapenemase-producing BCs tested (sensitivity: 100% [CI95%: 95.8–100%]). However, 13 BCs generated false positive bands for NDM and/or OXA-48 (specificity: 80.8% [CI95%: 69.5–89.4%]). The positive and negative predictive values were 87.0% (CI95%: 80.4–91.6%) and 100% (CI95%: 93.5–100%). Analysis of BCs providing false positive results through both confirmatory tests showed that BCs were negative for these carbapenemases. Conclusions: This is the first evaluation of the K.N.I.V.O. IC test directly from positive BCs, with a pragmatic confirmation algorithm using a second IC test or qPCR in case of NDM or OXA-48, that addresses K.N.I.V.O.’s specificity gap. The main limitation of this work is that confirmatory testing was performed only in false positives. The implementation of the K.N.I.V.O. IC test would contribute to early carbapenemase detection in BCs and is an alternative for low-resource hospitals where qPCR panels are not available. Full article
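The positive and negative predictive values reported above follow from sensitivity, specificity, and prevalence via Bayes' rule. A minimal sketch; the inputs in the test are round illustrative numbers, not the study's:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """PPV and NPV from test characteristics and disease prevalence.

    Works with the expected population fractions in each confusion
    cell: PPV = TP / (TP + FP), NPV = TN / (TN + FN).
    """
    tp = sensitivity * prevalence
    fp = (1.0 - specificity) * (1.0 - prevalence)
    fn = (1.0 - sensitivity) * prevalence
    tn = specificity * (1.0 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)
```

Note that a perfectly sensitive test always yields NPV = 100%, matching the pattern in the abstract, while imperfect specificity drags PPV down as prevalence falls.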

26 pages, 1053 KB  
Article
FastTree-Guided Genetic Algorithm for Credit Scoring Feature Selection
by Rashed Bahlool, Nabil Hewahi and Youssef Harrath
Computers 2025, 14(12), 566; https://doi.org/10.3390/computers14120566 - 18 Dec 2025
Abstract
Feature selection is pivotal in enhancing the efficiency of credit scoring predictions, where misclassifications are critical because they can result in financial losses for lenders and exclusion of eligible borrowers. While traditional feature selection methods can improve accuracy and class separation, they often struggle to maintain consistent performance aligned with institutional preferences across datasets of varying size and imbalance. This study introduces a FastTree-Guided Genetic Algorithm (FT-GA) that combines gradient-boosted learning with evolutionary optimization to prioritize class separability and minimize false-risk exposure. In contrast to traditional approaches, FT-GA provides fine-grained search guidance by acknowledging that false positives and false negatives carry disproportionate consequences in high-stakes lending contexts. By embedding domain-specific weighting into its fitness function, FT-GA favors separability over raw accuracy, reflecting practical risk sensitivity in real credit decision settings. Experimental results show that FT-GA achieved similar or higher AUC values ranging from 76% to 92% while reducing the average feature set by 21% when compared with the strongest baseline techniques. It also demonstrated strong performance on small to moderately imbalanced datasets and more resilience on highly imbalanced ones. These findings indicate that FT-GA offers a risk-aware enhancement to automated credit assessment workflows, supporting lower operational risk for financial institutions while showing potential applicability to other high-stakes domains. Full article
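The key design idea above, a fitness function that weights false negatives and false positives asymmetrically while rewarding smaller feature subsets, can be sketched directly. The weight values and function shape are illustrative assumptions, not the paper's actual fitness:

```python
def risk_weighted_fitness(tp, fp, fn, tn, n_features, n_total,
                          w_fn=5.0, w_fp=2.0, w_size=0.1):
    """Sketch of a GA fitness in the spirit of FT-GA.

    False negatives (approved bad loans) are weighted w_fn, false
    positives (rejected good borrowers) w_fp, and larger feature
    subsets pay a size penalty. Higher is better. All weights are
    hypothetical domain-specific choices.
    """
    n = tp + fp + fn + tn
    error_cost = (w_fn * fn + w_fp * fp) / n
    size_penalty = w_size * n_features / n_total
    return 1.0 - error_cost - size_penalty
```

Under this shape, a candidate feature subset that trades false negatives for the same number of false positives scores strictly higher, which is the risk preference the abstract describes.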
(This article belongs to the Section AI-Driven Innovations)

20 pages, 2711 KB  
Article
Validation Testing of Continuous Laser Methane Monitoring at Operational Oil and Gas Production Facilities
by Caroline B. Alden, Doug Chipponeri, David Youngquist, Brad Krough, Amanda Makowiecki, David Wilson and Gregory B. Rieker
Atmosphere 2025, 16(12), 1409; https://doi.org/10.3390/atmos16121409 - 18 Dec 2025
Abstract
Methane emissions at oil and gas facilities can be measured in real time with continuous monitoring systems that alert operators of upset conditions, including fugitive emissions. We report on extensive operator field testing of a continuous laser monitoring system in ~year-long deployments at 46 oil and gas sites in two U.S. basins. The operator assessed periods of non-alerts with daily optical gas imaging sweeps to confirm emission status. Detection precision was 98% and false positive and negative rates were 3%. Quantification of challenge-controlled release tests at active oil and gas sites yielded a measured versus true emissions curve with slope = 1.2, R2 = 0.90. Repeatability test measurements of four production facilities with two different laser systems showed 33.9% average quantification agreement. Separate third-party blind controlled release testing at two state-of-the-art test facilities yielded 100% true positive rate (0 false negatives). Combining the third-party blind tests with field tests, emission rate quantification uncertainty was +/−41% across five orders of magnitude. These varied evaluation approaches validate the measurement system and operator integration of data for measurement and monitoring of upstream oil and gas emissions and demonstrate a test regime for vetting of monitoring and measurement technologies in active oil and gas operations. Full article
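The controlled-release summary above (slope = 1.2, R² = 0.90) is a measured-versus-true regression. A minimal sketch with a through-origin slope fit, on illustrative data; the paper does not state its exact regression form, so this is one common convention:

```python
def fit_through_origin(true_vals, measured):
    """Least-squares slope of measured vs. true values, forced through
    the origin, plus R^2 computed about the mean of the measurements.

    slope = sum(t*m) / sum(t*t); R^2 = 1 - SS_res / SS_tot.
    """
    sxy = sum(t * m for t, m in zip(true_vals, measured))
    sxx = sum(t * t for t in true_vals)
    slope = sxy / sxx
    mean_m = sum(measured) / len(measured)
    ss_res = sum((m - slope * t) ** 2 for t, m in zip(true_vals, measured))
    ss_tot = sum((m - mean_m) ** 2 for m in measured)
    return slope, 1.0 - ss_res / ss_tot
```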

10 pages, 791 KB  
Proceeding Paper
Data-Driven Approach for Asthma Classification: Ensemble Learning with Random Forest and XGBoost
by Bhavana Santosh Pansare, Anagha Deepak Kulkarni and Priyanka Prabhakar Pawar
Comput. Sci. Math. Forum 2025, 12(1), 3; https://doi.org/10.3390/cmsf2025012003 - 17 Dec 2025
Abstract
Across the world, asthma is a prominent and widespread respiratory disorder that has a substantial clinical and socioeconomic influence. The classification of asthma subtypes should be performed precisely and effectively, with objectives such as personalized treatments, improved rehabilitation outcomes, and preventing tragic exacerbations. Typical screening approaches are primarily based on spirometry measures, immunologic assessments, and individual clinical diagnoses, and they are commonly affected by limitations such as uncertainty, crossover disparities, and restricted generalizability among various groups of patients. This study utilizes machine learning (ML) methodologies as a Data-Driven Approach (DDA)-based framework for asthma classification to overcome the mentioned challenges. Classifiers such as Random Forest and XGBoost were methodically constructed and evaluated using the Asthma Disease Dataset from Kaggle, which consists of demographic data, lung function metrics (FEV1, FVC, FEV1/FVC ratio, and PEFR), and immunoglobulin E (IgE) biomarkers. A wide range of metrics such as accuracy, precision, recall, F1-score, receiver operating characteristic area under the curve (ROC-AUC), and average precision (AP) are used exhaustively to assess the performance of the model. The results indicate that although both models exhibit strong predictive ability, XGBoost achieves better classification performance, especially in recall and AP, minimizing the proportion of false negatives, a clinically noteworthy outcome. Feature interpretability analysis identifies the FEV1/FVC ratio, IgE levels, and PEFR as key indicators. These results emphasize the ability of ML-powered evaluation in advancing personalized healthcare and revolutionizing the clinical management of asthma. Full article
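ROC-AUC, one of the metrics listed above, has a useful rank-based identity: it equals the probability that a randomly chosen positive sample is scored above a randomly chosen negative one. A minimal sketch on illustrative scores:

```python
def roc_auc(labels, scores):
    """ROC-AUC via the Mann-Whitney rank identity.

    labels: 0/1 ground truth; scores: classifier scores.
    Counts positive-negative pairs where the positive outscores the
    negative, with ties counted as half a win.
    """
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```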

19 pages, 444 KB  
Article
Enhancing Cascade Object Detection Accuracy Using Correctors Based on High-Dimensional Feature Separation
by Andrey V. Kovalchuk, Andrey A. Lebedev, Olga V. Shemagina, Irina V. Nuidel, Vladimir G. Yakhno and Sergey V. Stasenko
Technologies 2025, 13(12), 593; https://doi.org/10.3390/technologies13120593 - 16 Dec 2025
Abstract
This study addresses the problem of correcting systematic errors in classical cascade object detectors under severe data scarcity and distribution shift. We focus on the widely used Viola–Jones framework enhanced with a modified Census transform and propose a modular “corrector” architecture that can be attached to an existing detector without retraining it. The key idea is to exploit the blessing of dimensionality: high-dimensional feature vectors constructed from multiple cascade stages are transformed by PCA and whitening into a space where simple linear Fisher discriminants can reliably separate rare error patterns from normal operation using only a few labeled examples. This study presents a novel algorithm designed to correct the outputs of object detectors constructed using the Viola–Jones framework enhanced with a modified census transform. The proposed method introduces several improvements addressing error correction and robustness in data-limited conditions. The approach involves image partitioning through a sliding window of fixed aspect ratio and a modified census transform in which pixel intensity is compared to the mean value within a rectangular neighborhood. Training samples for false negative and false positive correctors are selected using dual Intersection-over-Union (IoU) thresholds and probabilistic sampling of true positive and true negative fragments. Corrector models are trained based on the principles of high-dimensional separability within the paradigm of one- and few-shot learning, utilizing features derived from cascade stages of the detector. Decision boundaries are optimized using Fisher’s rule, with adaptive thresholding to guarantee zero false acceptance. Experimental results indicate that the proposed correction scheme enhances object detection accuracy by effectively compensating for classifier errors, particularly under conditions of scarce training data. 
On two railway image datasets with only about one thousand images each, the proposed correctors increase Precision from 0.36 to 0.65 on identifier detection while maintaining high Recall (0.98 → 0.94), and improve digit detection Recall from 0.94 to 0.98 with negligible loss in Precision (0.92 → 0.91). These results demonstrate that even under scarce training data, high-dimensional feature separation enables effective one-/few-shot error correction for cascade detectors with minimal computational overhead. Full article
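The modified census transform used by this detector compares each pixel of a neighborhood to the neighborhood mean rather than to the center pixel. A minimal NumPy sketch of that idea follows; the 3×3 window, bit ordering, and edge padding are illustrative assumptions, not the paper's exact rectangular-neighborhood configuration:

```python
import numpy as np

def mct(image, size=3):
    """Modified census transform: each pixel in a size x size
    neighborhood is compared to the neighborhood mean; the resulting
    comparison bits are packed into one integer code per pixel."""
    pad = size // 2
    padded = np.pad(image.astype(np.float64), pad, mode="edge")
    h, w = image.shape
    # One shifted view of the image per neighborhood position.
    views = np.stack([padded[dy:dy + h, dx:dx + w]
                      for dy in range(size) for dx in range(size)])
    mean = views.mean(axis=0)           # local neighborhood mean
    bits = (views > mean).astype(np.uint16)
    weights = 1 << np.arange(size * size, dtype=np.uint16)
    return (bits * weights[:, None, None]).sum(axis=0).astype(np.uint16)
```

With a 3×3 window each code uses 9 bits, so the transform maps intensities to values in [0, 511]; a perfectly flat region maps to code 0, since no pixel strictly exceeds the mean.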
(This article belongs to the Special Issue Image Analysis and Processing)

27 pages, 1148 KB  
Article
LLM-Assisted Financial Fraud Detection with Reinforcement Learning
by Ahmed Djalal Hacini, Mohamed Benabdelouahad, Ishak Abassi, Sohaib Houhou, Aissa Boulmerka and Nadir Farhi
Algorithms 2025, 18(12), 792; https://doi.org/10.3390/a18120792 - 15 Dec 2025
Abstract
Effective financial fraud detection requires systems that can interpret complex transaction semantics while dynamically adapting to asymmetric operational costs. We propose a hybrid framework in which a large language model (LLM) serves as an encoder, transforming heterogeneous transaction data into a unified embedding space. These embeddings define the state representation for a reinforcement learning (RL) agent, which acts as a fraud classifier optimized with business-aligned rewards that heavily penalize false negatives while controlling false positives. We evaluate the approach on two benchmark datasets—European Credit Card Fraud and PaySim—demonstrating that policy-gradient methods, particularly A2C, achieve high recall without sacrificing precision. Critically, our ablation study reveals that this hybrid architecture yields substantial performance gains on semantically rich transaction logs, whereas the advantage diminishes on mathematically compressed, anonymized features. Our results highlight the potential of coupling LLM-driven representations with RL policies for cost-sensitive and adaptive fraud detection. Full article
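The business-aligned reward described above amounts to an asymmetric cost matrix over the four classification outcomes. A minimal sketch is shown below; the specific penalty and reward magnitudes are illustrative assumptions, not the paper's calibrated operational costs:

```python
def fraud_reward(action: int, label: int,
                 fn_penalty: float = -10.0,  # missed fraud: heavily penalized
                 fp_penalty: float = -1.0,   # false alarm: mildly penalized
                 tp_reward: float = 5.0,     # correctly flagged fraud
                 tn_reward: float = 0.1) -> float:
    """Reward for a binary fraud classifier acting as an RL agent.
    action: 1 = flag as fraud, 0 = approve; label: 1 = fraud, 0 = legitimate."""
    if label == 1:
        return tp_reward if action == 1 else fn_penalty
    return fp_penalty if action == 1 else tn_reward
```

Because the false-negative penalty dominates the false-positive one, a policy-gradient agent maximizing expected reward is pushed toward high recall while still being discouraged from indiscriminate flagging.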
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

21 pages, 1740 KB  
Article
Exploring Hardware Vulnerabilities in Robotic Actuators: A Case of Man-in-the-Middle Attacks
by Raúl Jiménez Naharro, Fernando Gómez-Bravo and Rafael López de Ahumada Gutiérrez
Electronics 2025, 14(24), 4909; https://doi.org/10.3390/electronics14244909 - 14 Dec 2025
Abstract
One of the main vulnerabilities in robotic systems lies in the communication buses that enable low-level controllers to interact with the actuators responsible for the robot’s movements. In this context, hardware attacks represent a significant threat; however, the hardware version of the man-in-the-middle attack, implemented by Trojan hardware, has not yet been extensively studied. This article examines the impact of such threats on robotic control systems, focusing on vulnerabilities in an asynchronous communication bus used to transmit commands to a digital servomotor. To explore this, Trojan hardware was implemented on an FPGA device (XC7A100T, AMD: Santa Clara, CA, USA). Furthermore, the article proposes and implements detection methods to identify this type of attack, integrating them into an FPGA device as part of the actuator. The method is based on measuring the response time: the presence of a foreign module is revealed by an increase in this time. It was evaluated on an AX-12 servomotor (Robotis: Seoul, Republic of Korea) using the Dynamixel protocol. This approach has been validated through a series of experiments involving a large number of transmitted messages, resulting in a high rate of true positives and a low rate of false negatives. The main conclusion is that response time can be used to detect foreign modules in the system, even if the module is kept waiting to attack, provided that the servomotors exhibit low variation in their latency. Full article
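The timing-based detection principle described above can be sketched as a threshold test against clean-baseline latency statistics. The function name, the k-sigma decision rule, and the microsecond units below are assumptions for illustration, not the article's FPGA implementation:

```python
from statistics import mean, stdev

def detect_interposer(latencies_us, baseline_us, k=4.0):
    """Flag a possible man-in-the-middle interposer on the bus when the
    observed mean response latency exceeds the clean baseline mean by
    more than k standard deviations. This only works when the
    servomotor's own latency variation is low, matching the article's
    stated condition for the method to apply."""
    base_mu, base_sigma = mean(baseline_us), stdev(baseline_us)
    observed = mean(latencies_us)
    threshold = base_mu + k * base_sigma
    return observed > threshold, observed, threshold
```

An inserted Trojan module adds propagation and processing delay on every message, so even a dormant interposer shifts the observed mean above the threshold while normal jitter stays below it.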
(This article belongs to the Section Circuit and Signal Processing)
