
Search Results (64)

Search Parameters:
Keywords = confidence amplification

25 pages, 2457 KB  
Article
Adaptive Label Reweighting via Boundary-Aware Meta Learning for Long-Tail Legal Element Recognition
by Kun Han, Chengcheng Han and Pengcheng Zhao
Symmetry 2026, 18(4), 664; https://doi.org/10.3390/sym18040664 - 16 Apr 2026
Viewed by 91
Abstract
Legal element recognition, which identifies discrete factual elements in Chinese court judgments to support judicial analysis and case retrieval, faces a severe long-tail challenge: head-to-tail label-frequency ratios exceed 100:1, and over 60% of sentences carry no label, starving rare elements of training signal. Static reweighting methods assign fixed weights prior to training and cannot respond to the model’s evolving confidence; sample-level meta-learning couples all co-occurring label gradients to a single scalar, preventing independent tail-label amplification. We propose BML-Trans, a boundary-aware meta-learning framework that addresses both limitations. A label-wise meta-weighting mechanism maintains per-label gradient weights updated via bilevel hypergradient descent, decoupling tail-label amplification from co-occurring head labels. A boundary-aware meta-set concentrates calibration signal on high-uncertainty, tail-triggering sentences rather than on easy negatives, and a lightweight Multi-Scale Adapter sharpens the warm-up probability estimates on which boundary selection depends. Concretely, BML-Trans achieves an average Avg-F1 of 82.5% on CAIL2019 across the labor, divorce, and loan domains, outperforming the strongest baseline by 1.2 percentage points overall and by up to 5.7 percentage points on tail-label Macro-F1, at only 14% additional training cost. Ablation confirms a cascade dependency among the three components, establishing that the gains are structural rather than incidental to threshold selection or initialization. Full article
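The label-wise reweighting idea at the core of this abstract can be illustrated with a short sketch. This is a toy, pure-Python illustration of per-label loss weights (the function name, probabilities, and the manual weight choice are invented here; it is not the paper's bilevel hypergradient update):

```python
import math

def weighted_bce(probs, targets, label_weights):
    """Per-label weighted binary cross-entropy, averaged over samples.

    probs and targets are lists of per-sample rows; label_weights holds one
    weight per label, so a rare (tail) label's loss contribution can be
    amplified independently of the head labels it co-occurs with.
    """
    n, num_labels = len(probs), len(label_weights)
    eps = 1e-12
    total = 0.0
    for j in range(num_labels):
        # Mean BCE for label j across all samples.
        col = sum(
            -(targets[i][j] * math.log(probs[i][j] + eps)
              + (1 - targets[i][j]) * math.log(1 - probs[i][j] + eps))
            for i in range(n)
        ) / n
        total += label_weights[j] * col
    return total / num_labels

# Label 0 is a well-fit head label; label 1 is a poorly fit tail label.
probs   = [[0.9, 0.2], [0.8, 0.1]]
targets = [[1.0, 1.0], [1.0, 0.0]]
uniform = weighted_bce(probs, targets, [1.0, 1.0])
tail_up = weighted_bce(probs, targets, [1.0, 3.0])  # tail label up-weighted
```

Up-weighting only the tail label raises the total loss through that label's term alone, which is the decoupling the abstract contrasts with sample-level (single-scalar) reweighting.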
18 pages, 1145 KB  
Article
Genetic Associations of Parkinson’s Disease Clinical, Pathological, and Data-Driven Subtypes
by Ahmed Negida, Moaz Elsayed Abouelmagd, Belal Mohamed Hamed, Yousef Hawas, Aya Dziri, Yasmin Negida, Brian D. Berman and Matthew J. Barrett
Genes 2026, 17(4), 449; https://doi.org/10.3390/genes17040449 - 13 Apr 2026
Viewed by 384
Abstract
Background: Parkinson’s disease (PD) is clinically heterogeneous, yet the genetic architecture underlying this heterogeneity remains incompletely understood. We examined the genetic correlates of four complementary PD subtyping frameworks: the clinical motor subtype (tremor-dominant [TD] vs. postural instability/gait difficulty [PIGD]), alpha-synuclein seed amplification assay status (SAA+ vs. SAA−), the pathological subtype (brain-first vs. body-first, based on the presence of REM sleep behavior disorder), and the data-driven subtype (diffuse malignant [DM] vs. mild-motor predominant [MMP] vs. intermediate [IM]). Methods: We analyzed 1390 PD patients from the Parkinson’s Progression Markers Initiative (PPMI) with genotypes available for seven PD-associated genes (LRRK2, GBA1, SNCA, PRKN, PINK1, PARK7, VPS35), including specific variant resolutions (LRRK2 G2019S, R1441G/C/H; GBA1 N409S, severe variants; SNCA A53T), and APOE (ε2/ε3/ε4 alleles). Genetic variant frequencies were compared across subtypes using chi-square or Fisher’s exact tests with the Benjamini–Hochberg false discovery rate (FDR) correction. Effect sizes were quantified using Cramér’s V. Multivariable logistic regression estimated adjusted odds ratios with Wald-based 95% confidence intervals. Results: Among genotyped PD patients, LRRK2 carriers constituted 13.7% (190/1390; 170 G2019S, 18 R1441G/C/H), GBA1 8.6% (119/1390; 96 N409S, 23 severe), and SNCA 2.0% (28/1390; all A53T). APOE ε4 carriers comprised 23.4% (323/1380). SAA-negative patients were markedly enriched for LRRK2 variants (37.1% vs. 10.2%, p = 3.7 × 10⁻¹⁹, q < 0.001, V = 0.25), specifically G2019S (28.5% vs. 9.6%, p = 4.9 × 10⁻¹¹, q < 0.001) and R1441G/C/H (7.9% vs. 0.5%, p = 2.7 × 10⁻¹², q < 0.001). Body-first PD was enriched for GBA1 carriers (12.3% vs. 6.7%, p = 0.004, q = 0.021) and had fewer LRRK2 carriers (7.9% vs. 15.0%, p = 0.002, q = 0.013). The DM subtype had the highest GBA1 frequency (14.0% vs. MMP 5.9%, p < 0.001, q = 0.003). After FDR correction, 10 out of 48 univariate tests remained significant. Clinical subtypes (TD vs. PIGD) showed only nominal LRRK2 differences that did not survive FDR correction. The APOE genotype did not differ across any framework. Conclusions: PD subtypes defined by alpha-synuclein pathology (SAA), pathological onset pattern (brain-first/body-first), and data-driven classification (DM/MMP/IM) show distinct genetic profiles that survive multiple-comparison correction. LRRK2 variants strongly associate with SAA negativity (V = 0.25); GBA1 variants associate with body-first onset and the diffuse malignant subtype. Full article
(This article belongs to the Special Issue Utilizing Multi-Omics to Investigate Neurodegenerative Disorders)

33 pages, 6529 KB  
Article
Probabilistic Orchestrator for Indeterministic Multi-Agent Systems in Real-Time Environments
by Arkady Bovshover, Andrei Kojukhov and Ilya Levin
Algorithms 2026, 19(4), 261; https://doi.org/10.3390/a19040261 - 29 Mar 2026
Viewed by 354
Abstract
Multi-agent perception systems must operate under fundamental asymmetries: some agents provide fast but unreliable observations, while others deliver higher-quality evidence with delay and uncertain correspondence. Traditional deterministic orchestration and rule-based fusion struggle to manage these trade-offs, often producing brittle or unstable behavior. We introduce a probabilistic orchestration framework that treats coordination as an epistemic generation problem—constructing and updating belief states under uncertainty—rather than a selection problem. Instead of committing to a single agent’s output, the orchestrator constructs a belief state that explicitly represents uncertainty, evidential provenance, and temporal relevance. Decisions are produced through latency-aware, association-weighted fusion, and uncertainty itself becomes a first-class signal governing action, deferral, and learning. Crucially, the orchestrator enables controlled teacher–student adaptation: high-confidence, well-associated stationary observations are gated into a feedback loop that improves ego perception over time while mitigating error amplification. We demonstrate the approach on an infrastructure-assisted dual-camera obstacle-recognition task. Experimental results show improved robustness to distance, occlusion, and delayed evidence compared to ego-only and deterministic orchestration baselines. By operationalizing orchestration as epistemic generation, this work provides a unifying framework for robust decision-making and safe adaptation in multi-agent systems, with implications that extend beyond perception to agentic and generative AI architectures. Full article
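The latency-aware, association-weighted fusion this abstract describes can be sketched as a confidence-times-recency weighted average. This is a minimal illustration under assumed conventions (the `fuse` function, field names, and decay constant are inventions here, not the authors' implementation):

```python
import math

def fuse(estimates, now, decay=0.5):
    """Fuse agents' scalar estimates into a single belief value.

    Each estimate is weighted by its reported confidence times an
    exponential decay in the age of its evidence, so a high-quality but
    delayed observation can still outweigh a fresh, unreliable one.
    """
    num = den = 0.0
    for e in estimates:
        w = e["confidence"] * math.exp(-decay * (now - e["timestamp"]))
        num += w * e["value"]
        den += w
    return num / den

# A fast but unreliable agent vs. a slower, more confident one.
fused = fuse(
    [{"value": 10.0, "confidence": 0.4, "timestamp": 10.0},
     {"value": 12.0, "confidence": 0.9, "timestamp": 8.0}],
    now=10.0,
)
```

With these numbers, the delayed agent's weight is 0.9·e⁻¹ ≈ 0.33 against 0.4 for the fresh one, so the fused value lands between the two estimates rather than committing to a single agent's output.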

64 pages, 8530 KB  
Review
Smart Medical Image Processing System Based on Explainable and Generative Artificial Intelligence: A Comprehensive Review
by Cosmin George Nicolăescu, Florentina Magda Enescu, Alin Gheorghiță Mazăre, Nicu Bizon and Cristian Toma
Algorithms 2026, 19(4), 244; https://doi.org/10.3390/a19040244 - 24 Mar 2026
Viewed by 446
Abstract
In recent years, the integration of advanced methods in medical imaging has become a major topic of interest due to its potential to enhance diagnostic accuracy, improve clinical efficiency, and increase specialists’ confidence in Artificial Intelligence (AI)-based decision-making. This paper explores the synthesis of Explainable AI (XAI) and Generative AI (GAI) in medical imaging, highlighting the advantages and challenges of these emerging technologies. The objective of this paper is to explore how the combined use of XAI and GAI contributes both to interpretability and to diagnostic accuracy. This research represents a systematic literature review conducted in accordance with PRISMA 2020, based on searches carried out in the PubMed, Scopus, IEEE Xplore, MDPI and ScienceDirect databases. Thus, a comprehensive overview of the integration of XAI and GAI in medical imaging is presented, based on recent studies and validated clinical applications. The advantages of combining transparency and data amplification in diagnostic models are highlighted, demonstrating their complementary roles in improving diagnosis using medical imaging. Ongoing challenges in clinical adoption are also emphasised, including interpretability and the need for validated assessment metrics. Beyond technological benefits, the paper also underlines the importance of ethical and legal considerations in the use of XAI and GAI in medical imaging. Based on the detailed analysis of the investigated studies, the paper also proposes a visual and architectural system concept intended for medical imaging, oriented towards research into the development of a unified system capable of detecting multiple types of pathologies. This research provides a detailed perspective on how XAI and GAI can revolutionise medical imaging by optimising data interpretation, enhancing human-AI collaboration, and increasing patient safety. Full article
(This article belongs to the Special Issue Machine Learning and Deep Learning in Medical Imaging Diagnostics)

22 pages, 3785 KB  
Article
Determination and Analysis of Martian Height Anomalies Using GMM-3 and JGMRO_120D Gravity Field Models
by Dongfang Zhao, Houpu Li and Shaofeng Bian
Appl. Sci. 2026, 16(6), 2982; https://doi.org/10.3390/app16062982 - 19 Mar 2026
Viewed by 285
Abstract
Height anomaly, defined as the separation between the quasi-geoid and the reference ellipsoid, is fundamental to quasi-geoid refinement. While the Goddard Mars Model-3 (GMM-3) developed by NASA’s Goddard Space Flight Center (GSFC) and the JPL Mars gravity field MRO120D (JGMRO_120D) model developed by NASA’s Jet Propulsion Laboratory (JPL) stand as two representative Martian gravity field models, the systematic differences between them and their associated physical implications remain insufficiently quantified. This study establishes a validated computational framework for Martian height anomaly determination using updated physical parameters and spherical harmonic expansions. Validation against terrestrial datasets confirms high reliability (standard deviation: 0.0695 m relative to International Centre for Global Earth Models (ICGEM)), ensuring confidence in subsequent analysis. Our analysis reveals three critical findings: (1) Systematic latitudinal biases between GMM-3 and JGMRO_120D exhibit a monotonic gradient from −1.3 m near the equator to +3.9 m at the North Pole, suggesting differential parameterization of polar mass loading or tidal models between the two centers. (2) Polar clustering of uncertainties and outliers exceeding the 95th percentile (>7 m) concentrate non-randomly at latitudes >60°, which is attributed to sparse satellite tracking and seasonal ice cap modeling limitations. (3) There is error amplification in lowland terrains, where relative errors exceed 60% in flat regions (near-zero anomalies), posing critical risks for precision landing missions. While global consistency between models is high (R² = 0.9999), the identified discrepancies provide new constraints on Mars’s geophysical models and essential guidance for future gravity field improvements and mission planning. Full article
(This article belongs to the Section Earth Sciences)

30 pages, 1066 KB  
Article
Socio-Cognitive Dynamics in Sustainable Water Product Markets: A Constructivist Grounded Theory Study of Korea’s Bottled and Purified Water Industries
by Dong Hawn Kim, Jeong-Eun Park and Sungho Lee
Sustainability 2026, 18(6), 3038; https://doi.org/10.3390/su18063038 - 19 Mar 2026
Viewed by 327
Abstract
This study employs a constructivist grounded theory approach based on 69 in-depth interviews conducted between March 2022 and December 2023 to examine socio-cognitive dynamics in Korea’s bottled water and household water purifier markets. The study addresses a gap in prior research by explaining how product meanings and stakeholder strategies co-evolve across adjacent “safe-water” markets under regulatory and sustainability pressures. Drawing on qualitative data from 69 stakeholders, including producers (n = 30), consumers (n = 19), and institutional experts (n = 20), we analyze how distrust, risk perception, and health consciousness reshape conceptual systems and market strategies. These shifts drive innovation across markets, including new technologies, service models, and branding strategies. The findings show that socio-cognitive stabilization arises through iterative interactions among institutional shocks, producer reinterpretation, and consumer adaptation. In the bottled water market, the meanings of “natural purity” became materially embedded in packaging, mineral labeling, and brand narratives. In the purifier sector, “technological reliability” was institutionalized through service-based maintenance systems and visible quality control technologies. These processes developed within asymmetric communicative environments shaped by corporate branding capacity and media amplification. This study refines socio-cognitive market theory by specifying boundary conditions under institutional distrust in developed economies. Although the Republic of Korea possesses advanced drinking water infrastructure comparable to that of other developed economies, public confidence in tap water has periodically weakened following highly salient contamination incidents and regulatory transitions. This paradox provides a theoretically informative context for examining how product meanings and stakeholder behaviors mutually adapt over time. Although environmental impact metrics were not directly measured, the findings suggest that sustainability policies must address socio-cognitive trust dynamics alongside regulatory instruments such as plastic levies, certification schemes, and transparent risk communication. Full article
(This article belongs to the Special Issue Strategies for Sustainable Soil, Water and Environmental Management)

34 pages, 1728 KB  
Article
Time Left to Critical Climate Feedback/Loops: Annual Solar Geoengineering-PLUS, Pathways to Planetary Self-Cooling
by Alec Feinberg
Climate 2026, 14(2), 37; https://doi.org/10.3390/cli14020037 - 1 Feb 2026
Viewed by 946
Abstract
Global warming (GW) contributions from feedbacks and feedback loops are projected to rise from ≈54% (loops: 29%) in 2024 to ≈71% (loops: 50%) under faltering RCP pathways without Solar Geoengineering (SG) by about 2100. A critical threshold, RCP_Critical, defined as the point at which feedback loops account for more than half of GW, is projected to occur between 2075 and 2125. Beyond this point, reversing warming becomes severely constrained, and climate tipping points become more likely. From these trends, an average mitigation difficulty and cost increase rate (MDCR) of ≈1.33–1.5% per year is estimated. By 2100, absent mitigation, the effort required to offset global warming would roughly double relative to today, approaching an unsustainable mitigation critical threshold. Current feedback levels may already be driving nonlinear warming behavior. These diagnostic estimates align with three key indicators: a minimum-feedback baseline from 1870, an equilibrium climate sensitivity (ECS) range of 3.1 °C–4.3 °C (potentially reached by ≈2082), and consistency with IPCC AR6 confidence bounds. In response, this study proposes Annual Solar Geoengineering-PLUS pathways (ASG+Ps) as supplemental measures. These include Earth Brightening, targeted Arctic Stratospheric Aerosol Injection (SAI), and feasible L1 Space Sunshade systems designed to reduce feedback amplification and extend mitigation timelines. The “PLUS” component refers to the use of increased mitigation levels with a focus on high-amplification regions, particularly the Arctic and the tropics, to help reverse local feedbacks and promote negative feedback loops. These moderate ASG+P pathways directly address AR6 concerns while avoiding many governance challenges of full-scale SG. ASG+Ps are less controversial and provide ≈14× stronger cooling potential per W m⁻² than Carbon Dioxide Removal (CDR), while allowing variable regional targeting. Meanwhile, RCP2.6 has already been missed, placing RCP4.5 and RCP6 at risk. In 2024, atmospheric CO2 rose by ≈23 Gt (≈3 ppm), while forest tree losses exceeded afforestation gains by 2×, yielding a 2 GtCO2 sink loss, further diminishing CDR’s effectiveness. Declines in planetary albedo since 1998 continue to amplify warming. Urbanization accounts for roughly 13% of total surface GW, affecting 60% of the population, underscoring the mitigation potential of urban Earth Brightening. New results here also show major Space Sunshading area reductions, ≈32× smaller than prior flawed estimates (detailed here) and ≈1600× smaller under the ASG+P method, substantially improving feasibility and underscoring the mitigation role space agencies need to play. A coordinated global ASG+P strategy, supported by IPCC working groups and by space agencies and companies such as NASA and SpaceX, is needed to provide a critical supplemental pathway for climate stabilization. Given the shrinking intervention window, rising MDCR, and the escalating risks to civilization, prioritizing timely work in this area is essential; the investment is minor compared to the trillions in climate financial damages that could be avoided. Full article

22 pages, 510 KB  
Review
Diagnostic Accuracy of Multiplex NAAT/PCR and Culture Against Salmonella spp.: A Comparison of Meta-Analytical Methods
by Xanthoula Rousou, Luis Furuya-Kanamori, Eleftherios Meletis, Olympia Lioupi, Nikolaos Solomakos, Polychronis Kostoulas and Suhail A. R. Doi
Pathogens 2026, 15(1), 45; https://doi.org/10.3390/pathogens15010045 - 31 Dec 2025
Viewed by 940
Abstract
Background: Non-typhoidal (NT) Salmonella spp. constitutes a major cause of foodborne illness. Culture is the gold standard, but it is time consuming, whereas multiplex nucleic acid amplification tests (NAATs)/Polymerase Chain Reaction (PCR) offer faster detection with variable reported performance. Objectives: To compare the diagnostic accuracy of multiplex NAAT/PCR and culture for Salmonella spp. using various statistical models with or without a gold standard assumption. Methods: A systematic search (PubMed, Web of Science, Scopus; up to April 2024) identified 44 studies (55 comparisons). Diagnostic performance was evaluated using the frequentist bivariate model (BM) and Split Component Synthesis (SCS) method, as well as the Bayesian bivariate model (BBM) and hierarchical summary ROC (BHSROC). Results: Across models, multiplex NAAT/PCR demonstrated high specificity (>98%) but model-dependent variability in sensitivity (85.5–94.8%), with consistently substantial between-study heterogeneity and threshold variation. The BM and BBM yielded higher sensitivity estimates with narrower, non-overlapping confidence intervals, while the SCS and BHSROC models, which are more robust to threshold differences, produced more conservative estimates with wider uncertainty. In Bayesian latent class analyses, culture remained highly accurate (Se: 97.17%, 95% CrI: 70.3–99.99; Sp: 96.06%, 95% CrI: 78.9–99.99), but with wide credible intervals indicating variation between studies, perhaps due to the different protocols used. Conclusion: Model choice affects inferred diagnostic accuracy, particularly when high heterogeneity is present. Both multiplex NAAT/PCR and culture showed high accuracy; hence, a combination of the two tests could optimise rapid diagnosis and treatment. Future research should include cost-effectiveness and decision analysis to update the diagnostic algorithms. Full article
(This article belongs to the Special Issue Diagnosis, Immunopathogenesis and Control of Bacterial Infections)

11 pages, 318 KB  
Article
Neonatal Screening for Congenital Adrenal Hyperplasia in Guangzhou: 7 Years of Experience
by Xuefang Jia, Ting Xie, Xiang Jiang, Fang Tang, Minyi Tan, Qianyu Chen, Sichi Liu, Yonglan Huang and Li Tao
Int. J. Neonatal Screen. 2025, 11(4), 116; https://doi.org/10.3390/ijns11040116 - 17 Dec 2025
Viewed by 900
Abstract
This study was designed to assess the effectiveness of neonatal congenital adrenal hyperplasia (CAH) screening in Guangzhou, China. A total of 818,417 newborns were screened for CAH by measuring 17-hydroxyprogesterone (17-OHP) concentrations. Cut-off values were stratified based on gestational age (GA) and the timing of sample collection. Neonates with initial positive results (17-OHP ≥ cut-off value) were recalled for a second dried blood spot sample to reassess 17-OHP levels. Confirmatory testing involved biochemical analyses, Sanger sequencing, and multiplex ligation-dependent probe amplification of the CYP21A2 gene. From 2018 to 2024, a total of 40 patients with classical 21-hydroxylase deficiency were identified, including 28 cases (70%) of the salt-wasting form and 12 cases (30%) of the simple virilizing form. The overall incidence of CAH was 1 in 20,653 (95% confidence interval: 1:34,928, 1:14,661). No statistically significant differences in prevalence were observed between sexes or between preterm and full-term infants (p > 0.05). 17-OHP concentrations are influenced by GA and the timing of sample collection. The screening efficiency for CAH could be improved by adopting a multitiered cut-off value system adjusted for GA and collection time. Full article
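The multitiered cut-off logic this abstract recommends can be sketched as a simple lookup stratified by gestational age and collection time. The threshold values below are illustrative placeholders, not the cut-offs used in the Guangzhou program; only the stratification-and-recall logic mirrors the approach described:

```python
def cah_cutoff(gestational_age_weeks, collection_hours):
    """Return a 17-OHP screening cut-off (nmol/L) stratified by
    gestational age (GA) and sample-collection time.

    Preterm infants and early-collected samples get higher cut-offs,
    reflecting the physiologically elevated 17-OHP in those groups.
    All numeric thresholds here are invented for illustration.
    """
    preterm = gestational_age_weeks < 37
    early = collection_hours < 72
    if preterm:
        return 90.0 if early else 60.0
    return 45.0 if early else 30.0

def needs_recall(ohp_17, gestational_age_weeks, collection_hours):
    """Initial screen is positive when 17-OHP >= the stratified cut-off."""
    return ohp_17 >= cah_cutoff(gestational_age_weeks, collection_hours)

# The same 17-OHP value triggers recall for a term infant but not a
# preterm one, which is the point of stratifying the cut-off.
term_recall = needs_recall(50.0, gestational_age_weeks=40, collection_hours=48)
preterm_recall = needs_recall(50.0, gestational_age_weeks=34, collection_hours=48)
```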

34 pages, 4065 KB  
Article
The Virality of TikTok and New Media in Disrupting and Overturning the Election Cancellation Paradigm in Romania
by Andreea Nistor and Eduard Zadobrischi
Adm. Sci. 2025, 15(11), 448; https://doi.org/10.3390/admsci15110448 - 17 Nov 2025
Viewed by 3521
Abstract
This study uses natural language processing (NLP) techniques to analyze the political discourse of the surprise presidential candidate, focusing on linguistic patterns, sentiment distribution, and recurring themes. This study addresses the problem of how TikTok virality and algorithmic amplification mechanisms can influence electoral outcomes in Romania, analyzing whether heuristic boosting strategies can distort traditional political paradigms. The text corpus included over 3915 words extracted from the candidate’s speeches, with the most frequent terms being “sovereignty” (271 occurrences), “democracy” (164 occurrences), and “freedom” (80 occurrences). The analysis revealed that 57.8% of the content was neutral, 10% conveyed positive sentiment, and negative sentiment was absent. A word frequency analysis highlighted the candidate’s strategic emphasis on concepts related to national identity and participatory democracy. Sentiment analysis revealed an intentional use of neutral language to maintain balance, with occasional positive terms maintaining confidence and optimism among voters. Full article
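The word-frequency side of this analysis is straightforward to reproduce in a few lines. A minimal sketch with the Python standard library (the tiny corpus below is invented, not the candidate's speeches):

```python
from collections import Counter
import re

def top_terms(corpus, k=3):
    """Lowercase, tokenize on word characters, and return the k most
    frequent terms: the same kind of frequency analysis applied to the
    speech corpus in the study."""
    tokens = re.findall(r"\w+", corpus.lower())
    return Counter(tokens).most_common(k)

# Tiny illustrative corpus echoing the study's most frequent terms.
corpus = ("sovereignty and democracy; sovereignty means freedom, "
          "democracy, sovereignty")
top3 = top_terms(corpus)
```

In practice one would also strip stop words ("and", "means") before counting, so that content terms like those reported in the abstract dominate the ranking.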

20 pages, 1837 KB  
Article
Unlabeled Insight, Labeled Boost: Contrastive Learning and Class-Adaptive Pseudo-Labeling for Semi-Supervised Medical Image Classification
by Jing Yang, Mingliang Chen, Qinhao Jia and Shuxian Liu
Entropy 2025, 27(10), 1015; https://doi.org/10.3390/e27101015 - 27 Sep 2025
Cited by 1 | Viewed by 1231
Abstract
The medical imaging domain frequently encounters the dual challenges of annotation scarcity and class imbalance. A critical issue lies in effectively extracting information from limited labeled data while mitigating the dominance of head classes. The existing approaches often overlook in-depth modeling of sample relationships in low-dimensional spaces, while rigid or suboptimal dynamic thresholding strategies in pseudo-label generation are susceptible to noisy label interference, leading to cumulative bias amplification during the early training phases. To address these issues, we propose a semi-supervised medical image classification framework combining labeled data-contrastive learning with class-adaptive pseudo-labeling (CLCP-MT), comprising two key components: the semantic discrimination enhancement (SDE) module and the class-adaptive pseudo-label refinement (CAPR) module. The former incorporates supervised contrastive learning on limited labeled data to fully exploit discriminative information in latent structural spaces, thereby significantly amplifying the value of sparse annotations. The latter dynamically calibrates pseudo-label confidence thresholds according to real-time learning progress across different classes, effectively reducing head-class dominance while enhancing tail-class recognition performance. These synergistic modules collectively achieve breakthroughs in both information utilization efficiency and model robustness, demonstrating superior performance in class-imbalanced scenarios. Extensive experiments on the ISIC2018 skin lesion dataset and Chest X-ray14 thoracic disease dataset validate CLCP-MT’s efficacy. With only 20% labeled and 80% unlabeled data, our framework achieves a 10.38% F1-score improvement on ISIC2018 and a 2.64% AUC increase on Chest X-ray14 compared to the baselines, confirming its effectiveness and superiority under annotation-deficient and class-imbalanced conditions. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
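Class-adaptive pseudo-label thresholds of the kind the CAPR module calibrates can be sketched with a FlexMatch-style heuristic: scale a base confidence threshold by each class's relative learning progress, here proxied by accepted pseudo-label counts. This is a generic illustration of the idea, not the CAPR module itself; all names and numbers are invented:

```python
def class_thresholds(base_tau, accepted_counts):
    """Per-class confidence thresholds scaled by learning progress.

    A class that has had few pseudo-labels accepted (a slow-learning
    tail class) gets a proportionally lower bar, so its unlabeled
    samples are not drowned out by confident head-class predictions.
    """
    m = max(max(accepted_counts), 1)
    return [base_tau * (c / m) for c in accepted_counts]

def accept_pseudo_label(confidence, cls, thresholds):
    """Accept a model prediction as a pseudo-label for class cls."""
    return confidence >= thresholds[cls]

# Head class 0 has 100 accepted pseudo-labels; tail class 1 only 10,
# so the tail class receives a much lower acceptance threshold.
thresholds = class_thresholds(0.95, [100, 10])
```

A production version would typically also re-estimate the counts each epoch and floor the thresholds to avoid accepting near-random tail predictions.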

17 pages, 394 KB  
Article
Boosting Clean-Label Backdoor Attacks on Graph Classification
by Yadong Wang, Zhiwei Zhang, Ye Yuan and Guoren Wang
Electronics 2025, 14(18), 3632; https://doi.org/10.3390/electronics14183632 - 13 Sep 2025
Viewed by 1470
Abstract
Graph Neural Networks (GNNs) have become a cornerstone for graph classification, yet their vulnerability to backdoor attacks remains a significant security concern. While clean-label attacks provide a stealthier approach by preserving original labels, they tend to be less effective in graph settings compared to traditional dirty-label methods. This performance gap arises from the inherent dominance of rich, benign structural patterns in target-class graphs, which overshadow the injected backdoor trigger during the GNNs’ learning process. We demonstrate that prior strategies, such as adversarial perturbations used in other domains to suppress benign features, fail in graph settings due to the amplification effects of the GNNs’ message-passing mechanism. To address this issue, we propose two strategies aimed at enabling the model to better learn backdoor features. First, we introduce a long-distance trigger injection method, placing trigger nodes at topologically distant locations. This enhances the global propagation of the backdoor signal while interfering with the aggregation of native substructures. Second, we propose a vulnerability-aware sample selection method, which identifies graphs that contribute more to the success of the backdoor attack based on low model confidence or frequent forgetting events. We conduct extensive experiments on benchmark datasets such as NCI1, NCI109, Mutagenicity, and ENZYMES, demonstrating that our approach significantly improves attack success rates (ASRs) while maintaining a low clean accuracy drop (CAD) compared to existing methods. This work offers valuable insights into manipulating the competition between benign and backdoor features in graph-structured data. Full article
(This article belongs to the Special Issue Security and Privacy for AI)

9 pages, 800 KB  
Article
Rapid Detection Assay for Infectious Bronchitis Virus Using Real-Time Reverse Transcription Recombinase-Aided Amplification
by Nahed Yehia, Ahmed Abd El Wahed, Abdelsatar Arafa, Dalia Said, Ahmed Abd Elhalem Mohamed, Samah Eid, Mohamed Abdelhameed Shalaby, Rea Maja Kobialka, Uwe Truyen and Arianna Ceruti
Viruses 2025, 17(9), 1172; https://doi.org/10.3390/v17091172 - 27 Aug 2025
Cited by 1 | Viewed by 1791 | Correction
Abstract
The infectious bronchitis virus (IBV) causes a severe infectious disease in poultry, leading to significant financial losses. The prevention and treatment of this disease are extremely challenging due to the virus’s rapid mutation. Therefore, quick diagnosis of IBV infections is crucial for controlling the disease. This study aimed to develop a real-time reverse transcription recombinase-aided amplification (RT-RAA) method for IBV. The most effective primer combination was selected for further validation. To determine the assay’s analytical sensitivity, a serial dilution from 10⁵ to 10⁰ EID₅₀/mL was used, and the limit of detection was calculated. The assay could detect down to 10² EID₅₀/mL. The limit of detection (95% confidence interval) was 67 EID₅₀ per reaction. There was no cross-reaction with common poultry diseases. When analyzing 39 clinical samples, RT-RAA and RT-PCR showed 100% diagnostic sensitivity and specificity. In conclusion, the IBV RT-RAA detection method is rapid, sensitive, and specific. This approach can be used to improve IBV diagnosis at the point of need. Full article
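As an aside on how a limit of detection with a 95% bound can be derived from dilution-series data: the abstract does not state the statistical method used, so the Poisson single-hit model and the example numbers below are assumptions, not the study's analysis.

```python
import math

def lod95_single_hit(copies_per_reaction: float, hit_rate: float) -> float:
    """Poisson single-hit model: P(detect) = 1 - exp(-lam * c).
    Solve lam from one (copies, hit_rate) calibration point, then return
    the copy number giving 95% detection probability."""
    lam = -math.log(1.0 - hit_rate) / copies_per_reaction
    return math.log(20.0) / lam  # P = 0.95  <=>  c = ln(20) / lam

# e.g. 50% positive replicates observed at 10 copies per reaction
print(round(lod95_single_hit(10.0, 0.5), 1))  # -> 43.2 copies
```

In practice a probit or logistic fit over the full dilution series gives tighter bounds than this one-point estimate.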
(This article belongs to the Section Animal Viruses)

26 pages, 3160 KB  
Article
When Two-Fold Is Not Enough: Quantifying Uncertainty in Low-Copy qPCR
by Stephen A. Bustin, Sara Kirvell, Tania Nolan, Reinhold Mueller and Gregory L. Shipley
Int. J. Mol. Sci. 2025, 26(16), 7796; https://doi.org/10.3390/ijms26167796 - 12 Aug 2025
Cited by 3 | Viewed by 3295
Abstract
Accurate interpretation of qPCR data continues to present significant challenges, particularly at low target concentrations where technical variability, stochastic amplification, and efficiency fluctuations confound quantification. The widespread assumption that qPCR outputs are intrinsically reliable, coupled with inconsistent adherence to best-practice guidelines, has exacerbated issues of reproducibility and contributed to misleading conclusions. This may distort pathogen load quantification in diagnostic settings, whilst in gene expression studies, it can lead to overinterpretation of small fold changes. This study presents a systematic, cross-platform evaluation of qPCR performance across a wide dynamic range using defined reaction mixes and technical replicates. We show that calculated copy numbers can closely match expected values over more than three orders of magnitude, but that variability increases markedly at low input concentrations, often exceeding the magnitude of biologically meaningful differences. We conclude that establishing and reporting confidence intervals from the data itself is essential for transparency and for distinguishing reliable quantification from technical noise. Full article
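The paper's recommendation, deriving confidence intervals from the replicate data itself, can be sketched as follows. This is not the authors' pipeline: the standard-curve parameters, the Cq values, and the hard-coded t critical value are hypothetical.

```python
import math
import statistics

def copies_from_cq(cq, intercept=38.0, slope=-3.32):
    """Hypothetical standard curve: Cq = slope * log10(copies) + intercept."""
    return 10 ** ((cq - intercept) / slope)

def copy_ci(cq_replicates, t_crit=3.182):
    """Mean copy number with a 95% CI from technical replicates.
    t_crit is hard-coded for df = 3 (four replicates, two-sided 95%)."""
    copies = [copies_from_cq(c) for c in cq_replicates]
    mean = statistics.mean(copies)
    sem = statistics.stdev(copies) / math.sqrt(len(copies))
    return mean, mean - t_crit * sem, mean + t_crit * sem

m, lo, hi = copy_ci([34.1, 34.9, 33.8, 35.3])  # noisy low-copy replicates
print(f"{m:.1f} copies, 95% CI [{lo:.1f}, {hi:.1f}]")
```

Note how at this input level the interval spans several-fold around the mean, which is exactly the regime where the paper warns that small fold changes are not interpretable.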

21 pages, 10439 KB  
Article
Camera-Based Vital Sign Estimation Techniques and Mobile App Development
by Tae Wuk Bae, Young Choon Kim, In Ho Sohng and Kee Koo Kwon
Appl. Sci. 2025, 15(15), 8509; https://doi.org/10.3390/app15158509 - 31 Jul 2025
Viewed by 3536
Abstract
In this paper, we propose noncontact heart rate (HR), oxygen saturation (SpO₂), and respiratory rate (RR) detection methods using a smartphone camera. A remote PPG (rPPG) signal is obtained using color-difference signal amplification and the plane-orthogonal-to-the-skin method; the HR frequency is then detected by filtering the signal’s power spectral density (PSD). Additionally, SpO₂ is detected using the HR frequency and the absorption ratio of the G and B color channels, based on oxyhemoglobin absorption and reflectance theory. The respiratory frequency is then detected by filtering the PSD of the rPPG signal in the respiratory frequency band. For image sequences recorded under various imaging conditions, the proposed method demonstrated superior HR detection accuracy compared to existing methods. The confidence intervals for HR and SpO₂ detection were analyzed using Bland–Altman plots. Furthermore, the proposed RR detection method was also verified to be reliable. Full article
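The spectral-peak step of such HR estimation can be sketched as follows. This is a simplified illustration, not the paper's pipeline: the sampling rate, band limits, and synthetic test signal are assumptions.

```python
import numpy as np

def hr_from_rppg(signal: np.ndarray, fs: float) -> float:
    """Heart rate in bpm from the dominant spectral peak in 0.7-4 Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)  # plausible HR band: 42-240 bpm
    return 60.0 * freqs[band][np.argmax(power[band])]

fs = 30.0                      # typical smartphone camera frame rate
t = np.arange(0, 10, 1 / fs)   # 10 s clip -> 0.1 Hz bin resolution
rng = np.random.default_rng(0)
rppg = np.sin(2 * np.pi * 1.2 * t) + 0.2 * rng.standard_normal(t.size)
print(hr_from_rppg(rppg, fs))  # ~72 bpm for the 1.2 Hz component
```

Restricting the peak search to the physiological band is what makes the estimate robust to low-frequency motion and illumination drift in the raw trace.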
