Search Results (395)

Search Parameters:
Keywords = quantification threshold

15 pages, 1584 KB  
Article
Spectral Precision: The Added Value of Dual-Energy CT for Axillary Lymph Node Characterization in Breast Cancer
by Susanna Guerrini, Giulio Bagnacci, Paola Morrone, Cecilia Zampieri, Chiara Esposito, Iacopo Capitoni, Nunzia Di Meglio, Armando Perrella, Francesco Gentili, Alessandro Neri, Donato Casella and Maria Antonietta Mazzei
Cancers 2026, 18(3), 363; https://doi.org/10.3390/cancers18030363 - 23 Jan 2026
Abstract
Background/Objectives: To develop and validate a predictive model that combines morphological features and dual-energy CT (DECT) parameters to non-invasively distinguish metastatic from benign axillary lymph nodes in patients with breast cancer (BC). Methods: In this retrospective study, 117 patients (median age, 65 years; 111 women and 6 men) who underwent DECT followed by axillary lymphadenectomy between April 2015 and July 2023 were analyzed. A total of 375 lymph nodes (180 metastatic, 195 benign) were evaluated. Two radiologists recorded morphological criteria (adipose hilum status, cortical appearance, extranodal extension, and short-axis diameter) and placed regions of interest to measure dual-energy parameters: attenuation at 40 and 70 keV, iodine concentration, water concentration and spectral slope. Normalized iodine concentration was calculated using the aorta as reference. Univariate analysis identified variables associated with metastasis. Multivariate logistic regression with cross-validation was used to construct two models: one based solely on morphological features and one integrating water concentration. Results: On univariate testing, all DECT parameters and morphological criteria differed significantly between metastatic and benign nodes (p < 0.01). In multivariate analysis, water concentration emerged as the only independent DECT predictor (odds ratio = 0.97; p = 0.002) alongside cortical abnormality, absence of adipose hilum, extranodal extension and short-axis diameter. The morphologic model achieved an area under the receiver operating characteristic curve (AUC) of 0.871. Adding water concentration increased the AUC to 0.883 (ΔAUC = 0.012; p = 0.63, not significant), with internal cross-validation confirming stable performance. Conclusions: A model combining standard morphologic criteria with water concentration quantification on DECT accurately differentiates metastatic from benign axillary nodes in BC patients.
Although iodine-based metrics remain valuable indicators of perfusion, water concentration offers additional tissue composition information. Future multicenter prospective studies with standardized imaging protocols are warranted to refine parameter thresholds and validate this approach for routine clinical use. Full article
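The AUC used to compare the two models above can be computed from model scores as a rank statistic (the Mann–Whitney interpretation). A minimal stdlib sketch; the node scores below are hypothetical and do not reproduce the study's data or model:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the probability that a randomly chosen positive case scores
    higher than a randomly chosen negative one (ties count 0.5)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical predicted probabilities of metastasis for
# metastatic (positive) and benign (negative) nodes.
metastatic = [0.92, 0.81, 0.75, 0.60, 0.55]
benign = [0.40, 0.35, 0.30, 0.58, 0.10]
print(auc(metastatic, benign))  # 0.96
```

A ΔAUC of 0.012, as reported here, corresponds to the added predictor reordering only a small fraction of positive–negative pairs.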

20 pages, 592 KB  
Review
Detection of Feigned Impairment of the Shoulder Due to External Incentives: A Comprehensive Review
by Nahum Rosenberg
Diagnostics 2026, 16(2), 364; https://doi.org/10.3390/diagnostics16020364 - 22 Jan 2026
Abstract
Background: Feigned restriction of shoulder joint movement for secondary gain is clinically relevant and may misdirect care, distort disability determinations, and inflate system costs. Distinguishing feigning from structural pathology and from functional or psychosocial presentations is difficult because pain is subjective, performance varies, and no single sign or test is definitive. This comprehensive review hypothesizes that the systematic integration of clinical examination, objective biomechanical and neurophysiological testing, and emerging technologies can substantially improve detection accuracy and provide defensible medicolegal documentation. Methods: PubMed and reference lists were searched within a prespecified time frame (primarily 2015–2025, with foundational earlier works included when conceptually essential) using terms related to shoulder movement restriction, malingering/feigning, symptom validity, effort testing, functional assessment, and secondary gain. Evidence was synthesized narratively, emphasizing objective or semi-objective quantification of motion and effort (goniometry, dynamometry, electrodiagnostics, kinematic sensing, and imaging). Results: Detection is best approached as a stepwise, multidimensional evaluation. First-line clinical assessment focuses on reproducible incongruence: non-anatomic patterns, internal inconsistencies, distraction-related improvement, and mismatch between claimed disability and observed function. Repeated examinations and documentation strengthen inference. Instrumented strength testing improves quantification beyond manual testing but remains effort-dependent; repeat-trial variability and atypical agonist–antagonist co-activation can indicate submaximal performance without proving intent. Imaging primarily tests plausibility by confirming lesions or highlighting discordance between claimed limitation and minimal pathology, while recognizing that normal imaging does not exclude pain. 
Diagnostic anesthetic injections and electrodiagnostics can clarify pain-mediated restriction or exclude neuropathic weakness but require cautious interpretation. Motion capture and inertial sensors can document compensatory strategies and context-dependent normalization, yet validated standalone thresholds are limited. Conclusions: Feigned shoulder impairment cannot be confirmed by any single test. The recommended strategy combines structured assessment of inconsistencies with objective biomechanical and neurophysiologic measurements, interpreted within the whole clinical context and rigorously documented; however, prospective validation is still needed before routine implementation. Full article

21 pages, 4697 KB  
Article
High-Throughput, Quantitative Detection of Pseudoperonospora cubensis Sporangia in Cucumber by Flow Cytometry: A Tool for Early Disease Diagnosis
by Baoyu Hao, Siming Chen, Weiwen Qiu, Kaige Liu, Antonio Cerveró Domenech, Juan Antonio Benavente Fernandez, Jian Shen, Ming Li and Xinting Yang
Agronomy 2026, 16(2), 205; https://doi.org/10.3390/agronomy16020205 - 14 Jan 2026
Abstract
Cucumber downy mildew, caused by the obligate parasitic oomycete Pseudoperonospora cubensis [(Berkeley & M. A. Curtis) Rostovzev], is a major threat to global cucumber production. Effective disease management relies on rapid and accurate pathogen detection. However, due to the specialized parasitic nature of P. cubensis, conventional methods are often laborious, low-throughput and inadequate, necessitating the development of a new approach for high-throughput sporangia counting. To address this limitation, we developed a rapid, high-throughput flow cytometry (FCM) assay for the direct quantification of P. cubensis sporangia. The optimal staining protocol involved adding 30 µL of 1000× diluted SYBR Green I to 500 µL of sporangial suspension and incubating at room temperature for 20 min. The flow cytometry parameters were set to a high sample loading speed with a 30-s acquisition time. Instrumental settings included an FL1 (green fluorescence) threshold of 8 × 10⁴ and an SSC (side scatter) threshold of 3 × 10⁵, with low gain. Validation against hemocytometer counts revealed a strong positive correlation (r = 0.8352). The assay demonstrated high reproducibility, with relative standard deviations (RSDs) ranging from 1.96% to 9.84%, and a detection limit of 1–10 sporangia/µL. Operator-dependent variability ranged from 8.85% to 18.79%. These results confirm that the established flow cytometry assay is a reliable and efficient tool for P. cubensis quantification, offering considerable potential for improving cucumber downy mildew monitoring and control strategies. Full article
(This article belongs to the Section Pest and Disease Management)
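The reproducibility metric quoted above (relative standard deviation) is simply the sample standard deviation expressed as a percentage of the mean across replicate counts. A sketch with hypothetical replicate values, not the study's raw data:

```python
from statistics import mean, stdev

def relative_sd(counts):
    """Relative standard deviation (%) of replicate counts:
    100 * sample standard deviation / mean."""
    return 100.0 * stdev(counts) / mean(counts)

# Hypothetical replicate sporangia counts (per uL) from one sample.
replicates = [412, 398, 405, 420, 401]
rsd = relative_sd(replicates)
print(round(rsd, 2))  # 2.18, within the 1.96-9.84% range reported above
```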

28 pages, 9311 KB  
Article
Modeling Reliability Quantification of Water-Level Thresholds for Flood Early Warning
by Shiang-Jen Wu, Hao-Wen Yang, Sheng-Hsueh Yang and Keh-Chia Yeh
Hydrology 2026, 13(1), 30; https://doi.org/10.3390/hydrology13010030 - 14 Jan 2026
Abstract
This study proposes a framework, the RA_WLTE_River model, for quantifying the reliability of flood-alerting water-level thresholds, considering rainfall and runoff-related uncertainties. The Keelung River in northern Taiwan is selected as the study area, and associated hydrological data from 2008 to 2016 are applied in the development and application of the model. According to the results from the model development and demonstration, the average and maximum rainfall intensities, roughness coefficients, and maximum tide depths exhibit a significant contribution to the reliability quantification of the estimated water-level thresholds. In addition, empirically based water-level thresholds can achieve the goal of rainfall-induced flood early warning, with a high likelihood of nearly 0.95. Moreover, the probabilistically based water-level thresholds derived from the described reliability can efficiently ensure consistent flood early warning performance at all control points along the river. Full article
(This article belongs to the Section Statistical Hydrology)
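Threshold reliability under input uncertainty, as quantified in this study, is in essence an exceedance probability. A generic Monte Carlo sketch, not the RA_WLTE_River model itself; the peak-level distribution and threshold below are invented for illustration:

```python
import random

def exceedance_probability(simulate_peak, threshold, n=100_000, seed=42):
    """Monte Carlo estimate of the probability that a simulated peak
    water level exceeds a warning threshold; input uncertainty is
    carried by the sampling inside `simulate_peak`."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if simulate_peak(rng) > threshold)
    return hits / n

# Hypothetical: peak level (m) = deterministic estimate + Gaussian
# uncertainty standing in for rainfall/roughness input errors.
prob = exceedance_probability(lambda rng: 4.2 + rng.gauss(0.0, 0.5),
                              threshold=3.5)
print(prob)  # close to 0.92 (P[Z > -1.4] for a N(4.2, 0.5) peak)
```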

26 pages, 1406 KB  
Article
The Welfare Impact of Heat Stress in South American Beef Cattle and the Cost-Effectiveness of Shade Provision
by Cynthia Schuck-Paim, Wladimir Jimenez Alonso, Anielly de Paula Freitas, Camila Pereira de Oliveira, Vinicius de França Carvalho Fonseca and Tâmara Duarte Borges
Animals 2026, 16(2), 231; https://doi.org/10.3390/ani16020231 - 13 Jan 2026
Abstract
Heat stress represents a pervasive welfare challenge for beef cattle and other species in tropical and subtropical regions. While its physiological and production impacts are well-documented, quantitative measures of the welfare impact of heat stress remain absent. This study provides the first quantification of the welfare impact of heat stress in beef cattle (mostly Nelore), estimated as cumulative time in thermal discomfort of four intensities (Annoying, Hurtful, Disabling, Excruciating) using the Welfare Footprint Framework. We analyzed climate data from 636 locations over five years across major beef production areas in Brazil, Argentina, Colombia, Paraguay, and Uruguay. Daily heat stress episodes and chronic heat stress exposure were assessed, respectively, using Comprehensive Climate Index (CCI) levels and the Annual Thermal Load metric, which sums daily excesses above a threshold of thermal comfort (CCI = 30 °C) throughout the year, classifying locations into five risk categories. Welfare impacts were estimated for thirteen heat stress scenarios modeled by considering each CCI level within each thermal risk category. Beef cattle in moderate-risk regions were estimated to experience primarily mild thermal discomfort for an average of 5 h daily. This duration increased to an average of 7 h daily in high-risk areas, of which 4.5 h in moderate to intense thermal discomfort (Hurtful or higher). Very high-risk regions reached 10 h of daily thermal discomfort, while extreme-risk regions showed beef cattle facing heat stress for over 11 h on 307 days annually, including over 3 h per day under severe thermoregulatory effort. Overall, 65% of animals were in regions of high thermal risk or above, experiencing between 280 and 2800 h annually in moderate to intense thermal discomfort—a magnitude that places heat stress among the most significant welfare challenges in animal production. 
Shade provision reduced time in severe discomfort of Disabling intensity by 85% (from 578 to 83 h annually), with economic returns of US$12–16 per animal and payback periods of approximately 16 months. By quantifying welfare impacts as cumulative time in thermal discomfort, shade provision emerges as one of the most effective welfare interventions available for beef cattle, and likely other grazing ruminants, in tropical and subtropical regions. Full article
(This article belongs to the Section Animal Welfare)
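The Annual Thermal Load metric described above sums daily excesses of the Comprehensive Climate Index (CCI) over the comfort threshold (CCI = 30 °C) across the year. A minimal sketch of that accumulation; the daily CCI values are hypothetical:

```python
def annual_thermal_load(daily_cci, comfort_threshold=30.0):
    """Sum of daily CCI excesses above the thermal-comfort threshold,
    i.e. degree-days of heat load accumulated over the period."""
    return sum(max(0.0, cci - comfort_threshold) for cci in daily_cci)

# Hypothetical daily CCI values (degrees C) for three days:
# only the two days above 30 contribute (3.5 + 11.0).
load = annual_thermal_load([28.0, 33.5, 41.0])
print(load)  # 14.5
```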

25 pages, 5863 KB  
Systematic Review
AI-Enhanced CBCT for Quantifying Orthodontic Root Resorption: Evidence from a Systematic Review and a Clinical Case of Severe Bilateral Canine Impaction
by Teresa Pinho, Letícia Costa and João Pedro Carvalho
Appl. Sci. 2026, 16(2), 771; https://doi.org/10.3390/app16020771 - 12 Jan 2026
Abstract
Background: Artificial intelligence (AI) integrated with cone-beam computed tomography (CBCT) has rapidly advanced the diagnostic capability of orthodontics, particularly for quantifying external root resorption (ERR). High-risk scenarios such as bilateral maxillary canine impaction require objective tools to guide treatment decisions and prevent irreversible damage. Objectives: To evaluate the diagnostic accuracy and clinical applicability of AI-assisted CBCT for orthodontically induced ERR, and to demonstrate its value in a complex clinical case where decision-making regarding canine traction versus extraction required precise risk quantification and definition of biological limits. Methods: A systematic review following PRISMA 2020 guidelines was conducted in PubMed, ScienceDirect, and Cochrane Library (2015–September 2025). Eligible studies applied AI-enhanced CBCT to assess ERR in orthodontic patients. Additionally, a clinical case with bilaterally impacted maxillary canines was evaluated using CBCT with automated AI segmentation and manual refinement to quantify root volume changes and determine prognostic thresholds for treatment modification. Results: Nine studies met the inclusion criteria. AI-based imaging, predominantly convolutional neural networks, showed high diagnostic accuracy (up to 94%), improving reproducibility and reducing operator dependency. In the clinical case, volumetric monitoring showed rapid progression of ERR in the lateral incisors (LI) associated with a persistent unfavorable 3D spatial relationship between the canines and incisor roots, despite controlled distal traction with skeletal anchorage, leading to a timely change in the treatment plan and extraction of the severely compromised incisors with substitution by the canines. AI-generated data provided objective evidence supporting safer decision-making and prevented further structural deterioration. 
Conclusions: AI-enhanced CBCT enables early, objective, and quantifiable ERR assessment, strengthening prognosis-based decisions in orthodontics. Findings of this review and the clinical case highlight the translational relevance of AI for managing high-risk cases, such as maxillary canine impaction with extensive LI resorption, supporting future predictive AI models for safer canine traction. Full article
(This article belongs to the Special Issue Advancements and Updates in Digital Dentistry)

30 pages, 4414 KB  
Article
Model Averaging and Grid Maps for Modeling Heavy-Tailed Insurance Data
by Lira B. Mothibe and Sandile C. Shongwe
Risks 2026, 14(1), 11; https://doi.org/10.3390/risks14010011 - 5 Jan 2026
Abstract
This work presents a practical approach to improve risk quantification for heavy-tailed insurance claims through model averaging and grid map visualization, addressing the drawbacks of traditional single “best” model selection commonly used in actuarial and model-fitting literature. This is a data-driven study with a focus on Danish fire loss data, where the following are fitted: (i) 16 standard single distributions, (ii) 256 composite distributions, and (iii) 256 mixture distributions; wherein, for the composite and mixture distributions, we focus on the top 20 leading models in terms of the information criterion (i.e., Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC)). Model selection uncertainty is explicitly addressed by AIC and BIC weighted averaging within Occam’s window (relying on weighted point estimates), while grid maps simultaneously plot information criteria against risk measures, specifically the Value-at-Risk (VaR) and Tail Value-at-Risk (TVaR) at 95% and 99% thresholds, to highlight critical-fit versus tail-risk trade-offs. It is observed that the model-averaged risk measures from composite models align more closely with the empirical values. That is, model-averaged estimates across all categories align closely with empirical VaR(0.95) but conservatively elevate TVaR(0.99), promoting safer capital reserves. Grid maps and model averaging confirm that mixture and composite models better capture the heavy-tailed nature of Danish fire claims data as compared to fitting a single distribution. Full article
(This article belongs to the Special Issue Statistical Models for Insurance)
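AIC-weighted model averaging, as used in this study, rests on Akaike weights: each candidate model's relative likelihood given its AIC distance from the best model. A stdlib sketch with invented AIC values (the study's 16/256/256 candidate sets are not reproduced):

```python
import math

def akaike_weights(aics):
    """Akaike weights: w_i = exp(-0.5*d_i) / sum_j exp(-0.5*d_j),
    where d_i = AIC_i - min(AIC). Used to average estimates across
    candidate models instead of selecting a single 'best' one."""
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical AICs for three candidate loss distributions.
w = akaike_weights([1012.4, 1013.1, 1020.0])
print([round(x, 3) for x in w])  # [0.579, 0.408, 0.013]
```

A model-averaged risk measure is then the weighted sum of the per-model estimates, e.g. `sum(wi * vi for wi, vi in zip(w, var_estimates))`.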

16 pages, 4374 KB  
Article
Development and Laboratory Validation of a Real-Time Quantitative PCR Assay for Rapid Detection and Quantification of Heterocapsa bohaiensis
by Mengfan Cai, Ruijia Jing, Yiwen Zhang and Jingjing Zhan
J. Mar. Sci. Eng. 2026, 14(1), 98; https://doi.org/10.3390/jmse14010098 - 4 Jan 2026
Abstract
Heterocapsa bohaiensis is an emerging harmful dinoflagellate increasingly reported from coastal regions of the Pacific. However, an available molecular assay offering rapid and sensitive detection is still lacking. This study developed a SYBR Green real-time quantitative PCR (qPCR) assay for the identification and quantification of H. bohaiensis. Species-specific primers (F: 5′-CCATCGAACCAGAACTCCGT-3′; R: 5′-AGTGTAGTGCACCGCATGTC-3′) were designed and the assay was optimized and evaluated using laboratory cultures for specificity, sensitivity, and quantitative performance. Primer screening and melt-curve analysis confirmed that the selected primer pair produced a single, specific amplification peak for H. bohaiensis, with no cross-reactivity observed in non-target species (Chlorella pyrenoidosa, Phaeocystis globosa, Skeletonema costatum, Alexandrium tamarense) or mixed algal communities. The standard curve displayed strong linearity (R² = 0.9868) and a high amplification efficiency (102.5%). The limit of detection (LOD) was approximately 2–3 cells per reaction, as determined from 24 replicates of 5-cell equivalents and verified at ~2.7-cell equivalents. This sensitivity was comparable to or exceeded that reported for assays targeting other HAB-forming dinoflagellates. Quantitative results derived from the qPCR assay closely matched microscopic cell counts, with a relative error of 10.79%, falling within the acceptable threshold for phytoplankton surveys. In summary, this study established and validated a species-specific qPCR assay for H. bohaiensis under controlled laboratory conditions. The method shows strong potential for incorporation into HAB monitoring programs, early-warning systems, and future ecological investigations of this emerging species. Full article
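The 102.5% amplification efficiency quoted above follows from the slope of the standard curve (Ct regressed on log10 template amount) via the standard relation E = 10^(−1/slope) − 1. A sketch of that conversion; the slope value below is a hypothetical one chosen to reproduce the reported efficiency, not taken from the paper:

```python
def amplification_efficiency(slope):
    """qPCR amplification efficiency (%) from the standard-curve
    slope of Ct vs log10(template): E = 10**(-1/slope) - 1.
    A slope of about -3.32 corresponds to 100% (perfect doubling)."""
    return (10.0 ** (-1.0 / slope) - 1.0) * 100.0

# A slope of roughly -3.263 yields the ~102.5% efficiency reported above.
print(round(amplification_efficiency(-3.263), 1))  # 102.5
```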

19 pages, 1496 KB  
Article
An Evidence-Based Framework for the Sustainable Rehabilitation of Corrosion-Damaged Historic Marine Structures
by Tamim A. Samman and Ahmed Gouda
Corros. Mater. Degrad. 2026, 7(1), 4; https://doi.org/10.3390/cmd7010004 - 29 Dec 2025
Abstract
This paper presents a validated, data-driven framework for the sustainable rehabilitation of corrosion-damaged marine infrastructure, demonstrated through a comprehensive study on a historic coastal structure. The implemented three-phase methodology—integrating advanced condition assessment, evidence-based intervention design, and rigorous performance validation—successfully addressed severe chloride-induced deterioration. Diagnostic quantification revealed that 30% of the primary substructure was severely compromised, with chloride concentrations reaching 1.94% by weight (970% above the corrosion threshold) and half-cell potential mapping confirming a >90% probability of active corrosion in critical elements. Guided by this data, a synergistic intervention combining galvanic cathodic protection, high-performance coatings, and structural strengthening was deployed. Post-repair validation confirmed exceptional outcomes: a complete electrochemical repassivation (potential shift from −385 mV to −185 mV), a 97.3% reduction in chloride diffusion rates, a 250% increase in surface resistivity, and the restoration of structural capacity to 115% of design specifications. The framework achieved a 65% reduction in projected lifecycle costs while establishing a new paradigm for preserving marine infrastructure through evidence-based, multi-mechanism strategies that ensure long-term durability and economic viability. Full article
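The "970% above the corrosion threshold" figure above pins down the threshold the authors used, since "x% above threshold" means measured = threshold × (1 + x/100). A small back-calculation sketch; the derived ~0.18% value is inferred from the abstract's numbers, not stated in it:

```python
measured = 1.94        # % chloride by weight, from the assessment
percent_above = 970.0  # "970% above the corrosion threshold"

# measured = threshold * (1 + percent_above/100), so:
threshold = measured / (1.0 + percent_above / 100.0)
print(round(threshold, 3))  # 0.181 (% by weight, implied threshold)
```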

15 pages, 8775 KB  
Article
Assessing Change in Stone Burden on Baseline and Follow-Up CT: Radiologist and Radiomics Evaluations
by Parisa Kaviani, Matthias F. Froelich, Bernardo Bizzo, Andrew Primak, Giridhar Dasegowda, Emiliano Garza-Frias, Lina Karout, Anushree Burade, Seyedehelaheh Hosseini, Javier Eduardo Contreras Yametti, Keith Dreyer, Sanjay Saini and Mannudeep Kalra
J. Imaging 2026, 12(1), 13; https://doi.org/10.3390/jimaging12010013 - 27 Dec 2025
Abstract
This retrospective diagnostic accuracy study compared radiologist-based qualitative assessments and radiomics-based analyses with an automated artificial intelligence (AI)–based volumetric approach for evaluating changes in kidney stone burden on follow-up CT examinations. With institutional review board approval, 157 patients (mean age, 61 ± 13 years; 99 men, 58 women) who underwent baseline and follow-up non-contrast abdomen–pelvis CT for kidney stone evaluation were included. The index test was an automated AI-based whole-kidney and stone segmentation radiomics prototype (Frontier, Siemens Healthineers), which segmented both kidneys and isolated stone volumes using a fixed threshold of 130 Hounsfield units, providing stone volume and maximum diameter per kidney. The reference standard was a threshold-defined volumetric assessment of stone burden change between baseline and follow-up CTs. The radiologist’s performance was assessed using (1) interpretations from clinical radiology reports and (2) an independent radiologist’s assessment of stone burden change (stable, increased, or decreased). Diagnostic accuracy was evaluated using multivariable logistic regression and receiver operating characteristic (ROC) analysis. Automated volumetric assessment identified stable (n = 44), increased (n = 109), and decreased (n = 108) stone burden across the evaluated kidneys. Qualitative assessments from radiology reports demonstrated weak diagnostic performance (AUC range, 0.55–0.62), similar to the independent radiologist (AUC range, 0.41–0.72) for differentiating changes in stone burden. A model incorporating higher-order radiomics features achieved an AUC of 0.71 for distinguishing increased versus decreased stone burdens compared with the baseline CT (p < 0.001), but did not outperform threshold-based volumetric assessment. 
The automated threshold-based volumetric quantification of kidney stone burdens provides higher diagnostic accuracy than qualitative radiologist assessments and radiomics-based analyses for identifying a stable, increased, or decreased stone burden on follow-up CT examinations. Full article
(This article belongs to the Section Medical Imaging)
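The threshold-based volumetric quantification described above reduces, at its core, to counting voxels at or above the 130 HU cutoff within the kidney segmentation and multiplying by voxel volume. A toy sketch (whether the prototype uses ≥ or > at the cutoff is an assumption, and the voxel values and spacing below are invented):

```python
def stone_volume_mm3(hu_values, voxel_volume_mm3, threshold_hu=130):
    """Threshold-based stone volume: count voxels at or above the
    HU cutoff inside the kidney mask, times per-voxel volume."""
    n = sum(1 for hu in hu_values if hu >= threshold_hu)
    return n * voxel_volume_mm3

# Hypothetical voxel HU samples from a segmented kidney,
# with 0.5 x 0.5 x 1.0 mm voxels (0.25 mm^3 each).
voxels = [40, 220, 515, 129, 130, 980, 35]
print(stone_volume_mm3(voxels, 0.25))  # 1.0 (4 voxels at >= 130 HU)
```

Change in stone burden between baseline and follow-up is then the difference of the two volumes, classified as stable, increased, or decreased against a tolerance.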

10 pages, 886 KB  
Article
Evaluation of Commercial Immunoassays for Rubella Virus IgG Detection in Low-Antibody Sera Using a Recombinant Immunoblot as a Reference Method
by Carmen Ortega, Antonio Sampedro-Padilla, Pablo Mazuelas, Jose Serrano, Ana Abreu, Juan Antonio Reguera, Javier Rodríguez-Granger, Fernando Cobo, Juan Francisco Gutiérrez-Bautista and Antonio Sampedro
Microorganisms 2026, 14(1), 58; https://doi.org/10.3390/microorganisms14010058 - 26 Dec 2025
Abstract
Rubella virus (RV) IgG quantification is essential for verifying immunity, particularly in prenatal care. However, substantial variability exists among commercial immunoassays, especially when testing low-antibody sera. In this study, we evaluated five commercial assays—four chemiluminescent immunoassays (CLIAs) and one Enzyme-linked Immunosorbent Assay (ELISA)—using a recombinant immunoblot (IB) as the reference method. A panel of 137 serum samples with low or undetectable IgG levels was analyzed. Sensitivity ranged from 19.6% to 70.1%, while specificity exceeded 94%. Only 18.6% of immunoblot-positive samples tested positive across all assays. Marked quantitative differences were observed, with the Atellica assay yielding the highest titers and Alinity the lowest. Reclassifying equivocal results as positive improved concordance without compromising specificity. These findings suggest that current cut-off values, derived from post-infection sera, may be inadequate for vaccinated populations. A single universal threshold may lead to misclassification and underestimation of immunity. Harmonization of assay calibrations, antigenic targets, and interpretation criteria is urgently needed to ensure reliable rubella immunity assessments in clinical and public health settings. Full article
(This article belongs to the Section Medical Microbiology)
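The sensitivity and specificity figures above are computed against the immunoblot reference classification. A minimal sketch of those definitions; the 2×2 counts below are hypothetical (chosen only to sum to a 137-sample panel), not the study's actual results:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP),
    with positives/negatives defined by the reference method."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for one assay versus the immunoblot reference.
sens, spec = sensitivity_specificity(tp=68, fn=29, tn=38, fp=2)
print(round(100 * sens, 1), round(100 * spec, 1))  # 70.1 95.0
```

Reclassifying equivocal results as positive, as tested in the study, moves borderline samples from FN to TP, raising sensitivity while leaving TN/FP (and hence specificity) largely unchanged.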

14 pages, 1691 KB  
Article
Evaluating Polymer Characterization Methods to Establish a Quantitative Method of Compositional Analysis Using a Polyvinyl Alcohol (PVA)/Polyethylene Glycol (PEG)—Based Hydrogel for Biomedical Applications
by Antonio G. Abbondandolo, Anthony Lowman and Erik C. Brewer
Polymers 2026, 18(1), 48; https://doi.org/10.3390/polym18010048 - 24 Dec 2025
Abstract
Multi-component polymer hydrogels present complex physicochemical interactions that make accurate compositional analysis challenging. This study evaluates three analytical techniques, Nuclear Magnetic Resonance (NMR), Advanced Polymer Chromatography (APC), and Thermogravimetric Analysis (TGA), to quantify polyvinyl alcohol (PVA) and polyethylene glycol (PEG) content in hybrid freeze–thaw-derived PVA/PEG/PVP hydrogels. Hydrogels were synthesized using an adapted freeze–thaw method across a wide range of PVA:PEG ratios, with PVP included at 1 wt% to assess potential intermolecular effects. NMR and APC reliably quantified polymer content with low average errors of 2.77% and 2.01%, respectively, and were unaffected by phase separation or hydrogen bonding within the composite matrix. TGA enabled accurate quantification at PVA contents ≤ 62.5%, where PEG and PVA maintained distinct thermal decomposition behaviors. At higher PVA concentrations, increased hydrogen bonding and crystalline restructuring, confirmed by FTIR through shifts near 1140 cm⁻¹ and significant changes in the -OH region, altered thermal profiles and reduced TGA accuracy. Together, these findings establish APC as a high-throughput alternative to NMR for multi-component polymer analysis and outline critical thermal and structural thresholds that influence TGA-based quantification. This work provides a framework for characterizing complex polymer networks in biomedical hydrogel systems. Full article
(This article belongs to the Section Polymer Analysis and Characterization)
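The average errors quoted above (2.77% for NMR, 2.01% for APC) are mean deviations between nominal and measured polymer fractions across the formulation series. A minimal sketch of that comparison, using entirely hypothetical PVA wt% values (the paper's actual data are not reproduced here):

```python
def average_abs_error(nominal, measured):
    """Mean absolute deviation (in wt%) between nominal and measured
    polymer fractions across a series of hydrogel formulations."""
    if len(nominal) != len(measured):
        raise ValueError("series must be the same length")
    return sum(abs(n - m) for n, m in zip(nominal, measured)) / len(nominal)

# Hypothetical PVA wt% values for five PVA:PEG formulations.
nominal_pva = [20.0, 40.0, 50.0, 62.5, 80.0]
measured_pva = [22.1, 41.5, 48.3, 64.0, 77.9]

print(round(average_abs_error(nominal_pva, measured_pva), 2))
```

Running the same comparison per technique (NMR, APC, TGA) against known formulation ratios is how a table of per-method accuracies would be assembled.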

23 pages, 11089 KB  
Article
Quantifying Broad-Leaved Korean Pine Forest Structure Using Terrestrial Laser Scanning (TLS), Changbai Mountain, China
by Jingcheng Luo, Qingda Chen, Zhichao Wu, Tian Gao, Li Zhou, Jiaojiao Deng, Yansong Zhang and Dapao Yu
Remote Sens. 2025, 17(24), 4049; https://doi.org/10.3390/rs17244049 - 17 Dec 2025
Viewed by 336
Abstract
Accurate assessment of stand structure is fundamental for elucidating the relationship between forest structure and ecological function, which is vital for enhancing forest quality and ecosystem services. This study, conducted in a 1 hm² (1 ha) plot of old-growth broadleaved-Korean pine forest in Changbai Mountain, integrated Terrestrial Laser Scanning (TLS), precise geographic coordinates, Quantitative Structure Models (QSM), and wood density data. This methodology enabled precise, non-destructive quantification of key structural parameters (DBH, tree height, crown overlap, stand volume, and carbon storage) and the development of species-specific allometric equations. The results demonstrated that TLS-derived DBH estimates were 99% accurate and consistent across diameter classes. The overall crown overlap rate (DBH ≥ 5 cm) was 59.1%, decreasing markedly to 26.7% and 19.2% at DBH thresholds of 20 cm and 30 cm, respectively. Allometric models based on DBH showed higher predictive accuracy for stem biomass than for branch biomass, and for broadleaved species than for conifers. Notably, conventional models overestimated stem biomass while underestimating branch biomass by 1.34–92.85%, highlighting biases arising from limited large-tree samples. The integrated TLS-QSM approach provides a robust alternative for accurate biomass estimation, establishing a critical foundation for large-scale, non-destructive allometric modeling. Its broader applicability, however, necessitates further validation across diverse forest ecosystems.
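Allometric equations of the kind developed here typically take the power-law form biomass = a · DBH^b, fitted by least squares on log-transformed data. A minimal sketch with synthetic stems (the paper's species-specific TLS-QSM equations are more involved; the coefficients below are illustrative only):

```python
import math

def fit_allometric(dbh, biomass):
    """Fit biomass = a * DBH**b by ordinary least squares on the
    log-log transformed data (illustrative sketch only)."""
    xs = [math.log(d) for d in dbh]
    ys = [math.log(m) for m in biomass]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope of the log-log regression is the allometric exponent b.
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    # Intercept exp(my - b*mx) recovers the scaling coefficient a.
    a = math.exp(my - b * mx)
    return a, b

# Synthetic stems following biomass = 0.05 * DBH**2.4 exactly.
dbh = [10.0, 20.0, 30.0, 40.0]
biomass = [0.05 * d ** 2.4 for d in dbh]
a, b = fit_allometric(dbh, biomass)
```

With TLS-QSM data, the per-tree "biomass" values would come from QSM volume multiplied by measured wood density, rather than destructive sampling.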

18 pages, 5536 KB  
Article
Automated Particle Size Analysis of Supported Nanoparticle TEM Images Using a Pre-Trained SAM Model
by Xiukun Zhong, Guohong Liang, Lingbei Meng, Wei Xi, Lin Gu, Nana Tian, Yong Zhai, Yutong He, Yuqiong Huang, Fengmin Jin and Hong Gao
Nanomaterials 2025, 15(24), 1886; https://doi.org/10.3390/nano15241886 - 16 Dec 2025
Cited by 2 | Viewed by 732
Abstract
This study addresses the challenges associated with transmission electron microscopy (TEM) image analysis of supported nanoparticles, including low signal-to-noise ratio, poor contrast, and interference from complex substrate backgrounds. It proposes an automated segmentation and particle size analysis method based on a large-scale deep learning model, the Segment Anything Model (SAM). Using Ru/TiO2 and related materials as representative systems, the pretrained SAM is employed for zero-shot segmentation of nanoparticles and integrated with a custom image processing pipeline, comprising an optical character recognition (OCR) module, morphological optimization, and connected component analysis, to achieve high-precision particle size quantification. Experimental results demonstrate that the method retains robust performance under challenging imaging conditions, with a size estimation error between 3% and 5% and a per-image processing time under 1 min, significantly outperforming traditional manual annotation and threshold-based segmentation approaches. This framework provides an efficient and reliable analytical tool for morphological characterization and structure–performance correlation studies in supported nanocatalysts.
(This article belongs to the Section Theory and Simulation of Nanostructures)
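The connected-component step of such a pipeline reduces each segmented particle to a pixel area, which is converted to an equivalent circular diameter using the nm-per-pixel scale (in the paper, recovered from the scale bar via OCR). A simplified stand-in using a plain flood fill on a binary mask, with made-up mask and scale values:

```python
import math
from collections import deque

def particle_diameters(mask, nm_per_px):
    """Equivalent circular diameters (nm) of 4-connected foreground
    regions in a binary mask -- a simplified stand-in for the
    connected-component step after segmentation."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    diams = []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                # Flood-fill one component, counting its pixel area.
                area, q = 0, deque([(i, j)])
                seen[i][j] = True
                while q:
                    y, x = q.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                # Circle of equal area: area = pi*(d/2)^2 => d = 2*sqrt(area/pi).
                diams.append(2.0 * math.sqrt(area / math.pi) * nm_per_px)
    return diams

# Toy binary mask with two particles (areas 4 and 2 pixels).
mask = [
    [1, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 1],
]
print(sorted(round(d, 2) for d in particle_diameters(mask, nm_per_px=0.5)))
```

In practice the mask would be the union of SAM's instance masks after morphological cleanup, and a histogram of the resulting diameters gives the particle size distribution.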

18 pages, 4143 KB  
Article
Impact of Alcohol Content on Alcohol–Ester Interactions in Qingxiangxing Baijiu Through Threshold Analysis
by Huan Zhang, Liuyan Zheng, Kaixuan Zhu, Tianxu Liu, Lexuan Yang, Lijuan Ma, Xin Zhang, Lin Yuan and Liping Du
Foods 2025, 14(24), 4290; https://doi.org/10.3390/foods14244290 - 12 Dec 2025
Viewed by 720
Abstract
Alcohols and esters are core flavor-active constituents of Qingxiangxing Baijiu (QXB), yet the regulatory role of ethanol concentration in their thresholds and interactions remains unclear. Physicochemical analysis showed that reduced-alcohol QXB (L-QX, 42% v/v) had higher total acid (1.48 g/L) but lower total ester (1.52 g/L) content than high-alcohol QXB (H-QX, 53% v/v; 1.20 g/L total acid, 2.05 g/L total ester). Sensory evaluation (0–5 scale) revealed that H-QX had higher fruity (3.6 vs. 2.0), grassy (3.2 vs. 1.8), and grainy (3.0 vs. 1.9) aroma scores, while L-QX showed higher sour (2.1 vs. 1.5) and lees (1.7 vs. 1.1) notes (p < 0.05). Quantification by gas chromatography–flame ionization detection (GC-FID) determined the concentrations of eight alcohols and esters and showed that most flavor compounds were present at higher concentrations in H-QX than in L-QX samples. Three-alternative forced-choice tests showed that 53% ethanol elevated the olfactory thresholds (OTs) of five compounds relative to 42%, most notably ethyl lactate (1.53-fold) and isopentanol (1.89-fold). For 16 alcohol–ester binary mixtures, 12 pairs had OT ratios (53% vs. 42%) below 1, and 3 pairs (e.g., n-propanol–ethyl acetate) were below 0.5. OAV/S curve analyses indicated that all 16 mixtures exhibited masking effects, with 11 pairs stronger at 42%. Verification experiments confirmed that 53% ethanol mitigated masking, enhancing fruity and grassy aromas by 38.1% and 25.0%, respectively. This study supports flavor regulation in QXB dealcoholization.
(This article belongs to the Section Drinks and Liquid Nutrition)
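The odor activity value (OAV) underlying these analyses is simply a compound's concentration divided by its olfactory threshold in the same matrix, so a threshold that rises with ethanol content lowers the OAV at fixed concentration. A minimal sketch with hypothetical concentration and threshold values (not taken from the paper):

```python
def oav(concentration_ug_l, threshold_ug_l):
    """Odor activity value: concentration divided by the olfactory
    threshold measured in the same matrix. OAV > 1 suggests the
    compound contributes perceptibly to aroma."""
    return concentration_ug_l / threshold_ug_l

# Hypothetical compound measured in QXB at two ethanol levels.
conc = 1200.0                # ug/L, held fixed for illustration
ot_42, ot_53 = 400.0, 612.0  # thresholds at 42% and 53% ethanol (made up)

fold_increase = ot_53 / ot_42  # threshold rise caused by higher ethanol
print(oav(conc, ot_42), oav(conc, ot_53), round(fold_increase, 2))
```

Comparing per-compound OT ratios between the two ethanol levels, as the study does for its 16 binary mixtures, is then a matter of dividing thresholds measured under each condition.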
