Search Results (2,652)

Search Parameters:
Keywords = quality assurance

28 pages, 1639 KB  
Article
A Generative AI-Based Framework for Proactive Quality Assurance and Auditing
by Galina Ilieva, Tania Yankova, Vera Hadzhieva and Yuliy Iliev
Appl. Sci. 2026, 16(9), 4237; https://doi.org/10.3390/app16094237 - 26 Apr 2026
Abstract
Generative artificial intelligence (AI) is increasingly used to support decision-making in manufacturing quality assurance (QA), but its adoption raises concerns regarding governance, traceability, and auditability. This paper proposes a proactive framework that integrates generative AI into quality management and auditing while preserving standards alignment and human oversight. The framework structures quality activities across supplier, in-process, and post-market domains and across three hierarchical levels—product, process, and operation—to link quality outcomes with documentary evidence requirements. A proof-of-concept (PoC) study in electronics manufacturing focused on New Product Introduction (NPI) planning and compared two parallel workflows: an expert QA team and a generative AI-assisted chatbot workflow. Within a fixed time window, both workflows produced an aligned Process Failure Mode and Effects Analysis (PFMEA), Control Plan, supplier Production Part Approval Process (PPAP) request package, and internal audit evidence pack. Three independent experts evaluated the integrated deliverable package using five indices covering documentation quality and audit readiness, detection and containment logic, process capability and stability, governance and provenance safeguards, and execution (time) efficiency. Compared with the expert package, the generative AI–assisted workflow produced more traceable, governance-rich documentation (ownership, versioning, clause-to-evidence links) and reduced manual audit-evidence consolidation, supporting quality planning and change-control readiness. Full article
25 pages, 3884 KB  
Article
Deep-Learning-Based 3D Dose Distribution Prediction for VMAT Lung Cancer Treatment Using an Enhanced UNet3D Architecture with Composite Loss Functions
by Philip Chung Yin Mak, Luoyi Kong and Lawrence Wing Chi Chan
Bioengineering 2026, 13(5), 490; https://doi.org/10.3390/bioengineering13050490 - 23 Apr 2026
Abstract
The high complexity of radiation therapy for lung cancer necessitates effective planning of advanced treatments such as Volumetric Modulated Arc Therapy (VMAT) by radiation oncologists. The current VMAT treatment planning process typically involves extensive manual interaction and a time-consuming, trial-and-error, iterative approach that requires planners’ experience. This can lead to varying levels of plan quality. To improve the quality of radiotherapy treatment plans quickly and accurately, this research presents a new architecture, Enhanced UNet3D, to generate three-dimensional (3-D) dose distributions for lung cancer patients. Enhanced UNet3D utilises a symmetric encoder–decoder architecture with residual connections and a target region-attention module to achieve high accuracy in dose shaping within the PTV. A new composite objective function, Enhanced Combined Loss (ECLoss), that includes both SharpLoss, a structure-aware DVH-guided loss, and 3D gradient regularisation, has been developed to address voxel-level class imbalance and achieve realistic spatial dose falloff. This research utilised a retrospective dataset of 170 VMAT plans to train and validate the proposed model. On the test set (n = 14), the model demonstrated exceptional overall accuracy, with a Mean Absolute Error (MAE) of 0.238 ± 0.075 Gy and a structural similarity index measure (SSIM) of 0.970 ± 0.005. Moreover, the model can perform near-real-time inference at approximately 0.5 s per patient, representing a significant reduction in computational resources compared to other architectures. Therefore, these results demonstrate that the Enhanced UNet3D model with ECLoss is a clinically feasible tool for the rapid evaluation and quality assurance of radiotherapy treatment plans and may reduce the need for manual trial-and-error in VMAT workflows. Full article
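The headline accuracy figures in this abstract (MAE in Gy, SSIM) are standard image-comparison metrics. The sketch below shows one minimal way to compute them over flattened dose volumes; it uses a single global SSIM window rather than the sliding-window implementation typically used to produce published SSIM values, and the dose values are invented, not study data.

```python
from statistics import fmean, pvariance

def mae(pred, ref):
    """Mean absolute error between two equal-length voxel lists (e.g. dose in Gy)."""
    return fmean(abs(p - r) for p, r in zip(pred, ref))

def global_ssim(pred, ref, data_range):
    """Single-window ('global') SSIM with the usual constants K1=0.01, K2=0.03.
    An approximation for illustration only; published SSIM values normally
    come from a sliding-window implementation."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_p, mu_r = fmean(pred), fmean(ref)
    var_p, var_r = pvariance(pred, mu_p), pvariance(ref, mu_r)
    cov = fmean((p - mu_p) * (r - mu_r) for p, r in zip(pred, ref))
    return ((2 * mu_p * mu_r + c1) * (2 * cov + c2)) / \
           ((mu_p ** 2 + mu_r ** 2 + c1) * (var_p + var_r + c2))

# toy flattened "dose volumes" (hypothetical values, not study data)
ref_dose = [10.0, 20.0, 30.0, 40.0, 50.0]
pred_dose = [10.2, 19.9, 30.1, 39.8, 50.2]
```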
18 pages, 880 KB  
Article
Comparative Evaluation of Five Multimodal Large Language Models for Medical Laboratory Image Recognition: Impact of Prompting Strategies on Diagnostic Accuracy
by Hui-Ru Yang, Kuei-Ying Lin, Ping-Chang Lin, Jih-Jin Tsai and Po-Chih Chen
Diagnostics 2026, 16(9), 1258; https://doi.org/10.3390/diagnostics16091258 - 22 Apr 2026
Abstract
Background: Multimodal large language models (MLLMs) show promise in medical imaging, but their performance is highly dependent on prompt engineering. This study systematically evaluates how different prompting strategies affect diagnostic accuracy in clinical laboratory image interpretation. Methods: We evaluated five MLLMs (ChatGPT-4o, Gemini 2.0 Flash, Claude 3.5 Sonnet, Grok-2, and Perplexity Pro (Claude 3.5 Sonnet)) using 177 proficiency testing images across three domains: blood smears (n = 78), urinalysis (n = 50), and parasitology (n = 49). Three prompting approaches were compared: (1) complex multi-choice prompts with 20 diagnostic options, (2) zero-shot open-ended prompts, and (3) two-step descriptive-reasoning prompts. Images were sourced from the Taiwan Society of Laboratory Medicine external quality assurance archives with expert consensus diagnoses. Results: Zero-shot prompting significantly outperformed complex multi-choice prompts across all models and domains (p < 0.001). With zero-shot prompts, Gemini achieved 78.5% overall accuracy (urinalysis: 92.0%; parasitology: 75.5%; blood smears: 64.1%), representing a 17% improvement over complex prompts. Two-step descriptive-reasoning prompts further improved blood smear accuracy by 8–12% for top-performing models, but showed minimal benefit in urinalysis and parasitology. The re-query mechanism (“please reconsider”) improved urinalysis accuracy by 7.6% but had a negligible effect on blood smears and parasitology. Conclusions: Prompting strategy critically determines MLLM diagnostic performance. Zero-shot approaches with minimal constraints consistently outperform complex multi-choice formats. The remarkable performance of general-purpose models in structured domains like urinalysis (>90% accuracy) demonstrates the considerable progress of multimodal AI. However, complex morphological tasks like blood smear interpretation require either specialized prompting techniques or domain-specific fine-tuning. 
These findings provide evidence-based guidance for optimizing AI integration in clinical laboratories. Full article
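Per-condition accuracy comparisons of this kind reduce to a grouped tally of correct predictions against the consensus diagnosis. A minimal sketch, where the record field names and the toy records are invented for illustration, not taken from the study:

```python
from collections import defaultdict

def accuracy_by_condition(records):
    """Group records by (model, domain, prompt) and return the fraction
    of predictions matching the expert consensus in each group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        key = (r["model"], r["domain"], r["prompt"])
        totals[key] += 1
        hits[key] += int(r["prediction"] == r["consensus"])
    return {key: hits[key] / totals[key] for key in totals}

# hypothetical toy records -- not data from the study
records = [
    {"model": "Gemini", "domain": "urinalysis", "prompt": "zero-shot",
     "prediction": "calcium oxalate crystal", "consensus": "calcium oxalate crystal"},
    {"model": "Gemini", "domain": "urinalysis", "prompt": "multi-choice",
     "prediction": "uric acid crystal", "consensus": "calcium oxalate crystal"},
]
accuracy = accuracy_by_condition(records)
```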
32 pages, 825 KB  
Systematic Review
Modular Engineered-Wood Housing in Low-Technification, Seismic-Prone Settings: A Systematic Review of Structural Performance, Digital Fabrication, and Low-Carbon Performance
by Emerson Porras, Walter Morales, Lidia Chang and Joseph Sucasaca
Sustainability 2026, 18(8), 4096; https://doi.org/10.3390/su18084096 - 20 Apr 2026
Abstract
This qualitative systematic review evaluates the potential of modular prefabricated OSB/plywood housing systems in low-technification, high-seismicity settings. These systems are promoted as low-carbon options for emerging contexts, and we assess how far the evidence supports that promise and under which conditions they can contribute to net-zero housing pathways. An adapted PRISMA 2020 workflow was applied to Scopus (TITLE-ABS, 2000–2025); 153 studies were synthesized in a table-first, coded matrix into axes for structural, digital fabrication, sustainability/circularity, and extrapolatable systems—supplemented by curated housing cases—with other EWPs used only for contrast. To address fragmentation and heterogeneity across domains, we developed a domain-based QA/QC instrument (STRUCTURAL, LCA, and FABRICATION) to judge whether studies provide minimally comparable evidence. Structural performance is relatively mature for certain patterns (calibrated FEM, cyclic tests, some 1:1 trials), whereas digital fabrication and LCA evidence remain partial: file-to-factory workflows rarely report verifiable QA/QC traceability, and most LCAs stop at A1–A3 with uneven treatment of A4, C/D, and biogenic carbon. Full convergence of adequate STRUCTURAL, LCA, and FABRICATION evidence within the same system type is rare, so both transferability to low-technification, seismic-prone settings and alignment with net-zero objectives must be characterized as conditional rather than established. The review identifies minimum multi-domain thresholds—technical robustness, whole-life LCA coverage, and verifiable QA/QC—as prerequisites for positioning modular OSB/plywood housing as a credible low-carbon pathway. These conclusions are limited by Scopus-only, English-language coverage and methodological heterogeneity, especially in the LCA. Full article
(This article belongs to the Topic Multiple Roads to Achieve Net-Zero Emissions by 2050)
18 pages, 2182 KB  
Article
Quantitative Evaluation of Pectoral Muscle Visualisation as an Indicator of Positioning Quality in Screening Mammography
by Maja Karić, Doris Šegota Ritoša and Petra Valković Zujić
Diagnostics 2026, 16(8), 1218; https://doi.org/10.3390/diagnostics16081218 - 19 Apr 2026
Abstract
Background/Objectives: Image quality of mammograms in breast cancer screening is strongly operator-dependent, particularly in the mediolateral oblique (MLO) projection where adequate visualisation of the pectoralis major muscle serves as a surrogate marker of posterior tissue inclusion. Current positioning assessment is predominantly qualitative and subject to inter-observer variability. This study aimed to quantitatively evaluate pectoral muscle visualisation and compression force variability among radiographers participating in a national screening programme. Methods: A retrospective observational study was conducted at Clinical Hospital Center Rijeka in January and February 2020. A total of 464 digital MLO mammograms were analysed. Images from nine radiographers were randomly retrieved from the institutional Picture Archiving and Communication System (PACS). Pectoral muscle length and width were measured using a standard clinical workstation with an integrated distance measurement tool. Additional variables included radiographer gender, breast side (LMLO vs. RMLO), imaging order, and applied compression force. Statistical analyses included Welch’s ANOVA, one-way ANOVA, t-tests, and appropriate post hoc comparisons. Results: Across all MLO projections, the combined mean pectoral muscle width was 41.0 ± 11.4 mm and the mean length was 134.3 ± 21.7 mm. Significant inter-operator differences were observed in pectoral muscle width (p < 0.001) and length (p = 0.023). Mean muscle width ranged from 35.0 mm to 54.2 mm, and mean length from 126.5 mm to 139.4 mm across radiographers. No significant differences were found with respect to radiographer gender, breast side, or imaging order (all p > 0.05). Compression force differed significantly among radiographers (p < 0.001), ranging from 117.0 ± 18.3 N to 184.8 ± 33.9 N. 
Conclusions: This study demonstrates significant inter-operator variability in both pectoral muscle visualisation and applied compression force during MLO mammography. These findings indicate that important technical aspects of mammographic examination remain strongly operator-dependent and highlight the need for more consistent positioning practices within screening programmes. Quantitative measurement of pectoral muscle dimensions may serve as a practical and objective approach for monitoring positioning quality and supporting quality assurance in routine clinical practice. Full article
(This article belongs to the Special Issue Recent Advances in Breast Cancer Imaging 2026)
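The inter-operator comparisons in the abstract above rely on Welch's ANOVA, which drops the equal-variance assumption of classical one-way ANOVA by weighting each group by n/s². A pure-Python sketch of the Welch F statistic and its degrees of freedom (the group values below are invented, not study data; compare F against the F distribution with (df1, df2) for a p-value):

```python
from statistics import mean, variance

def welch_anova_f(groups):
    """Welch's heteroscedastic one-way ANOVA. Returns (F, df1, df2)."""
    k = len(groups)
    n = [len(g) for g in groups]
    m = [mean(g) for g in groups]
    w = [n_i / variance(g) for n_i, g in zip(n, groups)]  # weights n_i / s_i^2
    W = sum(w)
    grand = sum(w_i * m_i for w_i, m_i in zip(w, m)) / W  # weighted grand mean
    lam = sum((1 - w_i / W) ** 2 / (n_i - 1) for w_i, n_i in zip(w, n))
    A = sum(w_i * (m_i - grand) ** 2 for w_i, m_i in zip(w, m)) / (k - 1)
    B = 1 + 2 * (k - 2) / (k ** 2 - 1) * lam
    return A / B, k - 1, (k ** 2 - 1) / (3 * lam)

# toy compression-force readings (N) for three hypothetical radiographers
forces = [[117, 118, 116, 119], [184, 185, 183, 186], [150, 151, 149, 152]]
```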
17 pages, 278 KB  
Review
Evaluating University Engagement as Institutional Quality: Between Standardization and Systemic Integration
by Enrique Riquelme Mella and Alfredo Valeria Celedón
Educ. Sci. 2026, 16(4), 649; https://doi.org/10.3390/educsci16040649 - 18 Apr 2026
Abstract
The incorporation of university engagement as a mandatory dimension of institutional accreditation has reconfigured the debate on quality in higher education, particularly in regulatory contexts such as Chile. This study develops a narrative review with a comparative analytical approach to examine the evaluative rationalities that structure the assessment of university engagement within national and international quality assurance frameworks. The analysis draws on Chilean regulatory documents and key international models, including the Standards and Guidelines for Quality Assurance in the European Higher Education Area (ESG), the HE-BCI system in the United Kingdom, the E3M Project, the Carnegie Community Engagement Classification, and recent literature on the evaluation of complex university–community engagement. The findings identify three structural tensions that organize contemporary evaluative frameworks: (1) standardization versus institutional diversity, reflecting the trade-off between comparability and contextual adequacy; (2) functional reduction versus systemic transversality, associated with the treatment of engagement as a discrete function or as a cross-cutting institutional dimension; and (3) fragmented evaluation versus institutional integration, linked to the degree of articulation between engagement, teaching, research, and governance within quality assurance systems. These tensions reveal that the evaluation of university engagement is not merely a technical issue of indicator design, but a structural problem embedded in institutional architecture and governance. Based on these findings, the article proposes a systemic evaluation model structured around three interrelated dimensions: strategic purpose, relational processes, and differentiated contribution and impact across temporal scales. 
This model seeks to reconcile the demands for comparability with the relational and contextual complexity of university engagement, while promoting its integration within the institutional quality cycle. The study contributes to positioning the Chilean case within the international debate on the third mission and advances a conceptual framework for evaluating university engagement that moves beyond indicator-based approaches toward a systemic understanding of institutional quality. Full article
(This article belongs to the Special Issue Quality Assessment of Higher Education Institutions)
20 pages, 1174 KB  
Review
Early Detection of Gastric Cancer: Linking Epidemiology, Pathophysiology, and Innovations in Digestive Endoscopy
by Marta La Milia, Mario Capasso, Tommaso Pessarelli, Guido Manfredi and Arnaldo Amato
Diseases 2026, 14(4), 148; https://doi.org/10.3390/diseases14040148 - 18 Apr 2026
Abstract
Background/Objectives: Despite substantial progress in understanding its pathophysiology and risk factors, gastric cancer remains a significant global health burden. Advances in endoscopic technology have improved the potential for early detection, yet variability in clinical practice persists. In this comprehensive narrative review, we summarize the most recent epidemiological trends in gastric pre-neoplastic and neoplastic lesions and critically appraise current evidence on optimizing endoscopic techniques and strategies for the detection of early gastric neoplasia, with an emphasis on emerging innovations. Methods: The relevant literature on epidemiology, risk factors, pathophysiology, and endoscopic management of GC was selectively reviewed based on the authors’ expertise and appraisal of contemporary evidence. Results: Marked global disparities persist in GC incidence, mortality, and stage at diagnosis. Interval GC—including missed lesions and so-called “true” interval cancers—remains a clinically relevant challenge and is frequently identified at advanced stages. These gaps are partly attributable to inconsistent quality in diagnostic esophagogastroduodenoscopy (EGD). High-quality EGD relies on adequate mucosal inspection time, systematic photodocumentation, optimal gastric preparation, and the use of standardized terminology, including mucosal visibility scores. Routine integration of chromoendoscopy and magnification techniques further enhances detection rates. Looking ahead, artificial intelligence holds promise as a transformative adjunct to standardize and augment real-time lesion recognition and quality assurance. Conclusions: High-quality endoscopic evaluation, coupled with tailored surveillance strategies, enables earlier detection of pre-neoplastic lesions and early gastric cancer, improving clinical outcomes. 
Future priorities include broadening access to high-quality endoscopy, harmonizing performance standards, and promoting continuous training alongside technological integration. Full article
27 pages, 3795 KB  
Systematic Review
Defects in Modular Building Construction: A Systematic Lifecycle Review and Implications for Sustainable Delivery
by Argaw Gurmu, Fatemeh Fallah Tafti, Anthony Mills and John Kite
Sustainability 2026, 18(8), 4000; https://doi.org/10.3390/su18084000 - 17 Apr 2026
Abstract
Despite its potential to enhance construction quality, efficiency, and sustainability, modular construction continues to experience defects that hinder its broader adoption. Understanding and mitigating defects is essential for maximising the sustainability benefits of modular construction by reducing material waste, minimising rework and improving lifecycle performance. Existing research remains fragmented, with limited synthesis integrating defects with their root causes across the project lifecycle. To address this gap, this study investigates defect types, lifecycle-based causes, and mitigation strategies in modular building projects through a PRISMA-guided systematic literature review of 61 peer-reviewed journal articles published between 2015 and 2025 and retrieved from Scopus and Web of Science. Six major defect categories were identified: geometric and dimensional; material and component; joint and connection integrity; envelope performance and durability; structural; and mechanical, electrical, and plumbing (MEP) defects, with geometric and dimensional defects emerging as the most prevalent, accounting for 26.7% of reported cases. Lifecycle root-cause mapping indicates that poor workmanship during on-site assembly is the dominant contributor, accounting for 44.1% of identified root causes, with manufacturing errors (26.8%) and design limitations (13.4%) acting as critical upstream sources. Mitigation strategies cluster into three groups: general recommendations (39% of reported strategies), mainly focusing on low-cost organisational measures such as logistics coordination and workforce training; structured risk-management frameworks (9.1%), including assembly sequencing and tolerance planning; and digital and data-driven technologies (51.9%), such as laser scanning, AI-based inspection, and digital twins, enabling proactive quality assurance across the lifecycle. 
The study proposes an integrated lifecycle–defect–mitigation framework to strengthen quality governance and advance sustainable modular delivery. Full article
15 pages, 3291 KB  
Article
Automated Segmentation of Digital Artifacts in Intraoral Photostimulable Phosphor Radiographs
by Ceyda Gizem Topal, Osman Yalçın, Hatice Tetik, Murat Ünal, Necla Bandirmali Erturk and Cemile Özlem Üçok
Diagnostics 2026, 16(8), 1194; https://doi.org/10.3390/diagnostics16081194 - 16 Apr 2026
Abstract
Background/Objectives: Intraoral radiographs acquired using photostimulable phosphor (PSP) plates are inherently susceptible to a wide spectrum of artifacts that can compromise diagnostic reliability and lead to unnecessary repeat exposures. Although structured taxonomies describing these artifacts have been proposed, automated methods capable of detecting and localizing multiple artifact types at the pixel level remain limited, particularly under realistic multi-class conditions. In this study, we address the problem of fine-grained, multi-class PSP artifact segmentation by systematically evaluating a deep learning-based framework and establishing a realistic baseline for this inherently challenging task. Methods: A retrospective, multi-center dataset comprising 1497 intraoral PSP radiographs (bitewing and periapical) collected from three institutions was analyzed. Pixel-level annotations were generated by expert oral and maxillofacial radiologists according to a standardized taxonomy consisting of four major artifact groups and 29 artifact classes, together with a background class. A 2D nnU-Net v2 architecture was employed as a baseline segmentation model. Model development was performed using 5-fold cross-validation, and performance was evaluated on an independent test set using Dice coefficient, Intersection over Union (IoU), Precision, and Recall. Results: Across all classes, the model achieved a mean Dice score of 0.0894 ± 0.0084 in cross-validation and 0.0952 on the independent test set, reflecting the intrinsic complexity of the task. Class-wise analysis revealed substantial variability, with higher performance in larger and visually distinctive artifacts, whereas small-scale, low-contrast, and underrepresented classes exhibited markedly reduced performance. Notably, several artifact categories were absent from the training data, resulting in a zero-shot scenario that directly constrained model generalization. 
Furthermore, segmentation performance demonstrated a strong dependency on class frequency, measured in terms of pixel distribution, underscoring the impact of severe class imbalance. Group-based evaluation showed relatively higher performance for pre-exposure and exposure-related artifacts compared to post-exposure and scanner-related categories. Conclusions: These findings demonstrate that large-scale, multi-class pixel-level segmentation of PSP artifacts represents a fundamentally challenging problem shaped by the combined effects of class imbalance, small object size, heterogeneous artifact morphology, and incomplete training representation. While the proposed framework confirms the feasibility of automated artifact localization, its current performance suggests greater immediate value as a quality control or screening support tool rather than a fully autonomous diagnostic system. By providing a comprehensive baseline and systematic analysis, this study establishes a benchmark for future research and highlights the critical need for imbalance-aware learning strategies, hierarchical modeling, and data-centric approaches to advance this field. Full article
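The Dice and IoU scores reported above are simple overlap ratios between predicted and annotated masks. A minimal sketch with masks represented as sets of pixel coordinates (the toy masks are invented; the study used nnU-Net's own evaluation, which this does not reproduce):

```python
def dice_iou(pred_pixels, truth_pixels):
    """Dice and IoU for one class, with masks given as sets of (row, col)
    pixel coordinates. Empty-vs-empty is defined here as a perfect 1.0."""
    pred, truth = set(pred_pixels), set(truth_pixels)
    if not pred and not truth:
        return 1.0, 1.0
    inter = len(pred & truth)
    dice = 2 * inter / (len(pred) + len(truth))
    iou = inter / len(pred | truth)
    return dice, iou
```

Per-class scores like these are then averaged over classes, which is why rare, tiny artifact classes with near-zero overlap drag the mean Dice down so sharply.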
20 pages, 1217 KB  
Article
Molecular Labelling Tool for Cereal Genetic Resources Management Derived from Barley and Tetraploid Wheat Genebank-Genomics Projects
by Workie Zegeye, Amanda Burridge, Ajay Siluveru, Simon Orford, Liz Sayers, Richard Goram, Richard Horler, Gary Barker and Noam Chayut
Plants 2026, 15(8), 1219; https://doi.org/10.3390/plants15081219 - 16 Apr 2026
Abstract
Globally, 5.94 million accessions are conserved across 867 genebanks, of which 41.5% (2.47 million) are cereal crop accessions. Only a small portion of global germplasm diversity has been marker-genotyped or genome-sequenced. Accurate identification of genebank accessions is essential to improve the efficiency and effectiveness of global genebanking. It is crucial for preserving the legacy knowledge associated with the germplasm and for maintaining its value to current plant science and breeding efforts. Existing practices generally fall into two categories: either expensive and complex, or inefficient, labour-intensive, and inaccurate. The first relies on high-resolution genomic sequences or saturated markers, while the second relies on morphological comparisons of regenerated plants with historical records. We propose a genotyping method based on a minimal set of Single Nucleotide Polymorphism (SNP) markers and exemplify its use on a genebank scale. We identified a small, effective set of SNPs that can differentiate between the global diversity of genebank accessions of barley (Hordeum vulgare and Hordeum spontaneum) and tetraploid wheat collections (Triticum turgidum) maintained at the Germplasm Resources National Capability at the John Innes Centre, UK. This approach offers a straightforward, automatable, and inexpensive alternative to traditional genebank crop descriptors used during seed regeneration and distribution. By establishing the minimal genomic resolution needed to distinguish genetically distinct accessions, we show that as few as 24 and 25 carefully chosen SNP markers for barley and durum wheat, respectively, can effectively differentiate individual accessions. Unlike morphology-based identification, which can detect mislabelling or contamination but often cannot prevent or correct such errors, our SNP-based molecular labelling enables error correction and the retrieval of lost germplasm identity. 
This study highlights how accuracy and reliability in germplasm management can be improved without costly whole-genome sequencing or resource-intensive analysis. We discuss the impact of this method on enhancing quality assurance in genebanks and its broader usefulness for the user community. Full article
(This article belongs to the Section Plant Genetic Resources)
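Choosing a minimal discriminating marker set is, at heart, a set-cover-style problem; one simple heuristic is to greedily add whichever SNP most increases the number of distinct genotype profiles. A sketch on an invented toy panel (the study's actual marker-selection procedure may differ):

```python
def greedy_snp_panel(genotypes):
    """genotypes: {accession: list of SNP calls, same length for all}.
    Greedily add the marker that most increases the number of distinct
    accession profiles until every accession is distinguishable (or no
    unused marker adds resolution)."""
    n_snps = len(next(iter(genotypes.values())))

    def n_profiles(markers):
        return len({tuple(calls[i] for i in markers)
                    for calls in genotypes.values()})

    chosen = []
    while n_profiles(chosen) < len(genotypes):
        best = max((i for i in range(n_snps) if i not in chosen),
                   key=lambda i: n_profiles(chosen + [i]),
                   default=None)
        if best is None or n_profiles(chosen + [best]) == n_profiles(chosen):
            break  # remaining accessions are identical at every unused SNP
        chosen.append(best)
    return chosen

# toy genotypes: SNPs 0 and 1 jointly distinguish all four accessions
toy = {
    "acc1": ["A", "A", "G", "T"],
    "acc2": ["A", "C", "G", "T"],
    "acc3": ["G", "A", "G", "T"],
    "acc4": ["G", "C", "G", "T"],
}
panel = greedy_snp_panel(toy)
```

Greedy selection is not guaranteed to be minimal, but it makes concrete why a few dozen well-chosen SNPs can suffice: each informative marker can, at best, multiply the number of distinguishable profiles.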
9 pages, 205 KB  
Article
Variety of Neuropsychological Deficits and Clinical Rehabilitation Course After Surgical Removal of Cerebral Meningioma Under Neuropsychological Therapy
by Stefanie Auer, Peter Gugel, Natalie Gdynia, Andreas Gratzer, Ingo Haase and Hans-Jürgen Gdynia
Brain Sci. 2026, 16(4), 416; https://doi.org/10.3390/brainsci16040416 - 15 Apr 2026
Abstract
Background: Meningiomas (MG) are the most common form of benign intracranial tumors. Neuropsychological deficits are often noticed preoperatively. After surgical removal, both improvements and persistent neuropsychological deficits have been reported. Here we present the neuropsychological characteristics of a larger patient group following acute treatment for meningioma. Methods: This retrospective study is part of an overall project investigating the postoperative characteristics and rehabilitation outcomes of 151 patients following surgical removal of MG. Patients were recruited at the neurological department of m&i-Fachklinik Enzensberg between 2019 and 2024. In addition to demographic data and tumor characteristics, the neuropsychological reports were evaluated by two experienced (neuro)psychologists. Results: 69 patients underwent standardized testing in the neuropsychology department and were thus included in the analysis. Upon admission, 52.2% of these patients exhibited attention deficits, 48% showed executive deficits, and 44% had memory impairments. No correlation was found between the extent of resection or the occurrence of complications during surgery and cognitive deficits. However, there was a trend showing that higher-grade tumors were more likely to cause cognitive impairment. The location of the tumor did not correlate with the impaired cognitive domains. At discharge, fewer patients exhibited attention deficits, and those that did had less severe symptoms. Conclusions: Meningiomas are considered to be easily treatable. However, our data show that neuropsychological impairments frequently occur after acute treatment, which may not be given sufficient attention in practice. Even mild cognitive impairments can lead to problems in everyday life or at work. We therefore recommend detailed neuropsychological diagnosis and, if necessary, therapy for all patients after acute treatment. Full article
(This article belongs to the Special Issue Outcome Measures in Rehabilitation)
30 pages, 7597 KB  
Article
Assessment of the Impact of Thermal Springs on Surface Water Quality in the Soummam Watershed (Algeria)
by Youcef Rassoul, Ali Berreksi, Mustapha Maza, Lazhar Belkhiri, Hamdi Bendif, Mohamed A. M. Ali and Lotfi Mouni
Water 2026, 18(8), 944; https://doi.org/10.3390/w18080944 - 15 Apr 2026
Abstract
This study presents the first watershed-scale assessment of the impact of thermal spring discharges on the hydrochemistry and water quality of the Soummam basin (northeastern Algeria). Fourteen stations were monitored during three campaigns (October 2024, December 2024 and March 2025), combining physicochemical analyses, hydrochemical diagrams, and water quality indices (WQI and IWQI). The results reveal a clear spatial gradient in water composition, from low-mineral Ca-HCO3/Ca-SO4 facies in upstream areas to highly mineralized Na-Cl facies associated with thermal springs (Sidi Yahia and Sillal). Electrical conductivity reaches up to 27,359 µS/cm, reflecting intense mineralization driven by evaporite dissolution and deep water–rock interaction. This thermomineral signature propagates downstream through mixing and ion exchange processes, leading to progressive salinity enrichment. Water quality indices highlight significant degradation in thermally influenced zones, with approximately 50% of samples unsuitable for drinking (WQI > 300) and more than 60% classified as highly restricted for irrigation (IWQI < 40). Cluster analysis further confirms the distinction between severely impacted, moderately affected, and relatively preserved waters. Overall, the findings demonstrate that thermal discharges represent a major and persistent driver of salinization, emphasizing the need to incorporate geothermal influences into water resource management strategies in semi-arid environments. Full article
(This article belongs to the Section Water Quality and Contamination)
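The WQI classification used above (e.g., WQI > 300 deemed unsuitable for drinking) typically follows a weighted arithmetic index. A minimal sketch, assuming the common formulation WQI = Σ wᵢqᵢ / Σ wᵢ with sub-indices qᵢ scaled against permissible limits; the parameters, weights, and standards below are illustrative assumptions, not values from the Soummam study:

```python
# Hedged sketch of a weighted arithmetic Water Quality Index (WQI).
# Parameter list, limits, and weights are hypothetical.

def wqi(measured, standards, weights):
    """measured, standards, weights: dicts keyed by parameter name."""
    num = 0.0
    den = 0.0
    for p, value in measured.items():
        q = 100.0 * value / standards[p]   # sub-index: % of permissible limit
        num += weights[p] * q
        den += weights[p]
    return num / den

sample = {"EC": 2500.0, "Cl": 600.0, "Na": 400.0}    # hypothetical measurements
limits = {"EC": 1500.0, "Cl": 250.0, "Na": 200.0}    # hypothetical standards
w = {"EC": 0.25, "Cl": 0.40, "Na": 0.35}             # hypothetical weights

print(round(wqi(sample, limits, w), 1))
```

With these inputs the index already exceeds 200, illustrating how strongly a salinity-dominated sample is penalized once chloride and sodium pass their limits.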
12 pages, 233 KB  
Article
Analysis of Interrater Reliability and Interpretive Discrepancies in Polysomnography Scoring Across Clinical Subgroups
by Ji Ho Choi, Tae Kyoung Ha, Ji Eun Moon and Seockhoon Chung
Life 2026, 16(4), 669; https://doi.org/10.3390/life16040669 - 14 Apr 2026
Abstract
Background: Polysomnography (PSG) is the gold standard for diagnosing sleep disorders. However, the subjectivity of manual scoring can lead to inter-scorer variability, undermining diagnostic accuracy and subsequent clinical decisions. This study aims to quantitatively assess scoring concordance among multiple scorers across various clinical subgroups to identify the factors that contribute to interpretive discrepancies. Methods: We conducted a retrospective analysis of overnight diagnostic PSG data from adult patients at a tertiary university hospital sleep center. Interrater reliability was evaluated by three independent expert scorers for 30 subjects selected through stratified random sampling. The polysomnographic data were independently and blindly scored according to the American Academy of Sleep Medicine criteria, focusing on sleep stages, arousals, respiratory events, and leg movements, all scored in 30 s epochs. Interrater agreement was measured using Fleiss’ κ, along with 95% confidence intervals, and included subgroup analyses by diagnostic category. Results: The analysis included a total of 28,291 epochs from 30 adults across normal, insomnia, obstructive sleep apnea (OSA) [mild–severe], and periodic limb movement (PLM) disorder subgroups. The overall interrater agreement for sleep staging among the three scorers was nearly perfect (Fleiss’ κ = 0.932), with the highest concordance observed in stages W, N2, and R, and excellent agreement in stages N1 and N3. Respiratory events showed particularly high reliability, with near-perfect agreement for apnea (κ = 0.955) and substantial agreement for hypopnea, arousals, and PLMs. Pairwise analyses indicated the highest concordance between scorer 1 and scorer 3, while the agreement between scorer 1 and scorer 2 was lower, particularly for detecting arousals and limb movements. Subgroup analyses showed the highest and most stable agreement in moderate OSA, whereas severe OSA exhibited reduced reliability for sleep staging and arousal scoring, indicating increased scoring complexity with greater sleep fragmentation. Conclusions: Although expert PSG scoring demonstrates high overall reliability, significant variability persists in complex cases like severe OSA. These findings underscore the necessity for structured quality assurance and automated tools to improve diagnostic consistency in clinical practice. Full article
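The headline statistic above, Fleiss’ κ, can be computed directly from epoch-level label counts across the three scorers. A minimal sketch of the standard formula (the epoch labels below are invented examples, not the study’s data):

```python
# Fleiss' kappa for m raters over N items: kappa = (P_bar - P_e) / (1 - P_e),
# where P_bar is the mean per-item agreement and P_e the chance agreement
# from the marginal category proportions.
from collections import Counter

def fleiss_kappa(ratings, categories):
    """ratings: list of per-item label lists, each from the same m raters."""
    m = len(ratings[0])                     # raters per item
    n_items = len(ratings)
    counts = [Counter(item) for item in ratings]
    # mean observed per-item agreement
    p_bar = sum(
        (sum(c * c for c in cnt.values()) - m) / (m * (m - 1))
        for cnt in counts
    ) / n_items
    # chance agreement from marginal category proportions
    p_e = sum(
        (sum(cnt[cat] for cnt in counts) / (n_items * m)) ** 2
        for cat in categories
    )
    return (p_bar - p_e) / (1 - p_e)

# Three scorers labeling three 30 s epochs (hypothetical):
epochs = [["W", "W", "W"], ["N2", "N2", "N2"], ["W", "W", "N2"]]
print(round(fleiss_kappa(epochs, ["W", "N2"]), 3))
```

A single disagreeing epoch already pulls κ well below 1, which is why per-subgroup κ values (as reported for severe OSA) are sensitive to even a modest rate of discordant epochs.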
19 pages, 12679 KB  
Article
Lightweight Semantic-Guided FCOS for In-Line Micro-Defect Inspection in Semiconductor Manufacturing
by Tao Zhang, Shichang Yan and Gaoe Qin
Micromachines 2026, 17(4), 473; https://doi.org/10.3390/mi17040473 - 14 Apr 2026
Abstract
The relentless miniaturization of semiconductor components and Printed Circuit Boards (PCBs) has rendered Automated Optical Inspection (AOI) of micro-defects a critical bottleneck in modern manufacturing and metrology. While in-line inspection systems offer economically viable and scalable quality control solutions, they impose stringent constraints on both inference latency and detection robustness—particularly for diminutive, sparsely distributed defects (e.g., mouse bites, pinholes) amidst complex, repetitive circuit topologies. To bridge this gap, we present a semantic-enhanced FCOS framework specifically engineered for micro-defect inspection. Our approach introduces two synergistic innovations: (1) a Semantic-Guided Upsampling Unit (SGU) that adaptively reweights channel–spatial features to reconcile the semantic disparity between shallow textural details and deep contextual representations; and (2) a Sparse Center-ness Calibration (SCC) module that enforces high-confidence, spatially sparse supervision to sharpen localization precision and suppress false positives. The SGU is integrated within a Progressive Semantic-Enhanced Feature Pyramid Network (PSE-FPN) that extends multi-scale representations to stride-4 (P2) resolution, while the SCC module is embedded directly into the detection head. Comprehensive evaluations on MS COCO and the real-world DeepPCB dataset validate the efficacy of our design. On COCO, our model achieves 41.8% AP with real-time throughput of 28 FPS on a single NVIDIA 1080Ti GPU. A lightweight variant further attains 41.6% AP at 42 FPS, accommodating high-throughput production environments. For PCB defect detection, the framework delivers 98.7% mAP@0.5, substantially outperforming contemporary detectors. These results demonstrate that semantics-aware, lightweight architectures enable scalable, real-time quality assurance in semiconductor manufacturing. Full article
(This article belongs to the Special Issue Emerging Technologies and Applications for Semiconductor Industry)
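For context on the Sparse Center-ness Calibration (SCC) module above: the center-ness target it calibrates is, in the standard FCOS head, derived from a point’s distances to the four sides of its ground-truth box. The formula below is the original FCOS definition, not the paper’s SCC variant:

```python
import math

def centerness(l, t, r, b):
    """Standard FCOS center-ness target from a location's distances to the
    left/top/right/bottom edges of its assigned ground-truth box.
    Equals 1.0 at the box center and decays toward 0 near the edges."""
    return math.sqrt(
        (min(l, r) / max(l, r)) * (min(t, b) / max(t, b))
    )

print(centerness(10, 10, 10, 10))            # exact box center -> 1.0
print(round(centerness(2, 10, 18, 10), 3))   # off-center horizontally
```

Suppressing low-center-ness predictions is what down-weights poorly centered boxes at inference; SCC, per the abstract, additionally enforces sparse high-confidence supervision on this signal.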
15 pages, 3426 KB  
Article
Rapid and Non-Destructive Detection of Moisture Content in Dried Areca Nuts Based on Near-Infrared Spectroscopy Combined with Machine Learning
by Jiahui Dai, Shiping Wang, Xin Gan, Yanan Wang, Wenting Dai, Xiaoning Kang and Ling-Yan Su
Foods 2026, 15(8), 1359; https://doi.org/10.3390/foods15081359 - 14 Apr 2026
Abstract
Moisture content is a key quality attribute in dried areca nuts, affecting subsequent processing performance and storage stability, yet routine measurement by oven-drying is time-consuming and destructive. This study developed a rapid and non-destructive method for determining moisture content in dried areca nuts by integrating near-infrared spectroscopy with chemometric and machine learning-assisted methodologies. Various spectral preprocessing methods, feature wavelength selection algorithms, and modeling approaches were compared. The results indicated that Multiplicative Scatter Correction (MSC) most effectively eliminated physical scattering interference. The Partial Least Squares Regression (PLSR) model established using full-wavelength spectra demonstrated optimal predictive performance. It achieved a coefficient of determination for the prediction set (Rp2), root mean square error of prediction (RMSEP), and residual predictive deviation (RPD) of 0.9639, 0.1960, and 10.3461, respectively, indicating excellent predictive accuracy and robustness. Feature wavelength selection did not enhance model performance in this study, which can be attributed to the broad absorption bands of water in the near-infrared spectrum and its complex interactions with the sample matrix where the full spectrum data retains essential information more comprehensively. This research provides a reliable and practical technical means for moisture management in areca nuts, offering important support for quality assurance and standardized production practices within the areca industry. Full article
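The Multiplicative Scatter Correction (MSC) step above can be sketched as follows: each spectrum is regressed against the mean spectrum of the calibration set, and the fitted offset and slope (the additive and multiplicative scatter terms) are removed. The spectra below are synthetic, not NIR data from the study:

```python
import numpy as np

def msc(spectra):
    """Multiplicative Scatter Correction.
    spectra: (n_samples, n_wavelengths) array.
    Each spectrum x is modeled as x ~ a + b * reference, where the
    reference is the mean spectrum; the correction returns (x - a) / b."""
    ref = spectra.mean(axis=0)
    corrected = np.empty_like(spectra, dtype=float)
    for i, x in enumerate(spectra):
        b, a = np.polyfit(ref, x, deg=1)   # slope, intercept
        corrected[i] = (x - a) / b
    return corrected

# Synthetic check: two scatter-distorted copies of one spectral shape
# collapse onto a common curve after MSC.
base = np.linspace(0.1, 1.0, 50)
batch = np.vstack([0.2 + 1.5 * base, -0.1 + 0.8 * base])
out = msc(batch)
print(np.allclose(out[0], out[1]))   # prints True
```

This is exactly the property the abstract relies on: once additive and multiplicative scatter are removed, the remaining variance is chemical (water absorption), which is what the PLSR model then regresses against moisture content.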