Search Results (731)

Search Parameters:
Keywords = quality assurance system

28 pages, 1639 KB  
Article
A Generative AI-Based Framework for Proactive Quality Assurance and Auditing
by Galina Ilieva, Tania Yankova, Vera Hadzhieva and Yuliy Iliev
Appl. Sci. 2026, 16(9), 4237; https://doi.org/10.3390/app16094237 - 26 Apr 2026
Viewed by 185
Abstract
Generative artificial intelligence (AI) is increasingly used to support decision-making in manufacturing quality assurance (QA), but its adoption raises concerns regarding governance, traceability, and auditability. This paper proposes a proactive framework that integrates generative AI into quality management and auditing while preserving standards alignment and human oversight. The framework structures quality activities across supplier, in-process, and post-market domains and across three hierarchical levels—product, process, and operation—to link quality outcomes with documentary evidence requirements. A proof-of-concept (PoC) study in electronics manufacturing focused on New Product Introduction (NPI) planning and compared two parallel workflows: an expert QA team and a generative AI-assisted chatbot workflow. Within a fixed time window, both workflows produced an aligned Process Failure Mode and Effects Analysis (PFMEA), Control Plan, supplier Production Part Approval Process (PPAP) request package, and internal audit evidence pack. Three independent experts evaluated the integrated deliverable package using five indices covering documentation quality and audit readiness, detection and containment logic, process capability and stability, governance and provenance safeguards, and execution (time) efficiency. Compared with the expert package, the generative AI–assisted workflow produced more traceable, governance-rich documentation (ownership, versioning, clause-to-evidence links) and reduced manual audit-evidence consolidation, supporting quality planning and change-control readiness. Full article
18 pages, 2182 KB  
Article
Quantitative Evaluation of Pectoral Muscle Visualisation as an Indicator of Positioning Quality in Screening Mammography
by Maja Karić, Doris Šegota Ritoša and Petra Valković Zujić
Diagnostics 2026, 16(8), 1218; https://doi.org/10.3390/diagnostics16081218 - 19 Apr 2026
Viewed by 249
Abstract
Background/Objectives: Image quality of mammograms in breast cancer screening is strongly operator-dependent, particularly in the mediolateral oblique (MLO) projection where adequate visualisation of the pectoralis major muscle serves as a surrogate marker of posterior tissue inclusion. Current positioning assessment is predominantly qualitative and subject to inter-observer variability. This study aimed to quantitatively evaluate pectoral muscle visualisation and compression force variability among radiographers participating in a national screening programme. Methods: A retrospective observational study was conducted at Clinical Hospital Center Rijeka in January and February 2020. A total of 464 digital MLO mammograms were analysed. Images from nine radiographers were randomly retrieved from the institutional Picture Archiving and Communication System (PACS). Pectoral muscle length and width were measured using a standard clinical workstation with an integrated distance measurement tool. Additional variables included radiographer gender, breast side (LMLO vs. RMLO), imaging order, and applied compression force. Statistical analyses included Welch’s ANOVA, one-way ANOVA, t-tests, and appropriate post hoc comparisons. Results: Across all MLO projections, the combined mean pectoral muscle width was 41.0 ± 11.4 mm and the mean length was 134.3 ± 21.7 mm. Significant inter-operator differences were observed in pectoral muscle width (p < 0.001) and length (p = 0.023). Mean muscle width ranged from 35.0 mm to 54.2 mm, and mean length from 126.5 mm to 139.4 mm across radiographers. No significant differences were found with respect to radiographer gender, breast side, or imaging order (all p > 0.05). Compression force differed significantly among radiographers (p < 0.001), ranging from 117.0 ± 18.3 N to 184.8 ± 33.9 N. Conclusions: This study demonstrates significant inter-operator variability in both pectoral muscle visualisation and applied compression force during MLO mammography. These findings indicate that important technical aspects of mammographic examination remain strongly operator-dependent and highlight the need for more consistent positioning practices within screening programmes. Quantitative measurement of pectoral muscle dimensions may serve as a practical and objective approach for monitoring positioning quality and supporting quality assurance in routine clinical practice. Full article
(This article belongs to the Special Issue Recent Advances in Breast Cancer Imaging 2026)
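The inter-operator comparison summarised in the abstract above (one-way and Welch's ANOVA across radiographers) can be reproduced in outline as follows. This is a minimal sketch assuming a long-format table with one row per mammogram; the file and column names (mlo_measurements.csv, radiographer, muscle_width_mm) are hypothetical and not taken from the study.

```python
# Sketch of the inter-operator analysis: compare pectoral muscle width
# across radiographers with one-way ANOVA and Welch's ANOVA.
import pandas as pd
from scipy import stats
import pingouin as pg

# Hypothetical long-format data: one row per MLO image.
df = pd.read_csv("mlo_measurements.csv")  # columns: radiographer, muscle_width_mm

# One-way ANOVA (assumes equal variances across radiographers).
groups = [g["muscle_width_mm"].values for _, g in df.groupby("radiographer")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"one-way ANOVA: F={f_stat:.2f}, p={p_value:.4f}")

# Welch's ANOVA (no equal-variance assumption), as used in the study.
print(pg.welch_anova(data=df, dv="muscle_width_mm", between="radiographer"))
```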

17 pages, 278 KB  
Review
Evaluating University Engagement as Institutional Quality: Between Standardization and Systemic Integration
by Enrique Riquelme Mella and Alfredo Valeria Celedón
Educ. Sci. 2026, 16(4), 649; https://doi.org/10.3390/educsci16040649 - 18 Apr 2026
Viewed by 202
Abstract
The incorporation of university engagement as a mandatory dimension of institutional accreditation has reconfigured the debate on quality in higher education, particularly in regulatory contexts such as Chile. This study develops a narrative review with a comparative analytical approach to examine the evaluative rationalities that structure the assessment of university engagement within national and international quality assurance frameworks. The analysis draws on Chilean regulatory documents and key international models, including the Standards and Guidelines for Quality Assurance in the European Higher Education Area (ESG), the HE-BCI system in the United Kingdom, the E3M Project, the Carnegie Community Engagement Classification, and recent literature on the evaluation of complex university–community engagement. The findings identify three structural tensions that organize contemporary evaluative frameworks: (1) standardization versus institutional diversity, reflecting the trade-off between comparability and contextual adequacy; (2) functional reduction versus systemic transversality, associated with the treatment of engagement as a discrete function or as a cross-cutting institutional dimension; and (3) fragmented evaluation versus institutional integration, linked to the degree of articulation between engagement, teaching, research, and governance within quality assurance systems. These tensions reveal that the evaluation of university engagement is not merely a technical issue of indicator design, but a structural problem embedded in institutional architecture and governance. Based on these findings, the article proposes a systemic evaluation model structured around three interrelated dimensions: strategic purpose, relational processes, and differentiated contribution and impact across temporal scales. This model seeks to reconcile the demands for comparability with the relational and contextual complexity of university engagement, while promoting its integration within the institutional quality cycle. The study contributes to positioning the Chilean case within the international debate on the third mission and advances a conceptual framework for evaluating university engagement that moves beyond indicator-based approaches toward a systemic understanding of institutional quality. Full article
(This article belongs to the Special Issue Quality Assessment of Higher Education Institutions)
19 pages, 12679 KB  
Article
Lightweight Semantic-Guided FCOS for In-Line Micro-Defect Inspection in Semiconductor Manufacturing
by Tao Zhang, Shichang Yan and Gaoe Qin
Micromachines 2026, 17(4), 473; https://doi.org/10.3390/mi17040473 - 14 Apr 2026
Viewed by 369
Abstract
The relentless miniaturization of semiconductor components and Printed Circuit Boards (PCBs) has rendered Automated Optical Inspection (AOI) of micro-defects a critical bottleneck in modern manufacturing and metrology. While in-line inspection systems offer economically viable and scalable quality control solutions, they impose stringent constraints on both inference latency and detection robustness—particularly for diminutive, sparsely distributed defects (e.g., mouse bites, pinholes) amidst complex, repetitive circuit topologies. To bridge this gap, we present a semantic-enhanced FCOS framework specifically engineered for micro-defect inspection. Our approach introduces two synergistic innovations: (1) a Semantic-Guided Upsampling Unit (SGU) that adaptively reweights channel–spatial features to reconcile the semantic disparity between shallow textural details and deep contextual representations; and (2) a Sparse Center-ness Calibration (SCC) module that enforces high-confidence, spatially sparse supervision to sharpen localization precision and suppress false positives. The SGU is integrated within a Progressive Semantic-Enhanced Feature Pyramid Network (PSE-FPN) that extends multi-scale representations to stride-4 (P2) resolution, while the SCC module is embedded directly into the detection head. Comprehensive evaluations on MS COCO and the real-world DeepPCB dataset validate the efficacy of our design. On COCO, our model achieves 41.8% AP with real-time throughput of 28 FPS on a single NVIDIA 1080Ti GPU. A lightweight variant further attains 41.6% AP at 42 FPS, accommodating high-throughput production environments. For PCB defect detection, the framework delivers 98.7% mAP@0.5, substantially outperforming contemporary detectors. These results demonstrate that semantics-aware, lightweight architectures enable scalable, real-time quality assurance in semiconductor manufacturing. Full article
(This article belongs to the Special Issue Emerging Technologies and Applications for Semiconductor Industry)
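As an illustration of the semantic-guided reweighting idea summarised above, the following PyTorch module upsamples a deep feature map and uses it to gate a shallow map along channel and spatial dimensions before fusion. It is an assumption-laden sketch of the general technique, not the authors' SGU implementation; the module name, layer choices, and channel count are illustrative only.

```python
# Illustrative channel-spatial reweighting upsample (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticGuidedUpsample(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Channel gate driven by globally pooled deep (semantic) features.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial gate driven by the upsampled deep features.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
        # Upsample the semantic map to the shallow (high-resolution) grid.
        deep_up = F.interpolate(deep, size=shallow.shape[-2:],
                                mode="bilinear", align_corners=False)
        # Reweight shallow textural features by channel and spatial attention.
        weighted = shallow * self.channel_gate(deep_up) * self.spatial_gate(deep_up)
        return self.fuse(weighted + deep_up)

# Example: fuse a stride-4 shallow map with an upsampled stride-8 semantic map.
sgu = SemanticGuidedUpsample(channels=256)
out = sgu(torch.randn(1, 256, 128, 128), torch.randn(1, 256, 64, 64))
```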

7 pages, 1325 KB  
Proceeding Paper
Determining the Freshness of Milkfish (Chanos chanos) Using Electronic Nose
by John Paulo D. Fernandez, Juhyoung Lee and Meo Vincent C. Caya
Eng. Proc. 2026, 134(1), 44; https://doi.org/10.3390/engproc2026134044 - 13 Apr 2026
Viewed by 265
Abstract
Milkfish (Chanos chanos), a widely consumed fish in the Philippines, is highly perishable, and conventional freshness assessments based on physical and olfactory inspection are often subjective and unreliable. To address this, we introduce an electronic nose system for the accurate classification of milkfish freshness based on spoilage-related gas emissions, namely methane, ammonia, hydrogen sulfide, and trimethylamine. The system integrates MQ-series sensors and a Taguchi gas sensor with an Arduino Nano and a Raspberry Pi 5 for data acquisition and signal processing. The k-nearest neighbor algorithm was used for classification, and its performance was evaluated using a confusion matrix. The data were gathered from 100 samples, consisting of 50 fresh and 50 spoiled fish. The evaluation demonstrated a peak classification accuracy of 92% for k-values between 1 and 15, confirming the system’s reliability. These findings indicate the system’s potential as a practical, low-cost, and efficient tool for enhancing consumer safety and quality assurance in the fish supply chain. Full article
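The k-nearest neighbor classification step summarised above can be sketched as follows, assuming the sensor readings have already been logged to a table. The file name and feature columns (enose_readings.csv, methane, ammonia, h2s, tma, label) are hypothetical; the authors' acquisition and calibration pipeline is not reproduced here.

```python
# Sketch of the kNN freshness classification and confusion-matrix evaluation.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

# Hypothetical dataset: one row per fish sample, gas-sensor readings as features.
df = pd.read_csv("enose_readings.csv")  # columns: methane, ammonia, h2s, tma, label
X = df[["methane", "ammonia", "h2s", "tma"]]
y = df["label"]  # "fresh" or "spoiled"

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# The abstract reports sweeping k from 1 to 15; evaluate each value.
for k in range(1, 16):
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    pred = clf.predict(X_test)
    print(k, accuracy_score(y_test, pred))
    print(confusion_matrix(y_test, pred, labels=["fresh", "spoiled"]))
```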

29 pages, 2099 KB  
Review
Downstream Purification Strategies for Virus-like Particles: A Systematic Review of Structure Preservation, Impurity Control, and Viral Safety
by Jingchao Zhang and Chen Chen
Microorganisms 2026, 14(4), 858; https://doi.org/10.3390/microorganisms14040858 - 10 Apr 2026
Viewed by 771
Abstract
Virus-like particles (VLPs), nanoscale self-assembled structures lacking viral genetic material, have emerged as a versatile platform for vaccines, targeted delivery systems, and gene-editing applications owing to their strong immunogenicity, favorable biosafety profile, and high engineerability. However, the complex architecture of VLPs, their significant size heterogeneity, and the diversity of process- and product-related impurities generated in different expression systems make downstream purification a major bottleneck limiting product quality, yield, and manufacturability. This review systematically discusses advanced downstream purification strategies for VLPs from the perspective of three major objectives: preservation of structure and biological activity, control of product heterogeneity, and assurance of viral safety. First, strategies for maintaining VLP integrity and function are examined, including optimization of solution conditions, adoption of gentle yet efficient separation operations, and integration of process analytical technology (PAT) to reduce process-induced damage. Second, the review summarizes multi-step purification approaches—spanning clarification, ultrafiltration/diafiltration (UF/DF), chromatography, and disassembly/reassembly—to remove host cell proteins, host cell DNA, and product-related impurities while improving particle homogeneity and stability. Third, viral safety is discussed primarily from the perspective of downstream virus clearance under host-dependent risk, with particular attention to orthogonal clearance steps tailored to VLP properties and expression systems such as CHO cells and insect cell–baculovirus platforms. Overall, this review provides a CQA-oriented framework and practical guidance for the development of robust, scalable, and GMP-compliant downstream purification processes for VLP-based products. Full article
(This article belongs to the Collection Feature Papers in Virology)

15 pages, 806 KB  
Article
Relational Capacity and Fragmented Authority: Coordination and Power in Indonesia’s Decentralized Regulatory Governance
by Heny Sulistiyowati, Muhammad Saleh S. Ali and Imam Mujahidin Fahmid
Sustainability 2026, 18(8), 3780; https://doi.org/10.3390/su18083780 - 10 Apr 2026
Viewed by 375
Abstract
This study examines how coordination, power, and interdependence shape regulatory governance in the decentralized edible bird’s nest (EBN) sector in Pulang Pisau, Indonesia. While decentralization is often associated with improved responsiveness and local adaptability, it frequently produces fragmented regulatory systems in which authority is distributed without effective coordination. Using an actor-centered qualitative design combined with the MACTOR method, this study analyzes influence–dependence relations, objective alignment, and coordination bottlenecks across key actors. The findings show that regulatory performance is shaped less by formal mandates than by relational positioning within the governance system. Actors controlling technical verification and documentary gateways occupy high-influence positions, while licensing authorities remain operationally dependent. Although most actors share common objectives—such as hygiene, quality assurance, and traceability—these are pursued through fragmented procedures, resulting in coordination failures and regulatory inequality. Producers bear the greatest compliance burdens despite having limited influence over regulatory processes. The study introduces the concept of relational administrative capacity to explain how decentralized governance outcomes depend on the alignment of authority, expertise, and procedural sequencing across interdependent actors. The findings suggest that improving regulatory performance requires strengthening coordination architectures rather than adding new rules. Full article

27 pages, 972 KB  
Article
A Structural Equation Modelling Approach to Improving Progress Payment Systems Through Common Data Environment (CDE) Implementation
by Reneiloe Malomane, Innocent Musonda and Rehema Joseph Monko
Buildings 2026, 16(7), 1415; https://doi.org/10.3390/buildings16071415 - 3 Apr 2026
Viewed by 376
Abstract
The construction industry in South Africa faces challenges with the current payment system used to manage progress payments. Contractors often experience delays in progress payments for completed works. These late payments stem from the improper management of progress payment procedures, namely, information, communication, and collaboration, as well as corruption. This study proposes the integration of a common data environment (CDE), as it has emerged as central to managing information and improving communication and collaboration in a transparent manner. However, the implementation of CDE faces challenges in the industry. Therefore, the study aimed to develop a model based on the implementation of CDE to uphold efficiency in the management of payment systems for progress payments. A systematic review was conducted to examine the enabling factors, characteristics of CDE in managing progress payment challenges, and benefits of integrating a payment system in a CDE platform. Furthermore, the study utilised questionnaire surveys to purposively collect data from construction professionals who implemented CDE in their projects. From 201 valid questionnaire responses, a structural equation model was developed; testing for reliability, validity, model fit, and hypotheses was conducted using AMOS and ADANCO. The findings revealed that enabling factors such as quality technology and a quality assurance team are the strongest enablers, followed by training and policy. The findings further predict that CDE integration will improve the management of the payment system by 0.589. The study provides theoretical and practical guidance for researchers, policy makers, and construction professionals seeking to strengthen CDE-based payment system frameworks in South Africa. Furthermore, it is recommended to adopt the method of questionnaire surveys and SEM to validate variables and establish their influence on one another to improve generalisation. Full article
(This article belongs to the Special Issue Research on BIM—Integrated Construction Operation Simulation)

26 pages, 833 KB  
Article
Design of a RAG-Based Customer Service Chatbot Enhanced with Knowledge Graph and GPT Evaluation: A Case Study in the Import Trade Industry
by Nien-Lin Hsueh and Wei-Che Lin
Software 2026, 5(2), 15; https://doi.org/10.3390/software5020015 - 2 Apr 2026
Viewed by 758
Abstract
Amid the wave of digital transformation and customer service automation, traditional chatbots are increasingly challenged by their inability to handle unstructured data and complex queries. This issue is particularly critical in the import trade industry, where customer service representatives must respond promptly to diverse inquiries involving quality anomalies, order tracking, and product substitution. Existing rule-based or keyword-driven chatbots often fail to provide accurate responses, resulting in reduced customer satisfaction and increased operational burdens. This study proposes and implements a “Retrieval-Augmented Generation (RAG)-based Customer Service Chatbot,” integrating the RAG framework with a Neo4j-based knowledge graph, specifically tailored for the import trade domain. The system constructs a dedicated QA dataset, knowledge graph, and dynamic learning mechanism. It semantically vectorizes internal documents, meeting records, quality assurance procedures, and historical dialogues, establishing interrelated knowledge nodes to enhance the chatbot’s comprehension and response accuracy. The study also incorporates GPT-based response evaluation and a high-score caching strategy, enabling dynamic learning and knowledge enhancement. Experiments were conducted using 101 representative enterprise-level queries across six categories, reflecting real-world operational scenarios and inquiry needs. The results demonstrate that the combination of knowledge graphs and RAG technology effectively reduces AI hallucinations and improves response coverage and accuracy, thereby addressing complex problems in customer service applications. This paper not only presents a feasible AI implementation model for the import trading industry but also offers a practical architectural reference for domain-specific knowledge management in the import trade and allied sectors. Full article
(This article belongs to the Topic Applications of NLP, AI, and ML in Software Engineering)
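The retrieval, generation, evaluation, and high-score caching loop described above can be outlined as a skeleton. The helper callables (retrieve, generate, evaluate) stand in for the paper's vector/Neo4j retrieval, LLM generation, and GPT-based scoring components and are purely hypothetical; the caching threshold is likewise an assumption.

```python
# Structural sketch of a RAG answer loop with GPT-based scoring and caching.
from typing import Callable

def answer_query(
    query: str,
    cache: dict[str, str],
    retrieve: Callable[[str], list[str]],       # vector + knowledge-graph retrieval
    generate: Callable[[str, list[str]], str],  # LLM answer conditioned on context
    evaluate: Callable[[str, str], float],      # GPT-based response score in [0, 1]
    score_threshold: float = 0.8,               # assumed cut-off for caching
) -> str:
    # High-score caching: reuse previously validated answers for repeated queries.
    if query in cache:
        return cache[query]

    context = retrieve(query)           # documents, QA pairs, graph neighbours
    answer = generate(query, context)   # grounded generation to curb hallucination
    score = evaluate(query, answer)     # automatic quality check of the response

    if score >= score_threshold:
        cache[query] = answer           # dynamic learning: keep high-scoring answers
    return answer
```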

43 pages, 1140 KB  
Review
Industry 4.0-Enabled Friction Stir Welding: A Review of Intelligent Joining for Aerospace and Automotive Applications
by Sipokazi Mabuwa, Katleho Moloi and Velaphi Msomi
Metals 2026, 16(4), 390; https://doi.org/10.3390/met16040390 - 1 Apr 2026
Viewed by 588
Abstract
Friction stir welding (FSW) is a critical solid-state joining process for lightweight and high-performance metallic structures, particularly in aerospace and automotive manufacturing, yet conventional implementations remain largely dependent on offline parameter optimization and open-loop control. The purpose of this review is to examine how Industry 4.0 technologies enable the transition of FSW from a parameter-driven process into an intelligent, adaptive, and increasingly autonomous manufacturing capability. A structured review methodology was employed, including systematic literature selection and synthesis of recent research on smart sensing, industrial internet of things (IIoT), data analytics, machine learning, digital twins, automation, robotics, and human–machine interaction in FSW. The review reveals that Industry 4.0 integration enables real-time process monitoring, predictive quality assurance, closed-loop control, and virtual process optimization, resulting in improved weld quality, reliability, productivity, and scalability. Significant benefits are observed for safety-critical aerospace components and high-throughput automotive production, where adaptability and consistency are essential. However, persistent challenges remain in data standardization, model generalization, real-time digital twin integration, interoperability, cybersecurity, and workforce readiness. This review concludes that addressing these challenges through interdisciplinary research, standardization efforts, and human-centered system design is essential for enabling adaptive and data-driven FSW systems. The findings position intelligent FSW as a foundational technology for smart, resilient, and sustainable metal manufacturing in the Industry 4.0 era. Full article
(This article belongs to the Section Welding and Joining)

31 pages, 2259 KB  
Review
Molecular Monitoring in Soil Bioremediation: From Genetic Potential to Verified Pathway Operation
by Mariusz Cycoń
Int. J. Mol. Sci. 2026, 27(7), 3111; https://doi.org/10.3390/ijms27073111 - 29 Mar 2026
Viewed by 434
Abstract
Sequence-based tools have greatly improved the molecular description of soil bioremediation, but detection alone cannot confirm that a contaminant is being degraded by a defined pathway. In soils, bioavailability limitations, redox microsites, relic DNA, gene mobility, and community restructuring can decouple gene presence from reaction flux. This review synthesizes an operational framework that separates three inferential levels: pathway potential, in situ activity, and verified pathway operation. The framework links inoculant fate, functional gene abundance, gene expression, pathway reconstruction, stable isotope probing, and targeted chemical analysis under explicit quality assurance, quality control, and decision rules. Particular attention is given to distinguishing parent compound loss from mineralization and detoxification and to using isotopic attribution when functional redundancy or inoculant-native overlap obscures agency. Instead of being presented as conceptually new, these principles are organized into a practical workflow for soil systems. This structure clarifies what can be discerned from genes, transcripts, proteins, metabolites, and transformation products at each evidentiary tier and provides a conservative basis for integrating multi-omics with mechanistic and quantitative interpretation. Full article
(This article belongs to the Collection Latest Review Papers in Molecular Microbiology)

40 pages, 4626 KB  
Review
A Systematic Lifecycle-Referenced Capability Mapping of MLOps Platforms for Energy Forecasting
by Xun Zhao, Zheng Grace Ma and Bo Nørregaard Jørgensen
Information 2026, 17(4), 328; https://doi.org/10.3390/info17040328 - 28 Mar 2026
Viewed by 510
Abstract
Accurate energy forecasting is essential for maintaining power system reliability, integrating renewable generation, and ensuring market stability. Although machine learning has improved forecasting accuracy, its operational deployment depends on Machine Learning Operations (MLOps) platforms that automate and scale the entire lifecycle of energy data pipelines. However, the capabilities of existing MLOps platforms for energy forecasting have not been systematically compared. This study adopts a PRISMA-informed review process to identify relevant end-to-end MLOps platforms for energy forecasting and then maps their documented capabilities using an established energy forecasting pipeline lifecycle as the reference structure. A total of 256 records were screened across vendor documentation, open-source repositories, and academic literature, of which 13 MLOps platforms were selected for comparative capability analysis. Platform capabilities are organised and presented across an end-to-end lifecycle covering project setup and governance, data ingestion and management, model development and experimentation, deployment and serving, and monitoring and feedback. Commercial platforms such as Amazon SageMaker and Google Vertex AI generally provide stronger end-to-end integration and production readiness, while open-source platforms such as Kubeflow and ClearML offer modular flexibility that typically requires additional integration effort to achieve end-to-end operation. The mapping identifies four priority areas where platform support remains limited, namely (i) governance workflow automation, (ii) automated data quality validation, (iii) feature management, and (iv) deployment and monitoring support under nonstationary conditions. These findings indicate that platform selection for energy forecasting should be treated as a lifecycle capability decision, balancing end-to-end integration, operational assurance, and long-term flexibility. Full article

18 pages, 1781 KB  
Article
Design and Characterisation of a Polyvinyl Chloride (PVC) Tissue-Mimicking Polymer Phantom for Quantitative Shear Wave Elastography Validation
by Wadhhah Aldehani, Sarah Louise Savaridas, Cheng Wei and Luigi Manfredi
Polymers 2026, 18(7), 797; https://doi.org/10.3390/polym18070797 - 26 Mar 2026
Viewed by 504
Abstract
A polyvinyl chloride (PVC)-based tissue-mimicking polymer phantom was developed and mechanically characterised to replicate stiffness ranges relevant to breast elastography and to provide a controlled platform for evaluating shear wave elastography (SWE) measurements. SWE provides quantitative stiffness information that complements B-mode ultrasound in breast imaging. However, measurement variability related to operator technique and tissue continues to limit confidence in clinical interpretation. This study evaluates the reproducibility of SWE using custom-fabricated PVC-based breast phantoms with mechanically defined stiffness properties. Two PVC-based breast phantoms with identical geometry and different background stiffnesses were scanned using a single ultrasound system under a fixed SWE protocol. Each phantom contained four embedded inclusions representing clinically relevant stiffness categories. Six breast imagers independently acquired repeated SWE measurements in transverse and longitudinal planes, blinded to lesion identity and ground truth. Inter-operator reproducibility was assessed using intraclass correlation coefficients, and was high across both phantom backgrounds, with low intra-operator variability following quality assurance exclusion of one dataset due to sampling error. Measurement variability was lowest for solid inclusions and increased for the cyst-like inclusion in the stiffer background. SWE measurements consistently preserved the relative stiffness ordering of inclusions, although absolute values differed systematically from mechanically derived ground-truth stiffness. These findings demonstrate that PVC-based polymer phantoms provide a stable and reproducible platform for evaluating SWE measurement behaviour under controlled conditions. By isolating operator and acquisition effects from biological variability, this polymer-based framework supports methodological standardisation and structured operator training in breast elastography. Full article
(This article belongs to the Special Issue Polymers for Biomedical Engineering and Clinical Innovation)
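The inter-operator reproducibility analysis summarised above rests on intraclass correlation coefficients; a minimal sketch using pingouin is shown below, assuming a long-format table of repeated readings. The file and column names (swe_measurements.csv, inclusion, operator, stiffness_kpa) are hypothetical and do not come from the study.

```python
# Sketch of an intraclass correlation analysis on repeated SWE readings.
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per operator reading of an inclusion.
df = pd.read_csv("swe_measurements.csv")  # columns: inclusion, operator, stiffness_kpa

icc = pg.intraclass_corr(
    data=df,
    targets="inclusion",     # the embedded inclusions being measured
    raters="operator",       # the six breast imagers
    ratings="stiffness_kpa", # SWE stiffness estimate
)
print(icc[["Type", "ICC", "CI95%"]])
```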

31 pages, 1592 KB  
Article
FORESIGHT: Software Defects Prediction from Requirements Change Requests Using Machine Learning Methods
by Hanan Helwa and Adel Taweel
Systems 2026, 14(4), 342; https://doi.org/10.3390/systems14040342 - 24 Mar 2026
Viewed by 504
Abstract
Software defect prediction is becoming key for software quality assurance. Traditional software defect prediction approaches have predominantly focused on analyzing code-level metrics, often overlooking valuable information available during the requirements phase. However, when a requirement change request (RCR) is issued, usually during the maintenance and evolution phase, predicting software defects provides an important preventative measure. Work on requirement-based software defect prediction methods typically focuses on identifying requirement flaws, such as ambiguity or incompleteness, and fails to adequately predict defects that may manifest later in the operational software system. This paper proposes a context-driven representation model, named FORESIGHT, that predicts software defect types from requirements change requests using machine learning methods. The proposed model uses binary indicators to represent contextual metrics derived from change-request characteristics and supports multi-class prediction of both primary defect types and defect manifestation types. To build its representation model, three datasets were created from real-world industrial projects in different software domains (Web, Mobile, and ASRS). FORESIGHT was evaluated using Random Forest, XGBoost, and Gradient Boosting classifiers. Results show that certain software defect types can be reliably predicted, with Random Forest achieving the highest macro-F1 (0.815–0.873 for primary defect type prediction; 0.683–0.833 for defect manifestation prediction) across all three datasets, outperforming XGBoost and Gradient Boosting on every dataset–task combination. Findings show that contextual metrics from requirements change requests, structured within the FORESIGHT representation model, enable reliable pre-implementation prediction of specific defect types in deployed software systems. Full article
(This article belongs to the Section Artificial Intelligence and Digital Systems Engineering)
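The multi-class defect-type prediction and macro-F1 evaluation described above can be sketched with scikit-learn as follows, assuming change requests have already been encoded as binary contextual indicators. The file name, feature encoding, and hyperparameters are hypothetical and do not reproduce the FORESIGHT model itself.

```python
# Sketch of Random Forest defect-type prediction with macro-F1 evaluation.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, classification_report

# Hypothetical dataset: binary indicator columns plus a defect_type label.
df = pd.read_csv("rcr_features.csv")
X = df.drop(columns=["defect_type"])
y = df["defect_type"]  # primary defect type (multi-class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

clf = RandomForestClassifier(n_estimators=300, random_state=42).fit(X_train, y_train)
pred = clf.predict(X_test)

print("macro-F1:", f1_score(y_test, pred, average="macro"))
print(classification_report(y_test, pred))
```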

15 pages, 1495 KB  
Perspective
Artificial Intelligence in Higher Education: A Global Statistical Synthesis for Policy and Quality Assurance Reform
by Rima J. Isaifan
Educ. Sci. 2026, 16(3), 483; https://doi.org/10.3390/educsci16030483 - 20 Mar 2026
Viewed by 1184
Abstract
Artificial intelligence has transitioned from a peripheral innovation to a core infrastructure shaping higher education within a remarkably short period. While conceptual debates on AI ethics, pedagogy, and academic integrity are expanding, empirically grounded syntheses that consolidate global evidence remain limited. This study addresses this gap by providing an integrated cross-domain synthesis and statistically grounded overview of AI adoption, use, and governance across higher education systems. Using a secondary statistical synthesis methodology, the study aggregates large-scale quantitative data published between 2021 and 2025 from reputable international sources, including student and faculty surveys, institutional reports, research indices, and regulatory datasets. Results demonstrate near-universal student adoption of AI tools, rapid but uneven professional engagement among faculty and staff, a sharp rise in AI-related academic misconduct, accelerating impacts on research production and scientific workflows, and persistent gaps in institutional preparedness, policy development, and equity. The findings reveal a widening disconnect between bottom-up AI adoption and top-down governance mechanisms, particularly in assessment design, academic integrity frameworks, faculty capacity building, and quality assurance systems. Moreover, this paper argues that AI can no longer be treated as an optional educational technology and must instead be governed as a foundational component of higher education infrastructure. The study concludes by outlining evidence-based policy implications for institutions, regulators, and quality assurance agencies, emphasizing the need for coordinated, adaptive, and equity-oriented governance frameworks grounded in empirical realities rather than speculative narratives. Full article
(This article belongs to the Topic Explainable AI in Education)
