Search Results (419)

Search Parameters:
Keywords = learning assurance

38 pages, 1194 KiB  
Review
Transforming Data Annotation with AI Agents: A Review of Architectures, Reasoning, Applications, and Impact
by Md Monjurul Karim, Sangeen Khan, Dong Hoang Van, Xinyue Liu, Chunhui Wang and Qiang Qu
Future Internet 2025, 17(8), 353; https://doi.org/10.3390/fi17080353 - 2 Aug 2025
Abstract
Data annotation serves as a critical foundation for artificial intelligence (AI) and machine learning (ML). Recently, AI agents powered by large language models (LLMs) have emerged as effective solutions to longstanding challenges in data annotation, such as scalability, consistency, cost, and limitations in domain expertise. These agents facilitate intelligent automation and adaptive decision-making, thereby enhancing the efficiency and reliability of annotation workflows across various fields. Despite the growing interest in this area, a systematic understanding of the role and capabilities of AI agents in annotation is still underexplored. This paper seeks to fill that gap by providing a comprehensive review of how LLM-driven agents support advanced reasoning strategies, adaptive learning, and collaborative annotation efforts. We analyze agent architectures, integration patterns within workflows, and evaluation methods, along with real-world applications in sectors such as healthcare, finance, technology, and media. Furthermore, we evaluate current tools and platforms that support agent-based annotation, addressing key challenges such as quality assurance, bias mitigation, transparency, and scalability. Lastly, we outline future research directions, highlighting the importance of federated learning, cross-modal reasoning, and responsible system design to advance the development of next-generation annotation ecosystems.
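
To make the agent-based annotation workflow described above concrete, the following minimal sketch has several LLM "annotators" label an item, aggregates their labels by majority vote, and escalates low-agreement items to a human reviewer. The query_llm_annotator helper is a hypothetical placeholder for a real LLM client call, not an API from the paper.

```python
from collections import Counter
from typing import Callable, List, Optional

def query_llm_annotator(instruction: str, item: str) -> str:
    """Hypothetical stand-in for a call to one LLM annotation agent."""
    raise NotImplementedError("Replace with a real LLM client call.")

def annotate_with_agents(item: str,
                         agents: List[Callable[[str, str], str]],
                         min_agreement: float = 0.8) -> Optional[str]:
    """Ask every agent for a label; return the majority label when agreement
    is high enough, otherwise None so the item goes to a human reviewer."""
    labels = [agent("Assign one category label to this record.", item) for agent in agents]
    label, votes = Counter(labels).most_common(1)[0]
    return label if votes / len(labels) >= min_agreement else None
```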

24 pages, 1686 KiB  
Review
Data-Driven Predictive Modeling for Investigating the Impact of Gear Manufacturing Parameters on Noise Levels in Electric Vehicle Drivetrains
by Krisztián Horváth
World Electr. Veh. J. 2025, 16(8), 426; https://doi.org/10.3390/wevj16080426 - 30 Jul 2025
Abstract
Reducing gear noise in electric vehicle (EV) drivetrains is crucial due to the absence of internal combustion engine noise, making even minor acoustic disturbances noticeable. Manufacturing parameters significantly influence gear-generated noise, yet traditional analytical methods often fail to predict these complex relationships accurately. This research addresses this gap by introducing a data-driven approach using machine learning (ML) to predict gear noise levels from manufacturing and sensor-derived data. The presented methodology encompasses systematic data collection from various production stages—including soft and hard machining, heat treatment, honing, rolling tests, and end-of-line (EOL) acoustic measurements. Predictive models employing Random Forest, Gradient Boosting (XGBoost), and Neural Network algorithms were developed and compared to traditional statistical approaches. The analysis identified critical manufacturing parameters, such as surface waviness, profile errors, and tooth geometry deviations, significantly influencing noise generation. Advanced ML models, specifically Random Forest, XGBoost, and deep neural networks, demonstrated superior prediction accuracy, providing early-stage identification of gear units likely to exceed acceptable noise thresholds. Integrating these data-driven models into manufacturing processes enables early detection of potential noise issues, reduces quality assurance costs, and supports sustainable manufacturing by minimizing prototype production and resource consumption. This research enhances the understanding of gear noise formation and offers practical solutions for real-time quality assurance.
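
As an illustration of the modeling step described above, the sketch below fits a gradient-boosted regressor on tabular manufacturing features and flags gear units whose predicted end-of-line noise exceeds a limit. The column names, file name, and 80 dB(A) threshold are assumptions for illustration, not values from the paper.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical table: one row per gear unit, process/measurement features plus
# the end-of-line (EOL) noise level in dB(A).
df = pd.read_csv("gear_measurements.csv")                     # assumed file layout
features = ["surface_waviness", "profile_error", "tooth_geometry_dev"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["eol_noise_db"], test_size=0.2, random_state=42)

model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("MAE [dB(A)]:", mean_absolute_error(y_test, pred))

NOISE_LIMIT_DB = 80.0                                         # assumed acceptance limit
flagged = X_test[pred > NOISE_LIMIT_DB]                       # units likely to fail the EOL test
print(f"{len(flagged)} gear units flagged for early inspection")
```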

17 pages, 5455 KiB  
Article
A Hybrid Deep Learning Architecture for Enhanced Vertical Wind and FBAR Estimation in Airborne Radar Systems
by Fusheng Hou and Guanghui Sun
Aerospace 2025, 12(8), 679; https://doi.org/10.3390/aerospace12080679 - 30 Jul 2025
Abstract
Accurate prediction of the F-factor averaged over one kilometer (FBAR), a critical wind shear metric, is essential for aviation safety. FBAR is computed from a centered F-factor, i.e., the value of FBAR at a point is the average over a spatial interval beginning 500 m before the point and ending 500 m beyond it. Traditional FBAR estimation using the Vicroy method suffers from limited vertical wind speed (W_h) accuracy, particularly in complex, non-idealized atmospheric conditions. This foundational study proposes a hybrid CNN-BiLSTM-Attention deep learning architecture that integrates spatial feature extraction, sequential dependency modeling, and attention mechanisms to address this limitation. The model was trained and evaluated on data generated by the industry-standard Airborne Doppler Weather Radar Simulation (ADWRS) system, using the DFW microburst case (C1-11) as a benchmark hazardous scenario. Following safety assurance principles aligned with SAE AS6983, the proposed model achieved a W_h estimation RMSE (root-mean-square error) of 0.623 m s⁻¹ (vs. Vicroy's 14.312 m s⁻¹) and a correlation of 0.974 on 14,524 test points. This subsequently improved FBAR prediction RMSE by 98.5% (0.0591 vs. 4.0535) and MAE (Mean Absolute Error) by 96.1% (0.0434 vs. 1.1101) compared to Vicroy-derived values. The model demonstrated a 65.3% probability of detection for hazardous downdrafts with a low 1.7% false alarm rate. These results, obtained in a controlled and certifiable simulation environment, highlight deep learning's potential to enhance the reliability of airborne wind shear detection for civil aircraft, paving the way for next-generation intelligent weather avoidance systems.
(This article belongs to the Section Aeronautics)
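
The 1 km averaging that defines FBAR can be written down directly: the value at a point is the mean of the instantaneous F-factor over a window running from 500 m before to 500 m beyond that point. The sketch below assumes F-factor samples on a uniform range grid with 10 m spacing, which is an illustrative choice.

```python
import numpy as np

def fbar(f_factor: np.ndarray, grid_spacing_m: float = 10.0,
         half_window_m: float = 500.0) -> np.ndarray:
    """Centered 1 km average of the instantaneous F-factor.

    FBAR(i) is the mean of f_factor over [i - 500 m, i + 500 m]; points whose
    window falls off the ends of the profile are returned as NaN.
    """
    half = int(round(half_window_m / grid_spacing_m))
    out = np.full_like(f_factor, np.nan, dtype=float)
    for i in range(half, len(f_factor) - half):
        out[i] = f_factor[i - half:i + half + 1].mean()
    return out

# Example: synthetic F-factor profile sampled every 10 m
f = 0.02 * np.sin(np.linspace(0, 6 * np.pi, 600))
print(fbar(f)[300])   # FBAR at the profile midpoint
```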

28 pages, 2918 KiB  
Article
Machine Learning-Powered KPI Framework for Real-Time, Sustainable Ship Performance Management
by Christos Spandonidis, Vasileios Iliopoulos and Iason Athanasopoulos
J. Mar. Sci. Eng. 2025, 13(8), 1440; https://doi.org/10.3390/jmse13081440 - 28 Jul 2025
Abstract
The maritime sector faces escalating demands to minimize emissions and optimize operational efficiency under tightening environmental regulations. Although technologies such as the Internet of Things (IoT), Artificial Intelligence (AI), and Digital Twins (DT) offer substantial potential, their deployment in real-time ship performance analytics remains at an emerging stage. This paper proposes a machine learning-driven framework for real-time ship performance management. The framework starts with data collected from onboard sensors and culminates in a decision support system that is easily interpretable, even by non-experts. It also provides a method to forecast vessel performance by extrapolating Key Performance Indicator (KPI) values. Furthermore, it offers a flexible methodology for defining KPIs for every crucial component or aspect of vessel performance, illustrated through a use case focusing on fuel oil consumption. Leveraging Artificial Neural Networks (ANNs), hybrid multivariate data fusion, and high-frequency sensor streams, the system facilitates continuous diagnostics, early fault detection, and data-driven decision-making. Unlike conventional static performance models, the framework employs dynamic KPIs that evolve with the vessel's operational state, enabling advanced trend analysis, predictive maintenance scheduling, and compliance assurance. Experimental comparison against classical KPI models highlights superior predictive fidelity, robustness, and temporal consistency. Furthermore, the paper delineates AI and ML applications across core maritime operations and introduces a scalable, modular system architecture applicable to both commercial and naval platforms. This approach bridges advanced simulation ecosystems with in situ operational data, laying a robust foundation for digital transformation and sustainability in maritime domains.
(This article belongs to the Section Ocean Engineering)
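
A minimal sketch of the dynamic-KPI idea for the fuel-oil use case: an ANN learns expected consumption for the current operating state, the KPI is the ratio of measured to expected consumption, and a simple linear fit extrapolates the KPI trend. The feature set, the synthetic data, and the KPI definition are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical hourly records: speed [kn], mean draft [m], wind speed [m/s]
X = rng.uniform([10, 8, 0], [20, 12, 15], size=(2000, 3))
fuel = 0.9 * X[:, 0] ** 2 + 5 * X[:, 1] + 2 * X[:, 2] + rng.normal(0, 5, 2000)

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
ann.fit(X, fuel)

def fuel_kpi(measured: float, state: np.ndarray) -> float:
    """KPI > 1 means the vessel burns more fuel than expected for its state."""
    expected = ann.predict(state.reshape(1, -1))[0]
    return measured / expected

history = np.array([1.00, 1.01, 1.03, 1.04, 1.06])      # recent daily KPI values
slope = np.polyfit(np.arange(len(history)), history, 1)[0]
print("KPI now:", fuel_kpi(260.0, np.array([15.0, 10.0, 5.0])))
print("KPI forecast in 7 days:", history[-1] + 7 * slope)
```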

15 pages, 1006 KiB  
Article
Framework for a Modular Emergency Departments Registry: A Case Study of the Tasmanian Emergency Care Outcomes Registry (TECOR)
by Viet Tran, Lauren Thurlow, Simone Page and Giles Barrington
Hospitals 2025, 2(3), 18; https://doi.org/10.3390/hospitals2030018 - 23 Jul 2025
Abstract
Background: The emergency department (ED) often represents the entry point to care for patients who require urgent medical attention or have no alternative for medical treatment. This has implications for the scope of practice and how quality of care is measured. A diverse array of methodologies has been developed to evaluate the quality of clinical care and broadly includes quality improvement (QI), quality assurance (QA), observational research (OR) and clinical quality registries (CQRs). Considering the overlap between QI, QA, OR and CQRs, we conceptualized a modular framework for TECOR to effectively and efficiently streamline clinical quality evaluations. Streamlining is both appropriate and justified as it reduces redundancy, enhances clarity and optimizes resource utilization, thereby allowing clinicians to focus on delivering high-quality patient care without being overwhelmed by excessive data and procedural complexities. The objective of this study is to describe the process for designing a modular framework for ED CQRs using TECOR as a case study. Methods: We performed a scoping audit of all quality projects performed in our ED over a 1-year period (1 January 2021 to 31 December 2021) as well as data mapping and categorical formulation of key themes from the TECOR dataset with clinical data sources. Both these processes then informed the design of TECOR. Results: For the audit of quality projects, we identified 29 projects. The quality evaluation methodologies for these projects included 12 QI projects, 5 CQRs and 12 OR projects. Data mapping identified that clinical information was fragmented across 11 distinct data sources. Through thematic analysis during data mapping, we identified three extraction techniques: self-extractable, manual entry and on request. Conclusions: The modular framework for TECOR aims to provide an efficient, streamlined approach that caters to all aspects of clinical quality evaluation and enables higher throughput of clinician-led quality evaluations and improvements. TECOR is also an essential component in the development of a learning health system to drive evidence-based practice and will be the subject of future research.

18 pages, 4165 KiB  
Article
Localization and Pixel-Confidence Network for Surface Defect Segmentation
by Yueyou Wang, Zixuan Xu, Li Mei, Ruiqing Guo, Jing Zhang, Tingbo Zhang and Hongqi Liu
Sensors 2025, 25(15), 4548; https://doi.org/10.3390/s25154548 - 23 Jul 2025
Abstract
Surface defect segmentation based on deep learning has been widely applied in industrial inspection. However, two major challenges persist in specific application scenarios: first, the imbalanced area distribution between defects and the background leads to degraded segmentation performance; second, fine gaps within defects are prone to over-segmentation. To address these issues, this study proposes a two-stage image segmentation network that integrates a Defect Localization Module and a Pixel Confidence Module. In the first stage, the Defect Localization Module performs a coarse localization of defect regions and embeds the resulting feature vectors into the backbone of the second stage. In the second stage, the Pixel Confidence Module captures the probabilistic distribution of neighboring pixels, thereby refining the initial predictions. Experimental results demonstrate that the improved network achieves gains of 1.58% ± 0.80% in mPA and 1.35% ± 0.77% in mIoU on the self-built Carbon Fabric Defect Dataset, and 2.66% ± 1.12% in mPA and 1.44% ± 0.79% in mIoU on the public Magnetic Tile Defect Dataset, compared to the baseline network. These enhancements translate to more reliable automated quality assurance in industrial production environments.
(This article belongs to the Section Fault Diagnosis & Sensors)

21 pages, 2742 KiB  
Article
Origin Traceability of Chinese Mitten Crab (Eriocheir sinensis) Using Multi-Stable Isotopes and Explainable Machine Learning
by Danhe Wang, Chunxia Yao, Yangyang Lu, Di Huang, Yameng Li, Xugan Wu, Weiguo Song and Qinxiong Rao
Foods 2025, 14(14), 2458; https://doi.org/10.3390/foods14142458 - 13 Jul 2025
Abstract
The Chinese mitten crab (Eriocheir sinensis) industry is currently facing the challenges of origin fraud, as well as a lack of precision and interpretability of existing traceability methods. Here, we propose a high-precision origin traceability method based on a combination of stable isotope analysis and interpretable machine learning. We sampled Chinese mitten crabs from six origins representing diverse aquatic environments and farming practices, and analyzed their δ13C, δ15N, δ2H, and δ18O stable isotope compositions in different sexes and tissues (hepatopancreas, muscle, and gonad). By comparing the classification performance of Random Forest, XGBoost, and Logistic Regression models, we found that the Random Forest model outperformed the others, achieving high accuracy (91.3%) in distinguishing samples from different origins. Interpretation of the optimal Random Forest model, using SHAP (SHapley Additive exPlanations) analysis, identified δ2H in male muscle, δ15N in female hepatopancreas, and δ13C in female hepatopancreas as the most influential features for discriminating geographic origin. This analysis highlighted the crucial role of environmental factors, such as water source, diet, and trophic level, in origin discrimination and demonstrated that isotopic characteristics of different tissues provide unique discriminatory information. This study offers a novel paradigm for stable isotope traceability based on explainable machine learning, significantly enhancing the identification capability and reliability of Chinese mitten crab origin traceability, and holds significant implications for food safety assurance.
(This article belongs to the Section Food Analytical Methods)
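
The modeling step can be sketched briefly: a Random Forest classifies origin from the isotope features, and a feature-importance analysis ranks which isotope/tissue combinations drive the prediction. The paper uses SHAP for this; the sketch below substitutes scikit-learn's permutation importance as a simpler stand-in, and the CSV layout and column names are assumptions.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical table: one row per crab, isotope ratios per tissue and sex, plus origin.
df = pd.read_csv("crab_isotopes.csv")     # assumed columns such as d2H_muscle_male, d15N_hepatopancreas_female
features = [c for c in df.columns if c != "origin"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["origin"], stratify=df["origin"], random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", rf.score(X_test, y_test))

# Rank isotope/tissue features by how much shuffling them hurts accuracy.
imp = permutation_importance(rf, X_test, y_test, n_repeats=20, random_state=0)
for name, score in sorted(zip(features, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:35s} {score:.3f}")
```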

18 pages, 4066 KiB  
Article
Video Segmentation of Wire + Arc Additive Manufacturing (WAAM) Using Visual Large Model
by Shuo Feng, James Wainwright, Chong Wang, Jun Wang, Goncalo Rodrigues Pardal, Jian Qin, Yi Yin, Shakirudeen Lasisi, Jialuo Ding and Stewart Williams
Sensors 2025, 25(14), 4346; https://doi.org/10.3390/s25144346 - 11 Jul 2025
Abstract
Process control and quality assurance of wire + arc additive manufacturing (WAAM) and automated welding rely heavily on in-process monitoring videos to quantify variables such as melt pool geometry, location and size of droplet transfer, arc characteristics, etc. To enable feedback control based upon this information, an automatic and robust segmentation method for monitoring videos and images is required. However, video segmentation in WAAM and welding is challenging due to constantly fluctuating arc brightness, which varies with deposition and welding configurations. Additionally, conventional computer vision algorithms based on greyscale value and gradient lack flexibility and robustness in this scenario. Deep learning offers a promising approach to WAAM video segmentation; however, the prohibitive time and cost associated with creating a well-labelled, suitably sized dataset have hindered its widespread adoption. The emergence of large computer vision models, however, has provided new solutions. In this study, a semi-automatic annotation tool for WAAM videos was developed based upon the computer vision foundation model SAM and the video object tracking model XMem. The tool can enable annotation of the video frames hundreds of times faster than traditional manual annotation methods, thus making it possible to achieve rapid quantitative analysis of WAAM and welding videos with minimal user intervention. To demonstrate the effectiveness of the tool, three cases are presented: online wire position closed-loop control, droplet transfer behaviour analysis, and assembling a dataset for dedicated deep learning segmentation models. This work provides a broader perspective on how to exploit large models in WAAM and weld deposits.
(This article belongs to the Special Issue Sensing and Imaging in Computer Vision)
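
The semi-automatic annotation loop can be outlined without committing to the real SAM or XMem APIs: a promptable segmenter produces a mask on the first frame, a tracker propagates it through the video, and low-confidence frames are queued for manual correction. Both helper functions below are hypothetical placeholders, not calls from the paper's code.

```python
from typing import List, Tuple
import numpy as np

def segment_first_frame(frame: np.ndarray, click_xy: Tuple[int, int]) -> np.ndarray:
    """Hypothetical stand-in for a promptable foundation-model segmenter (e.g., SAM)."""
    raise NotImplementedError

def propagate_mask(frames: List[np.ndarray], mask0: np.ndarray) -> List[Tuple[np.ndarray, float]]:
    """Hypothetical stand-in for a video object tracker (e.g., XMem);
    returns a (mask, confidence) pair per frame."""
    raise NotImplementedError

def annotate_video(frames: List[np.ndarray], click_xy: Tuple[int, int],
                   min_conf: float = 0.9):
    """Return masks for every frame plus the indices that still need a human check."""
    mask0 = segment_first_frame(frames[0], click_xy)
    tracked = propagate_mask(frames, mask0)
    masks = [m for m, _ in tracked]
    review = [i for i, (_, conf) in enumerate(tracked) if conf < min_conf]
    return masks, review
```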

45 pages, 2126 KiB  
Review
An Overview of Autonomous Parking Systems: Strategies, Challenges, and Future Directions
by Javier Santiago Olmos Medina, Jessica Gissella Maradey Lázaro, Anton Rassõlkin and Hernán González Acuña
Sensors 2025, 25(14), 4328; https://doi.org/10.3390/s25144328 - 10 Jul 2025
Abstract
Autonomous Parking Systems (APSs) are rapidly evolving, promising enhanced convenience, safety, and efficiency. This review critically examines the current strategies in perception, path planning, and vehicle control, alongside system-level aspects like integration, validation, and security. While significant progress has been made, particularly with the advent of deep learning and sophisticated sensor fusion, formidable challenges persist. This paper delves into the inherent trade-offs, such as balancing computational cost with real-time performance demands; unresolved foundational issues, including the verification of non-deterministic AI components; and the profound difficulty of ensuring robust real-world deployment across diverse and unpredictable conditions, ranging from cluttered urban canyons to poorly lit, ambiguously marked parking structures. We also explore the limitations of current technologies, the complexities of safety assurance in dynamic environments, the pervasive impact of cost considerations on system capabilities, and the critical, often underestimated, need for genuine user trust. Future research must address not only these technological gaps with innovative solutions but also the intricate socio-technical dimensions to realize the full potential of APS.
(This article belongs to the Special Issue Intelligent Sensors for Smart and Autonomous Vehicles)

18 pages, 4458 KiB  
Article
Intelligent Hybrid SHM-NDT Approach for Structural Assessment of Metal Components
by Romaine Byfield, Ahmed Shabaka, Milton Molina Vargas and Ibrahim Tansel
Infrastructures 2025, 10(7), 174; https://doi.org/10.3390/infrastructures10070174 - 6 Jul 2025
Abstract
Structural health monitoring (SHM) plays a pivotal role in ensuring the integrity and safety of critical infrastructure and mechanical components. While traditional non-destructive testing (NDT) methods offer high-resolution data, they typically require periodic access and disassembly of equipment to conduct inspections. In contrast, SHM employs permanently installed, cost-effective sensors to enable continuous monitoring, though often with reduced detail. This study presents an integrated hybrid SHM-NDT methodology enhanced by deep learning to enable the real-time monitoring and classification of mechanical stresses in structural components. As a case study, a 6-foot-long parallel flange I-beam, representing bridge truss elements, was subjected to variable bending loads to simulate operational conditions. The hybrid system utilized an ultrasonic transducer (NDT) for excitation and piezoelectric sensors (SHM) for signal acquisition. Signal data were analyzed using 1D and 2D convolutional neural networks (CNNs), long short-term memory (LSTM) models, and random forest classifiers to detect and classify load magnitudes. The AI-enhanced approach achieved 100% accuracy in 47 out of 48 tests and 94% in the remaining test. These results demonstrate that the hybrid SHM-NDT framework, combined with machine learning, offers a powerful and adaptable solution for continuous monitoring and precise damage assessment of structural systems, significantly advancing maintenance practices and safety assurance.
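
One of the classifiers mentioned, a 1D CNN over raw sensor windows, is small enough to sketch; the window length, number of load classes, and layer sizes below are assumptions for illustration.

```python
import torch
import torch.nn as nn

class Signal1DCNN(nn.Module):
    """Small 1D CNN mapping a raw piezoelectric signal window to load-class logits."""
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, window_length) raw sensor samples
        return self.classifier(self.features(x).flatten(1))

model = Signal1DCNN()
windows = torch.randn(8, 1, 2048)     # a dummy batch of eight signal windows
print(model(windows).shape)           # -> torch.Size([8, 4])
```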

22 pages, 2705 KiB  
Article
Applying Reinforcement Learning to Protect Deep Neural Networks from Soft Errors
by Peng Su, Yuhang Li, Zhonghai Lu and Dejiu Chen
Sensors 2025, 25(13), 4196; https://doi.org/10.3390/s25134196 - 5 Jul 2025
Abstract
With the advance of Artificial Intelligence, Deep Neural Networks are widely employed in various sensor-based systems to analyze operational conditions. However, due to the inherently nondeterministic and probabilistic nature of neural networks, the assurance of overall system performance could become a challenging task. In particular, soft errors could weaken the robustness of such networks and thereby threaten the system's safety. Conventional fault-tolerant techniques by means of hardware redundancy and software correction mechanisms often involve a tricky trade-off between effectiveness and scalability in addressing the extensive design space of Deep Neural Networks. In this work, we propose a Reinforcement-Learning-based approach to protect neural networks from soft errors by identifying and protecting the vulnerable bits. The approach consists of three key steps: (1) analyzing the layer-wise resiliency of Deep Neural Networks by a fault injection simulation; (2) generating layer-wise bit masks by a Reinforcement-Learning-based agent to reveal the vulnerable bits and to protect against them; and (3) synthesizing and deploying bit masks across the network with guaranteed operation efficiency by adopting transfer learning. As a case study, we select several existing neural networks to test and validate the design. The performance of the proposed approach is compared with the performance of other baseline methods, including Hamming code and the Most Significant Bits protection schemes. The results indicate that the proposed method exhibits a significant improvement, achieving a performance gain of at least 10% to 15% over the baseline methods on the test networks, and that it dynamically and efficiently protects the vulnerable bits.
(This article belongs to the Section Fault Diagnosis & Sensors)
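
Step (1), the layer-wise resiliency analysis by fault injection, can be illustrated with a simple single-event-upset routine that flips one bit in IEEE-754 float32 weights; the layer shape, number of faults, and choice of bit are assumptions, and the RL-based mask generation itself is not shown.

```python
import numpy as np

def flip_bit(value: float, bit: int) -> np.float32:
    """Flip one bit (0 = LSB ... 31 = sign bit) of an IEEE-754 float32 value."""
    as_int = np.array(value, dtype=np.float32).view(np.uint32)
    flipped = as_int ^ np.uint32(1 << bit)
    return np.float32(flipped.view(np.float32))

def inject_faults(weights: np.ndarray, n_faults: int, bit: int,
                  rng: np.random.Generator) -> np.ndarray:
    """Return a copy of a layer's weights with n random single-bit upsets."""
    corrupted = weights.astype(np.float32)
    flat = corrupted.ravel()
    for idx in rng.choice(flat.size, size=n_faults, replace=False):
        flat[idx] = flip_bit(flat[idx], bit)
    return corrupted

rng = np.random.default_rng(0)
layer = rng.normal(0, 0.05, size=(256, 128)).astype(np.float32)   # a hypothetical weight matrix
faulty = inject_faults(layer, n_faults=10, bit=30)                # bit 30 is a high exponent bit
print("max |w| before/after:", np.abs(layer).max(), np.abs(faulty).max())
```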

19 pages, 2465 KiB  
Article
WDNET-YOLO: Enhanced Deep Learning for Structural Timber Defect Detection to Improve Building Safety and Reliability
by Xiaoxia Lin, Weihao Gong, Lin Sun, Xiaodong Yang, Chunwei Leng, Yan Li, Zhenyu Niu, Yingzhou Meng, Xinyue Xiao and Junyan Zhang
Buildings 2025, 15(13), 2281; https://doi.org/10.3390/buildings15132281 - 28 Jun 2025
Abstract
Structural timber is an important building material, but surface defects such as cracks and knots seriously affect its load-bearing capacity, dimensional stability, and long-term durability, posing a significant risk to structural safety. Conventional inspection methods are unable to address the issues of multi-scale defect characterization, inter-class confusion, and morphological diversity, thus limiting reliable construction quality assurance. To overcome these challenges, this study proposes WDNET-YOLO: an enhanced deep learning model based on YOLOv8n for high-precision defect detection in structural wood. First, the RepVGG reparameterized backbone utilizes multi-branch training to capture critical defect features (e.g., distributed cracks and dense clusters of knots) across scales. Second, the ECA attention mechanism dynamically suppresses complex wood grain interference and enhances the discriminative feature representation between high-risk defect classes (e.g., cracks vs. knots). Finally, CARAFE up-sampling with adaptive contextual reorganization improves the sensitivity to morphologically variable defects (e.g., fine cracks and resin irregularities). The analysis results show that the mAP50 and mAP50-95 of WDNET-YOLO are improved by 3.7% and 3.5%, respectively, compared to YOLOv8n, while the parameters are increased by only 4.4%. The model provides a powerful solution for automated structural timber inspection, which directly improves building safety and reliability by preventing failures caused by defects, optimizing material utilization, and supporting compliance with building quality standards.
(This article belongs to the Section Building Structures)
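
Of the three components, the ECA attention block is compact enough to show in full: a global average pool produces channel descriptors, a 1D convolution and a sigmoid turn them into per-channel weights, and the feature map is rescaled. This is the standard ECA formulation rather than the paper's exact code; the kernel size of 3 is an assumption.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: re-weights feature-map channels so that
    discriminative defect cues stand out from background wood grain."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        y = self.avg_pool(x).view(b, 1, c)          # (B, 1, C) channel descriptor
        y = self.sigmoid(self.conv(y)).view(b, c, 1, 1)
        return x * y                                # channel-wise re-weighting

feat = torch.randn(2, 64, 80, 80)                   # a dummy backbone feature map
print(ECA()(feat).shape)                            # -> torch.Size([2, 64, 80, 80])
```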

26 pages, 8949 KiB  
Article
Real-Time Detection of Hole-Type Defects on Industrial Components Using Raspberry Pi 5
by Mehmet Deniz, Ismail Bogrekci and Pinar Demircioglu
Appl. Syst. Innov. 2025, 8(4), 89; https://doi.org/10.3390/asi8040089 - 27 Jun 2025
Abstract
In modern manufacturing, ensuring quality control for geometric features is critical, yet detecting anomalies in circular components remains underexplored. This study proposes a real-time defect detection framework for metal parts with holes, optimized for deployment on a Raspberry Pi 5 edge device. We fine-tuned and evaluated three deep learning models (ResNet50, EfficientNet-B3, and MobileNetV3-Large) on a grayscale image dataset (43,482 samples) containing various hole defects and class imbalance. Through extensive data augmentation and class-weighting, the models achieved near-perfect binary classification of defective vs. non-defective parts. Notably, ResNet50 attained 99.98% accuracy (precision 0.9994, recall 1.0000), correctly identifying all defects with only one false alarm. MobileNetV3-Large and EfficientNet-B3 likewise exceeded 99.9% accuracy, with slightly more false positives, but offered advantages in model size or interpretability. Gradient-weighted Class Activation Mapping (Grad-CAM) visualizations confirmed that each network focuses on meaningful geometric features (misaligned or irregular holes) when predicting defects, enhancing explainability. These results demonstrate that lightweight CNNs can reliably detect geometric deviations (e.g., mispositioned or missing holes) in real time. The proposed system significantly improves inline quality assurance by enabling timely, accurate, and interpretable defect detection on low-cost hardware, paving the way for smarter manufacturing inspection.
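
One concrete ingredient of the training recipe, class weighting for the imbalanced defective/non-defective split, is sketched below with a ResNet50 head swapped for binary output; the class counts and inverse-frequency weighting rule are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical class counts from an imbalanced inspection dataset.
n_ok, n_defect = 40000, 3482
counts = torch.tensor([n_ok, n_defect], dtype=torch.float32)
weights = counts.sum() / (2 * counts)            # inverse-frequency class weights
criterion = nn.CrossEntropyLoss(weight=weights)  # penalizes misses on the rare defect class more

model = models.resnet50(weights=None)            # torchvision >= 0.13 API; grayscale images replicated to 3 channels
model.fc = nn.Linear(model.fc.in_features, 2)    # binary head: non-defective vs. defective

logits = model(torch.randn(4, 3, 224, 224))      # a dummy batch
loss = criterion(logits, torch.tensor([0, 0, 1, 0]))
print(float(loss))
```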

20 pages, 7094 KiB  
Article
Adaptive Warning Thresholds for Dam Safety: A KDE-Based Approach
by Nathalia Silva-Cancino, Fernando Salazar, Joaquín Irazábal and Juan Mata
Infrastructures 2025, 10(7), 158; https://doi.org/10.3390/infrastructures10070158 - 26 Jun 2025
Abstract
Dams are critical infrastructures that provide essential services such as water supply, hydroelectric power generation, and flood control. As many dams age, the risk of structural failure increases, making safety assurance more urgent than ever. Traditional monitoring systems typically employ predictive models—based on techniques such as the finite element method (FEM) or machine learning (ML)—to compare real-time data against expected performance. However, these models often rely on static warning thresholds, which fail to reflect the dynamic conditions affecting dam behavior, including fluctuating water levels, temperature variations, and extreme weather events. This study introduces an adaptive warning threshold methodology for dam safety based on kernel density estimation (KDE). The approach incorporates a boosted regression tree (BRT) model for predictive analysis, identifying influential variables such as reservoir levels and ambient temperatures. KDE is then used to estimate the density of historical data, allowing for dynamic calibration of warning thresholds. In regions of low data density—where prediction uncertainty is higher—the thresholds are widened to reduce false alarms, while in high-density regions, stricter thresholds are maintained to preserve sensitivity. The methodology was validated using data from an arch dam, demonstrating improved anomaly detection capabilities. It successfully reduced false positives in data-sparse conditions while maintaining high sensitivity to true anomalies in denser data regions. These results confirm that the proposed methodology successfully meets the goals of enhancing reliability and adaptability in dam safety monitoring. This adaptive framework offers a robust enhancement to dam safety monitoring systems, enabling more reliable detection of structural issues under variable operating conditions.
(This article belongs to the Special Issue Preserving Life Through Dams)
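
A minimal sketch of the adaptive-threshold idea: a kernel density estimate over historical operating conditions gives a local data density, and the residual warning band is widened where that density (and hence model confidence) is low. The two input variables, the synthetic history, and the scaling rule from 2 to 4 standard deviations are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# Hypothetical operating history: reservoir level [m] and air temperature [degC].
history = np.column_stack([rng.normal(300, 5, 3000), rng.normal(15, 8, 3000)])
kde = gaussian_kde(history.T)                    # density over past operating conditions
max_density = kde(history.T).max()

def warning_band(condition: np.ndarray, base_sigma: float) -> float:
    """Residual threshold = k * sigma, with k widening in low-density regions."""
    rel = kde(condition.reshape(2, 1))[0] / max_density   # ~1 near well-covered states
    k = 2.0 + 2.0 * (1.0 - rel)                           # between 2 and 4 sigma
    return k * base_sigma

print(warning_band(np.array([300.0, 15.0]), base_sigma=1.2))   # dense region -> tight band
print(warning_band(np.array([320.0, 40.0]), base_sigma=1.2))   # sparse region -> wider band
```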

24 pages, 2258 KiB  
Article
Machine Learning for Anomaly Detection in Blockchain: A Critical Analysis, Empirical Validation, and Future Outlook
by Fouzia Jumani and Muhammad Raza
Computers 2025, 14(7), 247; https://doi.org/10.3390/computers14070247 - 25 Jun 2025
Abstract
Blockchain technology has transformed how data are stored and transactions are processed in a distributed environment. Blockchain assures data integrity by validating transactions through the consensus of a distributed ledger involving several miners as validators. Although blockchain provides multiple advantages, it has also been subject to some malicious attacks, such as a 51% attack, which is considered a potential risk to data integrity. These attacks can be detected by analyzing the anomalous behavior of miner nodes in the network, and data analysis plays a vital role in detecting and overcoming these attacks to make a secure blockchain. Integrating machine learning algorithms with blockchain has become a significant approach to detecting anomalies such as a 51% attack and double spending. This study comprehensively analyzes various machine learning (ML) methods to detect anomalies in blockchain networks. It presents a Systematic Literature Review (SLR) and a classification to explore the integration of blockchain and ML for anomaly detection in blockchain networks. We implemented Random Forest, AdaBoost, XGBoost, K-means, and Isolation Forest ML models to evaluate their performance in detecting blockchain anomalies, such as a 51% attack. Additionally, we identified future research directions, including challenges related to scalability, network latency, imbalanced datasets, the dynamic nature of anomalies, and the lack of standardization in blockchain protocols. This study acts as a benchmark for additional research on how ML algorithms identify anomalies in blockchain technology and aids ongoing studies in this rapidly evolving field.
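
Of the models compared, Isolation Forest is the unsupervised one and is easy to sketch: it scores records by how quickly random splits isolate them, flagging outliers such as a sudden concentration of blocks under one miner. The per-block features, synthetic data, and contamination rate below are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
# Hypothetical per-block features: miner share of recent blocks, mean tx value, mean fee.
normal = rng.normal([0.05, 1.0, 0.001], [0.02, 0.5, 0.0005], size=(5000, 3))
attack = rng.normal([0.55, 8.0, 0.010], [0.05, 2.0, 0.0020], size=(25, 3))  # 51%-style pattern
X = np.vstack([normal, attack])

iso = IsolationForest(n_estimators=200, contamination=0.01, random_state=7)
labels = iso.fit_predict(X)                 # -1 = anomaly, 1 = normal
print("flagged blocks:", int((labels == -1).sum()))
print("attack rows flagged:", int((labels[-25:] == -1).sum()), "of 25")
```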
