
Search Results (535)

Search Parameters:
Keywords = industrial quality inspection

32 pages, 1435 KiB  
Review
Smart Safety Helmets with Integrated Vision Systems for Industrial Infrastructure Inspection: A Comprehensive Review of VSLAM-Enabled Technologies
by Emmanuel A. Merchán-Cruz, Samuel Moveh, Oleksandr Pasha, Reinis Tocelovskis, Alexander Grakovski, Alexander Krainyukov, Nikita Ostrovenecs, Ivans Gercevs and Vladimirs Petrovs
Sensors 2025, 25(15), 4834; https://doi.org/10.3390/s25154834 - 6 Aug 2025
Abstract
Smart safety helmets equipped with vision systems are emerging as powerful tools for industrial infrastructure inspection. This paper presents a comprehensive state-of-the-art review of such helmets enabled by visual simultaneous localization and mapping (VSLAM). We survey the evolution from basic helmet cameras to intelligent, sensor-fused inspection platforms, highlighting how modern helmets leverage real-time visual SLAM algorithms to map environments and assist inspectors. A systematic literature search was conducted targeting high-impact journals, patents, and industry reports. We classify helmet-integrated camera systems into monocular, stereo, and omnidirectional types and compare their capabilities for infrastructure inspection. We examine core VSLAM algorithms (feature-based, direct, hybrid, and deep-learning-enhanced) and discuss their adaptation to wearable platforms. Multi-sensor fusion approaches integrating inertial, LiDAR, and GNSS data are reviewed, along with edge/cloud processing architectures enabling real-time performance. This paper compiles numerous industrial use cases, from bridges and tunnels to plants and power facilities, demonstrating significant improvements in inspection efficiency, data quality, and worker safety. Key challenges are analyzed, including technical hurdles (battery life, processing limits, and harsh environments), human factors (ergonomics, training, and cognitive load), and regulatory issues (safety certification and data privacy). We also identify emerging trends, such as semantic SLAM, AI-driven defect recognition, hardware miniaturization, and collaborative multi-helmet systems. This review finds that VSLAM-equipped smart helmets offer a transformative approach to infrastructure inspection, enabling real-time mapping, augmented awareness, and safer workflows. We conclude by highlighting current research gaps, notably in system standardization and integration with asset management, and provide recommendations for industry adoption and future research directions. Full article
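As a concrete illustration of the feature-based VSLAM front-ends surveyed here, the sketch below shows the descriptor-matching step in pure Python: brute-force Hamming matching of ORB-style binary descriptors with a Lowe-style ratio test. The descriptors are toy 8-bit integers invented for the example, not real ORB output.

```python
# Minimal sketch of the matching step in a feature-based VSLAM front-end:
# brute-force Hamming matching of binary (ORB-style) descriptors with a
# Lowe-style ratio test. Real ORB descriptors are 256-bit; these are toys.

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two binary descriptors."""
    return bin(a ^ b).count("1")

def match_descriptors(query, train, ratio=0.75):
    """Return (query_idx, train_idx) pairs that pass the ratio test."""
    matches = []
    for qi, q in enumerate(query):
        # distance to every candidate descriptor in the other frame
        dists = sorted((hamming(q, t), ti) for ti, t in enumerate(train))
        best, second = dists[0], dists[1]
        # keep the match only if the best is clearly better than the runner-up
        if best[0] < ratio * second[0]:
            matches.append((qi, best[1]))
    return matches

frame_a = [0b10110010, 0b01011100, 0b11110000]
frame_b = [0b10110011, 0b00001111, 0b01011000]
print(match_descriptors(frame_a, frame_b))   # [(0, 0), (1, 2)]
```

The ambiguous third descriptor is rejected by the ratio test, which is exactly why this test is used to keep only reliable correspondences before pose estimation.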

36 pages, 1832 KiB  
Review
Enabling Intelligent Industrial Automation: A Review of Machine Learning Applications with Digital Twin and Edge AI Integration
by Mohammad Abidur Rahman, Md Farhan Shahrior, Kamran Iqbal and Ali A. Abushaiba
Automation 2025, 6(3), 37; https://doi.org/10.3390/automation6030037 - 5 Aug 2025
Abstract
The integration of machine learning (ML) into industrial automation is fundamentally reshaping how manufacturing systems are monitored, inspected, and optimized. By applying machine learning to real-time sensor data and operational histories, advanced models enable proactive fault prediction, intelligent inspection, and dynamic process control—directly enhancing system reliability, product quality, and efficiency. This review explores the transformative role of ML across three key domains: Predictive Maintenance (PdM), Quality Control (QC), and Process Optimization (PO). It also analyzes how Digital Twin (DT) and Edge AI technologies are expanding the practical impact of ML in these areas. Our analysis reveals a marked rise in deep learning, especially convolutional and recurrent architectures, with a growing shift toward real-time, edge-based deployment. The paper also catalogs the datasets used, the tools and sensors employed for data collection, and the industrial software platforms supporting ML deployment in practice. This review not only maps the current research terrain but also highlights emerging opportunities in self-learning systems, federated architectures, explainable AI, and themes such as self-adaptive control, collaborative intelligence, and autonomous defect diagnosis—indicating that ML is poised to become deeply embedded across the full spectrum of industrial operations in the coming years. Full article
(This article belongs to the Section Industrial Automation and Process Control)
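A minimal sketch of the predictive-maintenance idea the review covers, under the simplifying assumption that a healthy-machine baseline is just a mean and standard deviation: readings more than k sigma from the healthy mean are flagged. The sensor values are invented; real PdM models use the CNN/RNN architectures the review describes.

```python
# Toy condition-monitoring sketch for predictive maintenance (PdM):
# fit a statistical baseline on healthy-machine readings, then flag
# later readings that deviate by more than k standard deviations.

def fit_baseline(train):
    """Mean and (population) standard deviation of healthy readings."""
    mu = sum(train) / len(train)
    sigma = (sum((x - mu) ** 2 for x in train) / len(train)) ** 0.5
    return mu, sigma

def flag_faults(stream, mu, sigma, k=4.0):
    """Indices of readings more than k*sigma from the healthy mean."""
    return [i for i, x in enumerate(stream) if abs(x - mu) > k * sigma]

healthy = [1.0, 1.02, 0.98, 1.01, 0.99, 1.03, 0.97]   # e.g. vibration RMS
mu, sigma = fit_baseline(healthy)
print(flag_faults([1.02, 0.95, 1.80, 1.01], mu, sigma))   # [2]
```

The spike at index 2 is the only reading outside the 4-sigma band; deep models replace the hand-made baseline with learned representations but keep this detect-by-deviation logic.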

29 pages, 1407 KiB  
Article
Symmetry-Driven Two-Population Collaborative Differential Evolution for Parallel Machine Scheduling in Lace Dyeing with Probabilistic Re-Dyeing Operations
by Jing Wang, Jingsheng Lian, Youpeng Deng, Lang Pan, Huan Xue, Yanming Chen, Debiao Li, Xixing Li and Deming Lei
Symmetry 2025, 17(8), 1243; https://doi.org/10.3390/sym17081243 - 5 Aug 2025
Abstract
In lace textile manufacturing, the dyeing process in parallel machine environments faces challenges from sequence-dependent setup times due to color family transitions, machine eligibility constraints based on weight capacities, and probabilistic re-dyeing operations arising from quality inspection failures, which often lead to increased tardiness. To tackle this multi-constrained problem, a stochastic integer programming model is formulated to minimize total estimated tardiness. A novel symmetry-driven two-population collaborative differential evolution (TCDE) algorithm is then proposed. It features two symmetrically complementary subpopulations that balance global exploration and local exploitation: one subpopulation employs chaotic parameter adaptation through a logistic map for symmetrically enhanced exploration, while the other adjusts parameters based on population diversity and convergence speed to facilitate symmetry-aware exploitation. TCDE also incorporates a symmetrical collaborative mechanism, comprising periodic migration of top individuals between subpopulations and elite-set guidance, to enhance both population diversity and convergence efficiency. Extensive computational experiments were conducted on 21 small-scale (optimally validated via CVX) and 15 large-scale synthetic datasets, as well as 21 small-scale (similarly validated) and 20 large-scale industrial datasets. These experiments demonstrate that TCDE significantly outperforms state-of-the-art comparative methods, and ablation studies further verify the critical role of its symmetry-based components, confirming its superiority on the considered problem. Full article
(This article belongs to the Special Issue Meta-Heuristics for Manufacturing Systems Optimization, 3rd Edition)
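TCDE builds on classical differential evolution; the sketch below shows the standard DE/rand/1/bin mutation, crossover, and greedy-selection loop on a toy continuous problem (the sphere function). The scheduling problem itself requires a discrete encoding and the paper's two-population machinery, neither of which is reproduced here.

```python
import random

# Classical differential evolution, DE/rand/1/bin, minimizing a toy
# objective. This is only the single-population core that TCDE's two
# collaborating subpopulations are built around.

def de(fitness, dim=3, pop_size=20, F=0.5, CR=0.9, gens=200, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            # DE/rand/1: three distinct donors, none equal to the target
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            jrand = rng.randrange(dim)        # force at least one mutated gene
            trial = [a[j] + F * (b[j] - c[j])
                     if rng.random() < CR or j == jrand else pop[i][j]
                     for j in range(dim)]
            if fitness(trial) <= fitness(pop[i]):   # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=fitness)

sphere = lambda v: sum(x * x for x in v)      # toy objective
best = de(sphere)
print(sphere(best) < 1e-3)
```

With the fixed seed the run is deterministic and converges comfortably on this easy landscape; the F and CR values are conventional defaults, not the paper's adaptive settings.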

31 pages, 4141 KiB  
Article
Automated Quality Control of Candle Jars via Anomaly Detection Using OCSVM and CNN-Based Feature Extraction
by Azeddine Mjahad and Alfredo Rosado-Muñoz
Mathematics 2025, 13(15), 2507; https://doi.org/10.3390/math13152507 - 4 Aug 2025
Abstract
Automated quality control plays a critical role in modern industries, particularly in environments that handle large volumes of packaged products requiring fast, accurate, and consistent inspections. This work presents an anomaly detection system for candle jars commonly used in industrial and commercial applications, where obtaining labeled defective samples is challenging. Two anomaly detection strategies are explored: (1) a baseline model using convolutional neural networks (CNNs) as an end-to-end classifier and (2) a hybrid approach where features extracted by CNNs are fed into one-class classification (OCC) algorithms, including One-Class SVM (OCSVM), One-Class Isolation Forest (OCIF), One-Class Local Outlier Factor (OCLOF), One-Class Elliptic Envelope (OCEE), One-Class Autoencoder (OCAutoencoder), and Support Vector Data Description (SVDD). Both strategies are trained primarily on non-defective samples, with only a limited number of anomalous examples used for evaluation. Experimental results show that both the pure CNN model and the hybrid methods achieve excellent classification performance. The end-to-end CNN reached 100% accuracy, precision, recall, F1-score, and AUC. The best-performing hybrid model, CNN-based feature extraction followed by OCIF, also achieved 100% across all evaluation metrics, confirming the effectiveness and robustness of the proposed approach. Other OCC algorithms consistently delivered strong results, with all metrics above 95%, indicating solid generalization from predominantly normal data. This approach demonstrates strong potential for quality inspection tasks in scenarios with scarce defective data. Its ability to generalize effectively from mostly normal samples makes it a practical and valuable solution for real-world industrial inspection systems. Future work will focus on optimizing real-time inference and exploring advanced feature extraction techniques to further enhance detection performance. Full article
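The hybrid strategy (CNN features fed into a one-class model) can be sketched with a simplified stand-in: hand-made 2-D "feature vectors" and a Mahalanobis-distance threshold fit only on non-defective samples, loosely analogous to the Elliptic Envelope detector. All numbers and the threshold are invented.

```python
# One-class classification sketch: fit mean and per-dimension variance on
# non-defective "CNN features" only, then score new samples by squared
# Mahalanobis distance (diagonal covariance). Data is invented.

def fit(normal):
    d = len(normal[0])
    mu = [sum(v[j] for v in normal) / len(normal) for j in range(d)]
    var = [sum((v[j] - mu[j]) ** 2 for v in normal) / len(normal)
           for j in range(d)]
    return mu, var

def score(x, mu, var):
    """Squared Mahalanobis distance under a diagonal covariance."""
    return sum((x[j] - mu[j]) ** 2 / var[j] for j in range(len(x)))

normal = [[1.0, 2.0], [1.2, 1.8], [0.8, 2.2], [1.0, 2.0]]  # good jars
mu, var = fit(normal)
threshold = 9.0                    # roughly a 3-sigma-per-dimension cutoff
print(score([1.1, 2.1], mu, var) < threshold)   # True: in-distribution jar
print(score([3.0, 0.0], mu, var) < threshold)   # False: anomalous jar
```

This captures why such systems need almost no defective training data: only the normal class is modeled, and anything far from it is flagged.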

17 pages, 37081 KiB  
Article
MADet: A Multi-Dimensional Feature Fusion Model for Detecting Typical Defects in Weld Radiographs
by Shuai Xue, Wei Xu, Zhu Xiong, Jing Zhang and Yanyan Liang
Materials 2025, 18(15), 3646; https://doi.org/10.3390/ma18153646 - 3 Aug 2025
Abstract
Accurate weld defect detection is critical for ensuring structural safety and evaluating welding quality in industrial applications. Manual inspection methods have inherent limitations, including inefficiency and inadequate sensitivity to subtle defects. Existing detection models, primarily designed for natural images, struggle to adapt to the characteristic challenges of weld X-ray images, such as high noise, low contrast, and inter-defect similarity, leading in particular to missed detections and false positives for small defects. To address these challenges, we propose MADet, a multi-dimensional feature fusion model built as a multi-branch deep fusion network for weld defect detection. The framework incorporates two key innovations: (1) a multi-scale feature fusion network integrated with lightweight attention residual modules, which enhances the perception of fine-grained defect features by leveraging low-level texture information, and (2) an anchor-based feature-selective detection head, which improves discrimination and localization accuracy for five typical defect categories. Extensive experiments on both public and proprietary weld defect datasets demonstrated that MADet achieves significant improvements over state-of-the-art YOLO variants. Specifically, it surpassed the next-best model by 7.41% in mAP@0.5, indicating strong industrial applicability. Full article
(This article belongs to the Section Manufacturing Processes and Systems)
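mAP@0.5, the metric used to compare MADet with the YOLO variants, counts a detection as correct when its intersection-over-union (IoU) with a ground-truth box reaches 0.5. A minimal IoU for axis-aligned (x1, y1, x2, y2) boxes, with invented coordinates:

```python
# IoU of two axis-aligned boxes, the primitive underneath mAP@0.5:
# intersection area divided by union area.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # 0 when disjoint
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

gt = (0, 0, 10, 10)
print(iou(gt, (0, 0, 10, 10)))    # 1.0: perfect detection
print(iou(gt, (5, 0, 15, 10)))    # overlap 50 / union 150 -> ~0.333, rejected at 0.5
print(iou(gt, (20, 20, 30, 30)))  # 0.0: disjoint
```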

27 pages, 4682 KiB  
Article
DERIENet: A Deep Ensemble Learning Approach for High-Performance Detection of Jute Leaf Diseases
by Mst. Tanbin Yasmin Tanny, Tangina Sultana, Md. Emran Biswas, Chanchol Kumar Modok, Arjina Akter, Mohammad Shorif Uddin and Md. Delowar Hossain
Information 2025, 16(8), 638; https://doi.org/10.3390/info16080638 - 27 Jul 2025
Abstract
Jute, a vital lignocellulosic fiber crop with substantial industrial and ecological relevance, continues to suffer considerable yield and quality degradation due to pervasive foliar pathologies. Traditional diagnostic modalities reliant on manual field inspections are inherently constrained by subjectivity, diagnostic latency, and inadequate scalability across geographically distributed agrarian systems. To transcend these limitations, we propose DERIENet, a robust and scalable classification approach within a deep ensemble learning framework. It is meticulously engineered by integrating three high-performing convolutional neural networks—ResNet50, InceptionV3, and EfficientNetB0—along with regularization, batch normalization, and dropout strategies, to accurately classify jute leaves as Cercospora Leaf Spot, Golden Mosaic Virus, or healthy. A key methodological contribution is the design of a novel augmentation pipeline, termed Geometric Localized Occlusion and Adaptive Rescaling (GLOAR), which dynamically modulates photometric and geometric distortions based on image entropy and luminance to synthetically upscale a limited dataset (920 images) into a significantly enriched and diverse dataset of 7800 samples, thereby mitigating overfitting and enhancing domain generalizability. Empirical evaluation, utilizing a comprehensive set of performance metrics—accuracy, precision, recall, F1-score, confusion matrices, and ROC curves—demonstrates that DERIENet achieves a state-of-the-art classification accuracy of 99.89%, with macro-averaged and weighted average precision, recall, and F1-score uniformly at 99.89%, and an AUC of 1.0 across all disease categories. The reliability of the model is validated by the confusion matrix, which shows that 899 of 900 test images were correctly identified, with only one misclassification. Comparative evaluations against ensemble baselines such as DenseNet201, MobileNetV2, and VGG16, as well as the individual base learners, demonstrate that DERIENet performs noticeably better than all baseline models. It provides a highly interpretable, deployment-ready, and computationally efficient architecture that is well suited to integration into edge or mobile platforms for in situ, real-time disease diagnostics in precision agriculture. Full article
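One common way to fuse three backbones in an ensemble like DERIENet is soft voting: average the per-model class-probability vectors and take the argmax (the paper's exact fusion may differ). The probabilities below are invented stand-ins for ResNet50 / InceptionV3 / EfficientNetB0 outputs on a single leaf image.

```python
# Soft-voting ensemble sketch: average class probabilities across
# models, then take the argmax. All probabilities are invented.

CLASSES = ["cercospora_leaf_spot", "golden_mosaic_virus", "healthy"]

def soft_vote(prob_sets):
    n = len(prob_sets)
    avg = [sum(p[c] for p in prob_sets) / n
           for c in range(len(prob_sets[0]))]
    return CLASSES[avg.index(max(avg))], avg

preds = [
    [0.70, 0.20, 0.10],   # model 1
    [0.55, 0.35, 0.10],   # model 2
    [0.40, 0.45, 0.15],   # model 3 disagrees with the other two
]
label, avg = soft_vote(preds)
print(label)   # cercospora_leaf_spot: the averaged vote overrides model 3
```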

21 pages, 3672 KiB  
Article
Research on a Multi-Type Barcode Defect Detection Model Based on Machine Vision
by Ganglong Duan, Shaoyang Zhang, Yanying Shang, Yongcheng Shao and Yuqi Han
Appl. Sci. 2025, 15(15), 8176; https://doi.org/10.3390/app15158176 - 23 Jul 2025
Abstract
Barcodes are ubiquitous in manufacturing and logistics, but defects can reduce decoding efficiency and disrupt the supply chain. Existing studies primarily focus on a single barcode type or rely on small-scale datasets, limiting generalizability. We propose Y8-LiBARNet, a lightweight two-stage framework for multi-type barcode defect detection. In stage 1, a YOLOv8n backbone localizes 1D and 2D barcodes in real time. In stage 2, a dual-branch network integrating ResNet50 and ViT-B/16 via hierarchical attention performs three-class classification on cropped regions of interest (ROIs): intact, defective, and non-barcode. Experiments conducted on the public BarBeR dataset, covering planar/non-planar surfaces, varying illumination, and sensor noise, show that Y8-LiBARNet achieves a detection-stage mAP@0.5 of 0.984 (1D: 0.992; 2D: 0.977) with a peak F1 score of 0.970. Subsequent defect classification attains 0.925 accuracy, 0.925 recall, and a 0.919 F1 score. Compared with single-branch baselines, our framework improves overall accuracy by 1.8–3.4% and enhances defective-barcode recall by 8.9%. A Cohen’s kappa of 0.920 indicates strong label consistency and model robustness. These results demonstrate that Y8-LiBARNet delivers high-precision real-time performance, providing a practical solution for industrial barcode quality inspection. Full article

18 pages, 4165 KiB  
Article
Localization and Pixel-Confidence Network for Surface Defect Segmentation
by Yueyou Wang, Zixuan Xu, Li Mei, Ruiqing Guo, Jing Zhang, Tingbo Zhang and Hongqi Liu
Sensors 2025, 25(15), 4548; https://doi.org/10.3390/s25154548 - 23 Jul 2025
Abstract
Surface defect segmentation based on deep learning has been widely applied in industrial inspection. However, two major challenges persist in specific application scenarios: first, the imbalanced area distribution between defects and the background degrades segmentation performance; second, fine gaps within defects are prone to over-segmentation. To address these issues, this study proposes a two-stage image segmentation network that integrates a Defect Localization Module and a Pixel Confidence Module. In the first stage, the Defect Localization Module performs a coarse localization of defect regions and embeds the resulting feature vectors into the backbone of the second stage. In the second stage, the Pixel Confidence Module captures the probabilistic distribution of neighboring pixels, thereby refining the initial predictions. Experimental results demonstrate that the improved network achieves gains of 1.58% ± 0.80% in mPA and 1.35% ± 0.77% in mIoU on the self-built Carbon Fabric Defect Dataset, and of 2.66% ± 1.12% in mPA and 1.44% ± 0.79% in mIoU on the public Magnetic Tile Defect Dataset, compared to the baseline network. These enhancements translate to more reliable automated quality assurance in industrial production environments. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
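The reported gains are in mPA (mean pixel accuracy) and mIoU (mean intersection-over-union), both derivable from a segmentation confusion matrix C in which C[i][j] counts pixels of true class i predicted as class j. A toy two-class (background/defect) example with invented counts:

```python
# mPA and mIoU from a per-pixel confusion matrix. For each class i:
#   pixel accuracy = TP / (TP + FN), IoU = TP / (TP + FN + FP);
# both are then averaged over classes.

def mpa_miou(C):
    n = len(C)
    pa, iou = [], []
    for i in range(n):
        tp = C[i][i]
        fn = sum(C[i]) - tp                       # class-i pixels missed
        fp = sum(C[r][i] for r in range(n)) - tp  # pixels wrongly labeled i
        pa.append(tp / (tp + fn))
        iou.append(tp / (tp + fn + fp))
    return sum(pa) / n, sum(iou) / n

C = [[90, 10],   # background: 90 correct, 10 predicted as defect
     [ 5, 15]]   # defect: 5 missed, 15 correct
mpa, miou = mpa_miou(C)
print(round(mpa, 4), round(miou, 4))   # 0.825 0.6786
```

The example shows why mIoU is the stricter metric under class imbalance: the small defect class drags mIoU down much more than mPA.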

24 pages, 5200 KiB  
Article
DRFAN: A Lightweight Hybrid Attention Network for High-Fidelity Image Super-Resolution in Visual Inspection Applications
by Ze-Long Li, Bai Jiang, Liang Xu, Zhe Lu, Zi-Teng Wang, Bin Liu, Si-Ye Jia, Hong-Dan Liu and Bing Li
Algorithms 2025, 18(8), 454; https://doi.org/10.3390/a18080454 - 22 Jul 2025
Abstract
Single-image super-resolution (SISR) plays a critical role in enhancing visual quality for real-world applications, including industrial inspection and embedded vision systems. While deep learning-based approaches have made significant progress in SR, existing lightweight SR models often fail to accurately reconstruct high-frequency textures, especially under complex degradation scenarios, resulting in blurry edges and structural artifacts. To address this challenge, we propose a Dense Residual Fused Attention Network (DRFAN), a novel lightweight hybrid architecture designed to enhance high-frequency texture recovery in challenging degradation conditions. Moreover, by coupling convolutional layers and attention mechanisms through gated interaction modules, the DRFAN enhances local details and global dependencies with linear computational complexity, enabling the efficient utilization of multi-level spatial information while effectively alleviating the loss of high-frequency texture details. To evaluate its effectiveness, we conducted ×4 super-resolution experiments on five public benchmarks. The DRFAN achieves the best performance among all compared lightweight models. Visual comparisons show that the DRFAN restores more accurate geometric structures, with up to +1.2 dB/+0.0281 SSIM gain over SwinIR-S on Urban100 samples. Additionally, on a domain-specific rice grain dataset, the DRFAN outperforms SwinIR-S by +0.19 dB in PSNR and +0.0015 in SSIM, restoring clearer textures and grain boundaries essential for industrial quality inspection. The proposed method provides a compelling balance between model complexity and image reconstruction fidelity, making it well-suited for deployment in resource-constrained visual systems and industrial applications. Full article
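PSNR, the headline fidelity metric in the DRFAN comparisons, is 10·log10(MAX²/MSE) for images with peak value MAX (255 for 8-bit). A minimal version on tiny arrays standing in for images:

```python
import math

# Peak signal-to-noise ratio between a reference and a reconstruction.
# Flat lists stand in for image pixel arrays here.

def psnr(ref, test, peak=255.0):
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

ref  = [52, 55, 61, 59, 70, 61, 76, 61]
test = [x + 16 for x in ref]        # uniform +16 error -> MSE = 256
print(round(psnr(ref, test), 2))    # 24.05 dB
```

Gains like the reported +0.19 dB are on this logarithmic scale, so small-looking dB deltas correspond to meaningful MSE reductions.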

26 pages, 6714 KiB  
Article
End-of-Line Quality Control Based on Mel-Frequency Spectrogram Analysis and Deep Learning
by Jernej Mlinarič, Boštjan Pregelj and Gregor Dolanc
Machines 2025, 13(7), 626; https://doi.org/10.3390/machines13070626 - 21 Jul 2025
Abstract
This study presents a novel approach to the end-of-line (EoL) quality inspection of brushless DC (BLDC) motors by implementing a deep learning model that combines Mel-frequency spectrograms (MFS), convolutional neural networks (CNNs), and bidirectional gated recurrent units (BiGRUs). The suggested system utilizes raw vibration and sound signals, recorded during the EoL quality inspection process at the end of an industrial manufacturing line. Recorded signals are transformed directly into MFS without pre-processing. To remove non-informative frequency bands and increase data relevance, a six-step data reduction procedure was implemented. Furthermore, to improve fault characterization, a reference spectrogram was generated from healthy motors. The neural network was trained on a highly imbalanced dataset, using oversampling and Bayesian hyperparameter optimization. The final classification algorithm achieved high accuracy (99%). Traditional EoL inspection methods often rely on threshold-based criteria and expert analysis, which can be inconsistent, time-consuming, and poorly scalable; they struggle to detect complex or subtle patterns associated with early-stage faults. The proposed approach addresses these issues by learning discriminative patterns directly from raw sensor data and automating the classification process. The results confirm that this approach can reduce the need for human expert engagement during commissioning, eliminate redundant inspection steps, and improve fault detection consistency, offering significant production efficiency gains. Full article
(This article belongs to the Special Issue Advances in Noises and Vibrations for Machines)
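The Mel-frequency spectrograms at the heart of this pipeline warp the linear frequency axis onto the perceptual mel scale. The widely used HTK formula and its inverse (one of several mel conventions; libraries such as librosa also offer the Slaney variant):

```python
import math

# HTK mel scale: mel = 2595 * log10(1 + f / 700). By construction,
# 1000 Hz maps to (approximately) 1000 mels.

def hz_to_mel(f):
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

print(round(hz_to_mel(1000.0)))                # 1000
print(round(mel_to_hz(hz_to_mel(440.0)), 6))   # round-trips to 440.0
```

A mel spectrogram is then just an STFT power spectrogram pooled through triangular filters spaced evenly on this scale, which is why low-frequency bands (where motor faults often live) get finer resolution.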

29 pages, 4633 KiB  
Article
Failure Detection of Laser Welding Seam for Electric Automotive Brake Joints Based on Image Feature Extraction
by Diqing Fan, Chenjiang Yu, Ling Sha, Haifeng Zhang and Xintian Liu
Machines 2025, 13(7), 616; https://doi.org/10.3390/machines13070616 - 17 Jul 2025
Abstract
As a key component in the hydraulic brake system of automobiles, the brake joint directly affects the braking performance and driving safety of the vehicle; improving the quality of brake joints is therefore crucial. During processing, due to the complexity of the welding material and welding process, the weld seam is prone to defects such as cracks, pores, undercutting, and incomplete fusion, which can weaken the joint and even lead to product failure. Traditional weld seam detection methods include destructive and non-destructive testing; however, destructive testing has high costs and long cycles, and non-destructive testing, such as radiographic and ultrasonic testing, has problems such as high consumable costs, slow detection speed, or high requirements for operator experience. In response to these challenges, this article proposes a defect detection and classification method for laser welding seams of automotive brake joints based on machine vision inspection technology. Laser-welded automotive brake joints are subjected to weld defect detection and classification, and image processing algorithms are optimized to improve the accuracy of detection and failure analysis by exploiting the efficiency, low cost, flexibility, and automation advantages of machine vision technology. This article first analyzes the common types of weld defects in laser welding of automotive brake joints, including pits (craters), holes, and undercutting, and explores the causes and characteristics of these defects. An image processing algorithm suitable for laser welding of automotive brake joints was then studied, including pre-processing steps such as image smoothing, image enhancement, threshold segmentation, and morphological processing, to extract feature parameters of weld defects. On this basis, a weld seam defect detection and classification system based on a cascade classifier and the AdaBoost algorithm was designed, and efficient recognition and classification of weld seam defects were achieved by training the cascade classifier. The results show that the system can accurately identify and distinguish pit, hole, and undercutting defects in welds, with an average classification accuracy of over 90%. The detection and recognition rate of pit defects reaches 100%, and the detection accuracy of undercutting defects is 92.6%. The overall missed detection rate is less than 3%, with both the missed detection rate and false detection rate for pit defects being 0%. The average detection time per image is 0.24 s, meeting the real-time requirements of industrial automation. Compared with infrared and ultrasonic detection methods, the proposed machine-vision-based detection system has significant advantages in detection speed, surface defect recognition accuracy, and industrial adaptability, providing an efficient and accurate solution for laser welding defect detection of automotive brake joints. Full article
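Two of the pre-processing steps named above, threshold segmentation and morphological processing, can be sketched in pure Python: global thresholding followed by a 3x3 binary erosion that removes single-pixel speckle. A production system would use the OpenCV equivalents (cv2.threshold, cv2.erode); the tiny 5x5 "weld image" is invented.

```python
# Threshold segmentation + 3x3 binary erosion, the morphological step
# that suppresses isolated bright noise before feature extraction.

def threshold(img, t):
    """Binarize: 1 where the pixel exceeds t, else 0."""
    return [[1 if px > t else 0 for px in row] for row in img]

def erode3x3(mask):
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # a pixel survives only if its whole 3x3 neighborhood is set
            out[y][x] = int(all(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

img = [
    [10,  10,  10,  10, 10],
    [10, 200, 200, 200, 10],
    [10, 200, 200, 200, 10],
    [10, 200, 200, 200, 10],
    [10,  10,  10, 200, 10],   # lone bright pixel: speckle noise
]
mask = threshold(img, 128)
eroded = erode3x3(mask)
print(eroded[2][2], eroded[4][3])   # 1 0: blob core kept, speckle removed
```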

72 pages, 22031 KiB  
Article
AI-Enabled Sustainable Manufacturing: Intelligent Package Integrity Monitoring for Waste Reduction in Supply Chains
by Mohammad Shahin, Ali Hosseinzadeh and F. Frank Chen
Electronics 2025, 14(14), 2824; https://doi.org/10.3390/electronics14142824 - 14 Jul 2025
Abstract
Despite advances in automation, the global manufacturing sector continues to rely heavily on manual package inspection, creating bottlenecks in production and increasing labor demands. Although disruptive technologies such as big data analytics, smart sensors, and machine learning have revolutionized industrial connectivity and strategic decision-making, real-time quality control (QC) on conveyor lines remains predominantly analog. This study proposes an intelligent package integrity monitoring system that integrates waste reduction strategies with both narrow and Generative AI approaches. Narrow AI models were deployed to detect package damage at full line speed, aiming to minimize manual intervention and reduce waste. Using a synthetically generated dataset of 200 paired top-and-side package images, we developed and evaluated 10 distinct detection pipelines combining various algorithms, image enhancements, model architectures, and data processing strategies. Several pipeline variants demonstrated high accuracy, precision, and recall, particularly those utilizing a YOLO v8 segmentation model. Notably, targeted preprocessing increased top-view MobileNetV2 accuracy from chance to 67.5%, advanced feature extractors with full enhancements achieved 77.5%, and a segmentation-based ensemble with feature extraction and binary classification reached 92.5% accuracy. These results underscore the feasibility of deploying AI-driven, real-time QC systems for sustainable and efficient manufacturing operations. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Intelligent Manufacturing)

21 pages, 24495 KiB  
Article
UAMS: An Unsupervised Anomaly Detection Method Integrating MSAA and SSPCAB
by Zhe Li, Wenhui Chen and Weijie Wang
Symmetry 2025, 17(7), 1119; https://doi.org/10.3390/sym17071119 - 12 Jul 2025
Abstract
Anomaly detection methods play a crucial role in automated quality control within modern manufacturing systems. In this context, unsupervised methods are increasingly favored due to their independence from large-scale labeled datasets. However, existing methods present limited multi-scale feature extraction ability and may fail to effectively capture subtle anomalies. To address these challenges, we propose UAMS, a pyramid-structured normalizing flow framework that leverages the symmetry in feature recombination to harmonize multi-scale interactions. The proposed framework integrates a Multi-Scale Attention Aggregation (MSAA) module for cross-scale dynamic fusion, as well as a Self-Supervised Predictive Convolutional Attention Block (SSPCAB) for spatial channel attention and masked prediction learning. Experiments on the MVTecAD dataset show that UAMS substantially outperforms state-of-the-art unsupervised methods in detection and localization accuracy, while maintaining high inference efficiency. For example, compared with the baseline model on the carpet category, AUROC improves from 90.8% to 94.5% and AUPRO improves from 91.0% to 92.9%. These findings validate the potential of the proposed method for real industrial inspection scenarios. Full article
(This article belongs to the Section Computer)
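The AUROC figures quoted in the abstract above are the standard image-level metric on MVTecAD-style benchmarks. As an illustration of that metric only (not the UAMS implementation), image-level AUROC can be computed directly from per-image anomaly scores as the probability that a random anomalous image outscores a random normal one; the labels and scores below are made up:

```python
def auroc(labels, scores):
    """Image-level AUROC: probability that a randomly chosen anomalous
    image scores higher than a randomly chosen normal one (ties count half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# 1 = anomalous, 0 = normal; scores are the model's anomaly scores.
labels = [0, 0, 0, 1, 1, 0]
scores = [0.10, 0.30, 0.25, 0.80, 0.20, 0.05]
print(auroc(labels, scores))  # -> 0.75
```

One missed anomaly (the 0.20 score) pulls the value well below 1.0, which is why localization metrics such as AUPRO are usually reported alongside it.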

20 pages, 3802 KiB  
Article
RT-DETR-FFD: A Knowledge Distillation-Enhanced Lightweight Model for Printed Fabric Defect Detection
by Gengliang Liang, Shijia Yu and Shuguang Han
Electronics 2025, 14(14), 2789; https://doi.org/10.3390/electronics14142789 - 11 Jul 2025
Abstract
Automated defect detection in printed fabric manufacturing faces the critical challenge of balancing industrial-grade accuracy with real-time deployment efficiency. To address this, we propose RT-DETR-FFD, a knowledge-distilled detector optimized for printed fabric defect inspection. First, the student model integrates a Fourier cross-stage mixer (FCSM), which disentangles defect features from periodic textile backgrounds through spectral decoupling. Second, we introduce FuseFlow-Net to enable dynamic multi-scale interaction, enhancing discriminative feature representation. Additionally, a learnable positional encoding (LPE) module relaxes rigid geometric constraints and strengthens contextual awareness. Finally, we design a dynamic correlation-guided loss (DCGLoss) for distillation optimization, which leverages masked frequency-channel alignment and cross-domain fusion to streamline knowledge transfer. Experiments demonstrate that the distilled model achieves an mAP@0.5 of 82.1%, surpassing the baseline RT-DETR-R18 by 6.3% while reducing parameters by 11.7%. This work establishes an effective paradigm for deploying high-precision defect detectors in resource-constrained industrial scenarios, advancing real-time quality control in textile manufacturing.
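The paper's DCGLoss builds on the general idea behind knowledge distillation: aligning the student's outputs with the teacher's softened outputs. As a hedged sketch of that underlying idea only (not the authors' loss; the logits and temperature below are made up), a classic temperature-scaled KL distillation term looks like:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over one logit vector."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Temperature-scaled KL(teacher || student) distillation term."""
    p = softmax(teacher_logits, T)  # softened teacher distribution
    q = softmax(student_logits, T)  # softened student distribution
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return kl * T * T  # T^2 keeps the term's scale comparable across T

# Identical logits give zero loss; mismatched logits give a positive loss.
print(kd_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))  # -> 0.0
```

A higher temperature T softens both distributions, exposing the teacher's relative rankings among non-target classes, which is where much of the transferred knowledge lies.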

20 pages, 4423 KiB  
Article
Pointer Meter Reading Recognition Based on YOLOv11-OBB Rotated Object Detection
by Xing Xu, Liming Wang, Chunhua Deng and Bi He
Appl. Sci. 2025, 15(13), 7460; https://doi.org/10.3390/app15137460 - 3 Jul 2025
Abstract
In the domain of intelligent inspection, precise recognition of pointer meter readings is of paramount importance for monitoring equipment conditions. To address the limited robustness and reduced detection accuracy that existing object-detection-based reading recognition methods exhibit in practice, we propose a novel approach that integrates YOLOv11-OBB rotated object detection with adaptive template matching. First, the YOLOv11 object detection algorithm is employed with an oriented bounding box (OBB) detection mechanism, which strengthens feature extraction for the pointer's rotation direction and the dial center, thereby improving detection robustness. Subsequently, an enhanced angle-resolution algorithm is used to build a mapping model between the pointer's deflection angle and the instrument's range, enabling precise reading calculation. Experimental results show that the proposed method achieves a mean Average Precision (mAP) of 99.1% on a self-compiled pointer instrument dataset. The average relative error of the readings is 0.41568%, with a maximum relative error below 1.1468%. Furthermore, the method remains robust and reliable on low-quality meter images affected by blur, darkness, overexposure, and tilt. The proposed approach provides a highly adaptable and reliable solution for pointer meter reading recognition in intelligent industrial settings, with significant practical value.
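The angle-to-range mapping step described in the abstract reduces, in its simplest form, to linear interpolation over the dial's angular sweep. A minimal sketch of that step follows; the specific angles and gauge range are hypothetical, not taken from the paper's dataset:

```python
def meter_reading(angle_deg, angle_min, angle_max, range_min, range_max):
    """Linearly map a detected pointer deflection angle onto the
    instrument's scale, given the dial's angular sweep and range."""
    frac = (angle_deg - angle_min) / (angle_max - angle_min)
    return range_min + frac * (range_max - range_min)

# A hypothetical 0-1.6 MPa gauge whose scale sweeps from 45 deg to 315 deg:
reading = meter_reading(180.0, 45.0, 315.0, 0.0, 1.6)
print(reading)  # mid-sweep -> 0.8 MPa
```

Real dials are often nonlinear or unevenly graduated, which is presumably why the paper uses an enhanced angle-resolution algorithm rather than a single linear map.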
