
Computer Vision and Sensing Technologies for Industrial Quality Inspection: 2nd Edition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 30 June 2025

Special Issue Editors


Prof. Dr. Hong-Dar Lin
Guest Editor
Department of Industrial Engineering and Management, Chaoyang University of Technology, Taichung 413310, Taiwan
Interests: computer vision; optical inspection; quality management; automated industrial inspection

Prof. Dr. Cheng-Hsiung Hsieh
Guest Editor
Department of Computer Science and Information Engineering, Chaoyang University of Technology, Taichung 413310, Taiwan
Interests: image processing; computer vision; signal filtering; artificial intelligence; grey system with applications

Prof. Dr. Hsin-Chieh Wu
Guest Editor
Department of Industrial Engineering and Management, Chaoyang University of Technology, Taichung 413310, Taiwan
Interests: ergonomics; work physiology; human-computer interface

Special Issue Information

Dear Colleagues,

Although quality inspections play an essential role in successful operations, finding effective ways to carry them out can be a challenge. Combined with advanced computer vision and sensing technologies, quality inspection can become integral to a variety of intelligent applications in smart manufacturing and production, such as object detection, classification, tracking, and counting. The aim is to match or exceed human-level precision in quality inspection through automation. Computer-vision-based applications minimize human intervention, optimize operational efficiency, and reduce labor costs. In addition, new sensing technologies provide an excellent means of measuring, inspecting, sorting, and grading products effectively and efficiently.

This Special Issue will include research papers reporting on case studies of artificial intelligence techniques. It will showcase the need to optimize algorithms, inference frameworks, and hardware accelerators to achieve good performance in quality inspection. Its main focus is computer vision and sensing technologies for industrial quality inspection, with possible topics including, but not limited to, imaging techniques, image processing methods, vision systems, and system optimization. Articles on related industrial inspection topics, such as quality inspection using machine learning and data-driven strategies, are also welcome. Submissions of review articles and original research papers are invited for this Special Issue.

Prof. Dr. Hong-Dar Lin
Prof. Dr. Cheng-Hsiung Hsieh
Prof. Dr. Hsin-Chieh Wu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • computer vision
  • sensing technologies
  • industrial quality inspection
  • automatic optical inspection
  • artificial intelligence techniques
  • machine learning
  • deep learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (4 papers)


Research

24 pages, 10571 KiB  
Article
Evaluation of Network Design and Solutions of Fisheye Camera Calibration for 3D Reconstruction
by Sina Rezaei and Hossein Arefi
Sensors 2025, 25(6), 1789; https://doi.org/10.3390/s25061789 - 13 Mar 2025
Abstract
The evolution of photogrammetry has been significantly influenced by advances in camera technology, particularly the emergence of spherical cameras. These devices offer extensive photographic coverage and are increasingly used in photogrammetry applications thanks to their user-friendly configuration, especially in their low-cost versions. Despite these advantages, such cameras suffer from high image distortion, which necessitates calibration solutions specialised for fisheye images, the primary geometry of the raw files. This paper evaluates fisheye calibration processes for the effective use of low-cost spherical cameras in 3D reconstruction and for the verification of geometric stability. The optical calibration parameters include the focal length, pixel positions, and distortion coefficients. Emphasis was placed on evaluating camera calibration solutions, calibration network design, and the software or toolboxes that support the corresponding geometry and calibration processing. Accuracy, correctness, computational time, and parameter stability were assessed, together with the influence of the calibration parameters on the accuracy of the 3D reconstruction. The assessment was conducted on a previous case study of graffiti on an underpass in Wiesbaden, Germany. The robust calibration solution is a two-step process comprising a pre-calibration stage and the best possible network design. Fisheye undistortion was performed using OpenCV, and the calibration parameters were then optimised by self-calibration through bundle adjustment, yielding both the calibration parameters and the 3D reconstruction in Agisoft Metashape. Compared with 3D calibration, self-calibration, and a pre-calibration strategy, the two-step calibration process demonstrated an average improvement of 2826 points in the 3D sparse point cloud and a 0.22 m decrease in the re-projection error derived from the front-lens images of two spherical cameras. The accuracy and correctness of the 3D point cloud, and the statistical analysis of the parameters in the two-step calibration solution, are presented as the paper's quality assessment, benchmarked against a 3D point cloud produced by a laser scanner.
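For readers who want to reproduce the general workflow, the following is a minimal Python sketch of a two-step fisheye procedure with OpenCV: pre-calibration on checkerboard views to recover the focal length, principal point, and distortion coefficients, followed by undistortion of a fisheye frame. This illustrates the technique in general, not the authors' exact pipeline; the board size, file paths, and calibration flags are assumptions.

```python
import glob

import cv2
import numpy as np

BOARD = (9, 6)  # inner-corner grid of the assumed checkerboard

# 3D board coordinates (z = 0), shaped (1, N, 3) as cv2.fisheye expects
objp = np.zeros((1, BOARD[0] * BOARD[1], 3), np.float64)
objp[0, :, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)

obj_points, img_points, img_size = [], [], None
for path in glob.glob("calib/*.jpg"):  # assumed folder of calibration views
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_points.append(objp)
        img_points.append(corners.reshape(1, -1, 2).astype(np.float64))

# Step 1: pre-calibration -> intrinsics K and fisheye distortion D
K, D = np.zeros((3, 3)), np.zeros((4, 1))
flags = cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC | cv2.fisheye.CALIB_FIX_SKEW
rms, K, D, _, _ = cv2.fisheye.calibrate(
    obj_points, img_points, img_size, K, D, flags=flags)
print(f"RMS re-projection error: {rms:.3f} px")

# Step 2: undistort a fisheye frame before structure-from-motion processing
img = cv2.imread("front_lens.jpg")  # hypothetical front-lens image
undistorted = cv2.fisheye.undistortImage(img, K, D, Knew=K)
cv2.imwrite("front_lens_undistorted.jpg", undistorted)
```

In the paper's two-step solution, K and D would then be refined by self-calibration through bundle adjustment in the structure-from-motion stage.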

29 pages, 24098 KiB  
Article
A Deep Transfer Learning-Based Visual Inspection System for Assembly Defects in Similar Types of Manual Tool Products
by Hong-Dar Lin, Hsiang-Ling Wu and Chou-Hsien Lin
Sensors 2025, 25(6), 1645; https://doi.org/10.3390/s25061645 - 7 Mar 2025
Abstract
This study introduces an advanced inspection system for manual tool assembly, focusing on defect detection and classification in flex-head ratchet wrenches as a modern alternative to traditional inspection methods. Using a deep learning R-CNN approach with transfer learning, specifically utilizing the AlexNet architecture, the system accurately identifies and classifies assembly defects across similar tools. This study demonstrates how a pre-trained defect detection model for older manual tool models can be efficiently adapted to new models with only moderate amounts of new samples and fine-tuning. Experimental evaluations at three assembly stations show that the AlexNet model achieves a classification accuracy of 98.67% at the station with the highest defect variety, outperforming the R-CNN model with randomly initialized weights. Even with a 40% reduction in sample size for new products, the AlexNet model maintains a classification accuracy of 98.66%. Additionally, compared to R-CNN, it improves average effectiveness by 9% and efficiency by 26% across all stations. A sensitivity analysis further reveals that the proposed method reduces training samples by 50% at 50% similarity while enhancing effectiveness by 13.06% and efficiency by 5.31%.
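As a rough illustration of the transfer-learning idea, the sketch below adapts an ImageNet-pre-trained AlexNet to a new defect-classification task in PyTorch. The class count, layer-freezing policy, and optimizer settings are assumptions; the paper's full system wraps AlexNet inside an R-CNN detector, which is omitted here.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_DEFECT_CLASSES = 5  # hypothetical number of defect categories

# Start from ImageNet weights instead of random initialization
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

# Freeze the convolutional feature extractor; fine-tune only the classifier
for param in model.features.parameters():
    param.requires_grad = False

# Replace the final fully connected layer for the new label set
model.classifier[6] = nn.Linear(4096, NUM_DEFECT_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a batch of cropped assembly-region images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Reusing pre-trained weights is what lets such a model reach high accuracy with far fewer new samples than training from randomly initialized weights, mirroring the reduced-sample results reported above.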

17 pages, 7393 KiB  
Article
Laser Stripe Centerline Extraction Method for Deep-Hole Inner Surfaces Based on Line-Structured Light Vision Sensing
by Huifu Du, Daguo Yu, Xiaowei Zhao and Ziyang Zhou
Sensors 2025, 25(4), 1113; https://doi.org/10.3390/s25041113 - 12 Feb 2025
Abstract
This paper proposes a point cloud post-processing method based on the minimum spanning tree (MST) and depth-first search (DFS) to extract laser stripe centerlines from the complex inner surfaces of deep holes. Addressing the limitations of traditional image processing methods, which are affected by burrs and low-frequency random noise, the method uses 360° structured light to illuminate the inner wall of the deep hole. A sensor captures laser stripe images, and the Steger algorithm is employed to extract sub-pixel point clouds. An MST is then used to construct the point cloud connectivity structure, while DFS is applied for path search and noise removal to enhance extraction accuracy. Experimental results demonstrate that the method significantly improves extraction accuracy, with a Dice similarity coefficient (DSC) approaching 1 and a maximum Hausdorff distance (HD) of 3.3821 pixels, outperforming previous methods. This study provides an efficient and reliable solution for the precise extraction of complex laser stripes and lays a solid data foundation for subsequent feature parameter calculations and 3D reconstruction.
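The MST-plus-DFS post-processing lends itself to a compact SciPy sketch. The version below treats the stripe centerline as the longest path in the minimum spanning tree of a k-nearest-neighbour graph over the Steger sub-pixel points, discarding off-path branches as burr noise; it uses SciPy's shortest-path routine for the two-sweep longest-path search, which on a tree finds the same path a hand-rolled DFS would. This is one plausible reading of the method, not the authors' code; the neighbourhood size k is an assumption.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path
from scipy.spatial import cKDTree

def extract_centerline(points: np.ndarray, k: int = 8) -> np.ndarray:
    """Order stripe points along the MST's longest path; drop branch noise."""
    n = len(points)
    # 1. k-nearest-neighbour graph with Euclidean edge weights
    dists, idx = cKDTree(points).query(points, k=k + 1)  # column 0 is the point itself
    rows = np.repeat(np.arange(n), k)
    graph = csr_matrix((dists[:, 1:].ravel(), (rows, idx[:, 1:].ravel())),
                       shape=(n, n))

    # 2. Minimum spanning tree connects the stripe into a single structure
    mst = minimum_spanning_tree(graph)

    # 3. Longest path in the tree (its diameter) via a double sweep:
    #    farthest node from an arbitrary start, then farthest node from it
    d0 = shortest_path(mst, directed=False, indices=0)
    a = int(np.argmax(np.where(np.isinf(d0), -1.0, d0)))
    da, pred = shortest_path(mst, directed=False, indices=a,
                             return_predecessors=True)
    b = int(np.argmax(np.where(np.isinf(da), -1.0, da)))

    # 4. Walk predecessors from b back to a: this ordered path is the
    #    centerline; points off the path are treated as burr/noise branches
    path = [b]
    while path[-1] != a:
        path.append(int(pred[path[-1]]))
    return points[np.array(path[::-1])]

# Usage on a synthetic noisy stripe (a gentle parabola plus jitter)
x = np.linspace(0, 100, 400)
pts = np.column_stack([x, 0.01 * x**2 + np.random.normal(0, 0.3, x.size)])
print(extract_centerline(pts).shape)
```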

13 pages, 10686 KiB  
Article
HubNet: An E2E Model for Wheel Hub Text Detection and Recognition Using Global and Local Features
by Yue Zeng and Cai Meng
Sensors 2024, 24(19), 6183; https://doi.org/10.3390/s24196183 - 24 Sep 2024
Cited by 1
Abstract
Automatic detection and recognition of wheel hub text, which can boost the efficiency and accuracy of product information recording, are undermined by the obscurity and orientation variability of text on wheel hubs. To address these issues, this paper constructs a wheel hub text dataset and proposes a wheel hub text detection and recognition model called HubNet. The dataset comprises images captured in real industrial production-line scenes: 446 images, 934 word instances, and 2947 character instances. HubNet is an end-to-end text detection and recognition model that not only comprises conventional detection and recognition heads but also incorporates a feature cross-fusion module, which improves the accuracy of recognizing wheel hub text by utilizing both global and local features. Experimental results show that, on the wheel hub text dataset, HubNet achieves an accuracy of 86.5%, a recall of 79.4%, and an F1-score of 0.828, with the feature cross-fusion module increasing accuracy by 2% to 4%. The wheel hub dataset and HubNet offer a significant reference for the automatic detection and recognition of wheel hub text.
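To make the global-local fusion idea concrete, here is a minimal PyTorch module in the spirit of a feature cross-fusion block: a pooled global descriptor is broadcast back onto the local feature map and mixed with a 1x1 convolution. It is an illustrative sketch, not HubNet's actual architecture; the channel width is an assumption.

```python
import torch
import torch.nn as nn

class GlobalLocalFusion(nn.Module):
    """Fuse a global context vector with local (spatial) features."""

    def __init__(self, channels: int = 256):
        super().__init__()
        self.global_pool = nn.AdaptiveAvgPool2d(1)  # global descriptor
        self.mix = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, local_feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = local_feat.shape
        # Broadcast the pooled global vector across all spatial positions
        global_feat = self.global_pool(local_feat).expand(b, c, h, w)
        fused = torch.cat([local_feat, global_feat], dim=1)
        return self.act(self.mix(fused))

# Usage: insert between a shared backbone and the detection/recognition heads
feat = torch.randn(2, 256, 32, 32)         # dummy backbone feature map
print(GlobalLocalFusion(256)(feat).shape)  # torch.Size([2, 256, 32, 32])
```

The design intent is that each local position sees the image-wide context, which helps disambiguate obscured or rotated characters that are hard to read from local features alone.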
