Search Results (132)

Search Parameters:
Keywords = automatic surface inspection

22 pages, 6159 KiB  
Article
A Machine Vision System for Gear Defect Detection
by Pevril Demir Arı, Fatih Akkoyun and Ali Ercetin
Processes 2025, 13(6), 1727; https://doi.org/10.3390/pr13061727 - 31 May 2025
Viewed by 942
Abstract
This study introduces a machine vision system (MVS) developed for the inspection and removal of defective gears to enhance the efficiency of mass production processes. The system employs a rotary table that transports gears through the inspection stage at a controlled speed. Various defects, including missing teeth, surface irregularities, and dimensional deviations, are reliably identified through this method. Faulty gears are automatically separated from the production line using a pneumatic actuator. Experimental evaluations confirm the system’s high accuracy and consistency, with a defect detection standard deviation of less than 1%. This level of deviation corresponds to a defect detection accuracy exceeding 98%, with both precision and recall consistently surpassing 96%. By reducing manual intervention and accelerating quality control procedures, the proposed system contributes to improved production efficiency and product quality, offering a practical and effective solution for manufacturing environments.
(This article belongs to the Section AI-Enabled Process Engineering)
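The accuracy, precision, and recall figures quoted above follow standard confusion-matrix definitions. A minimal Python sketch with invented pass/reject counts (the paper does not publish its confusion matrix) shows how those metrics are computed:

def detection_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Return accuracy, precision, and recall for a defect detector."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,   # correct decisions over all gears
        "precision": tp / (tp + fp),     # flagged gears that were truly defective
        "recall": tp / (tp + fn),        # defective gears that were flagged
    }

if __name__ == "__main__":
    # Illustrative counts only, not data from the paper.
    print(detection_metrics(tp=96, fp=3, tn=890, fn=4))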

27 pages, 18217 KiB  
Article
Landslide Identification in UAV Images Through Recognition of Landslide Boundaries and Ground Surface Cracks
by Zhan Cheng, Wenping Gong, Michel Jaboyedoff, Jun Chen, Marc-Henri Derron and Fumeng Zhao
Remote Sens. 2025, 17(11), 1900; https://doi.org/10.3390/rs17111900 - 30 May 2025
Cited by 1 | Viewed by 804
Abstract
Landslides are among the most frequent and destructive geohazards worldwide, and the accurate identification of potential landslides plays a vital role in managing landslide risk. Unmanned aerial vehicle (UAV) techniques have recently gained popularity in landslide assessment; however, most current UAV-image-based landslide identification still relies on visual inspection. In this paper, an image-analysis-based landslide identification framework is developed to detect landslides in UAV images by recognizing landslide boundaries and ground surface cracks. In this framework, object-oriented image analysis is used to identify potential landslide boundaries in the input UAV images, while ground surface cracks are recognized by an automatic crack recognition model trained through a deep transfer learning strategy. This transfer learning strategy allows the trained model to exploit both the features of local ground surface cracks in the study area and a crack recognition model previously developed from crack samples collected at other landslide sites. The landslide boundaries and ground surface cracks are then fused using Boolean operations, and the fusion results support informed landslide identification in UAV images. To illustrate the effectiveness of the proposed framework, the Heifangtai Terrace of Gansu, China, was selected as a study area, and the identification results are validated through comparison with field survey results.
(This article belongs to the Special Issue Artificial Intelligence and Remote Sensing for Geohazards)
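The fusion step described above combines two binary masks with Boolean operations. A minimal NumPy sketch, assuming the boundary and crack masks are already available as arrays (variable names and confidence labels are illustrative, not the authors' code):

import numpy as np

def fuse_landslide_evidence(boundary_mask: np.ndarray,
                            crack_mask: np.ndarray) -> dict:
    """Combine two binary masks of the same shape with Boolean operations."""
    boundary = boundary_mask.astype(bool)
    cracks = crack_mask.astype(bool)
    return {
        # regions supported by both lines of evidence (high confidence)
        "confirmed": boundary & cracks,
        # regions flagged by either source (candidate inventory)
        "candidate": boundary | cracks,
        # boundary polygons with no crack support (to be field-checked)
        "boundary_only": boundary & ~cracks,
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    b = rng.random((256, 256)) > 0.7
    c = rng.random((256, 256)) > 0.9
    fused = fuse_landslide_evidence(b, c)
    print({k: int(v.sum()) for k, v in fused.items()})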

24 pages, 103560 KiB  
Article
Automated Crack Width Measurement in 3D Models: A Photogrammetric Approach with Image Selection
by Huseyin Yasin Ozturk and Emanuele Zappa
Information 2025, 16(6), 448; https://doi.org/10.3390/info16060448 - 27 May 2025
Viewed by 617
Abstract
Structural cracks can critically undermine infrastructure integrity, driving the need for precise, scalable inspection methods beyond conventional visual or 2D image-based approaches. This study presents an automated system integrating photogrammetric 3D reconstruction with deep learning to quantify crack dimensions in a spatial context. Multiple images are processed via Agisoft Metashape to generate high-fidelity 3D meshes. A subset of images is then automatically selected based on camera orientation and distance, and a deep learning algorithm is applied to detect cracks in the 2D images. The detected crack edges are projected onto the 3D mesh, enabling width measurements grounded in the structure’s true geometry rather than perspective-distorted 2D approximations. This methodology addresses the key limitations of traditional methods (parallax, occlusion, and surface curvature errors) and shows how these limitations can be mitigated by spatially anchoring measurements to the 3D model. Laboratory validation confirms the system’s robustness, with controlled tests highlighting the importance of near-orthogonal camera angles and ground sample distance (GSD) thresholds to ensure crack detectability. By synthesizing photogrammetry and a convolutional neural network (CNN), the framework eliminates subjectivity in inspections, enhances safety by reducing manual intervention, and provides engineers with dimensionally accurate data for maintenance decisions.
(This article belongs to the Special Issue Crack Identification Based on Computer Vision)
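The image-selection step can be illustrated with a short sketch that keeps only views whose angle to the surface normal and whose ground sample distance stay below thresholds. The limits and the pinhole GSD approximation below are assumptions for illustration, not the paper's calibrated values:

import numpy as np

def select_views(view_dirs, distances, surface_normal,
                 max_angle_deg=20.0, focal_px=4000.0, pixel_gsd_max=0.2e-3):
    """Return indices of views suitable for crack-width measurement."""
    n = surface_normal / np.linalg.norm(surface_normal)
    keep = []
    for i, (d, dist) in enumerate(zip(view_dirs, distances)):
        d = np.asarray(d, float)
        d /= np.linalg.norm(d)
        # angle between the (reversed) viewing ray and the surface normal
        angle = np.degrees(np.arccos(np.clip(np.dot(-d, n), -1.0, 1.0)))
        gsd = dist / focal_px  # meters per pixel (pinhole approximation)
        if angle <= max_angle_deg and gsd <= pixel_gsd_max:
            keep.append(i)
    return keep

if __name__ == "__main__":
    dirs = [(0, 0, -1), (0.5, 0, -0.87), (0, 0, -1)]
    dists = [0.6, 0.6, 1.5]  # meters
    print(select_views(dirs, dists, surface_normal=(0, 0, 1)))  # -> [0]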

18 pages, 5373 KiB  
Article
Novel Spatio-Temporal Joint Learning-Based Intelligent Hollowing Detection in Dams for Low-Data Infrared Images
by Lili Zhang, Zihan Jin, Yibo Wang, Ziyi Wang, Zeyu Duan, Taoran Qi and Rui Shi
Sensors 2025, 25(10), 3199; https://doi.org/10.3390/s25103199 - 19 May 2025
Viewed by 470
Abstract
Concrete dams are prone to various hidden defects after long-term operation, which can pose significant risks if not detected in time. However, existing hollowing detection techniques are scarce and inefficient in meeting the demands of comprehensive coverage and intelligent management for regular inspections. Hence, we propose an innovative, non-destructive infrared inspection method based on a purpose-built dataset and deep learning algorithms. We first modeled the surface temperature field variation of concrete dams as a one-dimensional, non-stationary partial differential equation with a Robin boundary condition and designed physics-informed neural networks (PINNs) with multiple subnets to compute the temperature values automatically. Secondly, we extracted the time-domain features in one-dimensional space and used diffusion techniques to generate synthetic infrared images of dam hollowing by converting the one-dimensional temperatures into two-dimensional fields. Finally, we employed adaptive joint learning to obtain the spatio-temporal features. Experiments on the constructed dataset demonstrate that the proposed method can handle the low-data (few-shot real image) setting. Our method achieved 94.7% recognition accuracy on few-shot real images, which is 17.9% and 5.8% higher than the maximum entropy and classical OTSU methods, respectively. Furthermore, it attained a sub-10% cross-sectional calculation error for hollowing dimensions, outperforming the maximum entropy (70.5% error reduction) and OTSU (7.4% error reduction) methods, making it a novel approach for automated, intelligent hollowing detection.
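The physics-informed part of the method rests on a one-dimensional, non-stationary heat-conduction model with a Robin boundary condition. A minimal PyTorch sketch of such a PINN loss, with placeholder material constants and a single small network rather than the paper's multi-subnet design, might look like this:

import torch
import torch.nn as nn

alpha, k, h, T_amb = 1e-6, 2.0, 10.0, 293.0  # diffusivity, conductivity, film coefficient, ambient (placeholders)

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))

def grad(y, x):
    return torch.autograd.grad(y, x, grad_outputs=torch.ones_like(y),
                               create_graph=True)[0]

def pinn_loss(x_in, t_in, x_bc, t_bc):
    """PDE residual in the interior plus the Robin condition at the surface x = 0."""
    x_in.requires_grad_(True); t_in.requires_grad_(True)
    u = net(torch.cat([x_in, t_in], dim=1))
    u_t = grad(u, t_in)
    u_x = grad(u, x_in)
    u_xx = grad(u_x, x_in)
    pde = u_t - alpha * u_xx                 # u_t = alpha * u_xx

    x_bc.requires_grad_(True)
    ub = net(torch.cat([x_bc, t_bc], dim=1))
    ub_x = grad(ub, x_bc)
    robin = -k * ub_x - h * (ub - T_amb)     # -k du/dx = h (u - T_amb) at x = 0

    return (pde ** 2).mean() + (robin ** 2).mean()

if __name__ == "__main__":
    xi, ti = torch.rand(256, 1), torch.rand(256, 1)
    xb, tb = torch.zeros(64, 1), torch.rand(64, 1)
    print(float(pinn_loss(xi, ti, xb, tb)))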

22 pages, 9648 KiB  
Article
Three-Dimensional Real-Scene-Enhanced GNSS/Intelligent Vision Surface Deformation Monitoring System
by Yuanrong He, Weijie Yang, Qun Su, Qiuhua He, Hongxin Li, Shuhang Lin and Shaochang Zhu
Appl. Sci. 2025, 15(9), 4983; https://doi.org/10.3390/app15094983 - 30 Apr 2025
Viewed by 660
Abstract
With the acceleration of urbanization, surface deformation monitoring has become crucial. Existing monitoring systems face several challenges, such as data singularity, the poor nighttime monitoring quality of video surveillance, and fragmented visual data. To address these issues, this paper presents a 3D real-scene (3DRS)-enhanced GNSS/intelligent vision surface deformation monitoring system. The system integrates GNSS monitoring terminals and multi-source meteorological sensors to accurately capture minute displacements at monitoring points and multi-source Internet of Things (IoT) data, which are then automatically stored in MySQL databases. To enhance the functionality of the system, the visual sensor data are fused with 3D models through streaming media technology, enabling 3D real-scene augmented reality to support dynamic deformation monitoring and visual analysis. WebSocket-based remote lighting control is implemented to enhance the quality of video data at night. The spatiotemporal fusion of UAV aerial data with 3D models is achieved through Blender image-based rendering, while edge detection is employed to extract crack parameters from intelligent inspection vehicle data. The 3DRS model is constructed through UAV oblique photography, 3D laser scanning, and the combined use of SVSGeoModeler and SketchUp. A visualization platform for surface deformation monitoring is built on the 3DRS foundation, adopting an “edge collection–cloud fusion–terminal interaction” approach. This platform dynamically superimposes GNSS and multi-source IoT monitoring data onto the 3D spatial base, enabling spatiotemporal correlation analysis of millimeter-level displacements and early risk warning.
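As one small example of the kind of WebSocket-based remote lighting control mentioned above, the sketch below uses the third-party websockets package; the endpoint URL and message schema are invented for illustration, since the paper does not publish its protocol:

import asyncio
import json
import websockets  # pip install websockets

async def set_site_lighting(level: int, uri: str = "ws://monitor.example/lighting"):
    """Ask the on-site controller to switch the floodlights to `level` (0-100)."""
    async with websockets.connect(uri) as ws:
        await ws.send(json.dumps({"cmd": "set_level", "level": level}))
        reply = await ws.recv()  # controller acknowledgement
        return json.loads(reply)

if __name__ == "__main__":
    # Requires a reachable controller; shown only as a usage pattern.
    try:
        print(asyncio.run(set_site_lighting(80)))
    except OSError as exc:
        print(f"controller unreachable: {exc}")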

27 pages, 16583 KiB  
Article
Reinforcement Learning Approach to Optimizing Profilometric Sensor Trajectories for Surface Inspection
by Sara Roos-Hoefgeest, Mario Roos-Hoefgeest, Ignacio Álvarez and Rafael C. González
Sensors 2025, 25(7), 2271; https://doi.org/10.3390/s25072271 - 3 Apr 2025
Viewed by 695
Abstract
High-precision surface defect detection in manufacturing often relies on laser triangulation profilometric sensors, which provide detailed and accurate surface measurements along a line. Accurate motion between the sensor and workpiece, usually managed by robotic systems, is critical for maintaining optimal distance and orientation. This paper introduces a novel Reinforcement Learning (RL) approach to optimize inspection trajectories for profilometric sensors based on the boustrophedon scanning method. The RL model dynamically adjusts sensor position and tilt to ensure consistent profile distribution and high-quality scanning. We use a simulated environment replicating real-world conditions, including sensor noise and surface irregularities, to plan trajectories offline using CAD models. Key contributions include designing a state space, action space, and reward function tailored for profilometric sensor inspection. The Proximal Policy Optimization (PPO) algorithm trains the RL agent to optimize these trajectories effectively. Validation involves testing the model on various parts in simulation and performing real-world inspections with a UR3e robotic arm, demonstrating the approach’s practicality and effectiveness.
(This article belongs to the Special Issue Applications of Manufacturing and Measurement Sensors: 2nd Edition)
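The reward design is the core of such an RL formulation. Below is a minimal sketch of a reward consistent with the description above, penalizing deviations from the sensor's stand-off distance and tilt while rewarding scan progress; weights and tolerances are illustrative assumptions, not the paper's values:

def step_reward(distance_m, tilt_deg, progress_m,
                d_opt=0.15, d_tol=0.02, tilt_tol=5.0,
                w_dist=1.0, w_tilt=1.0, w_prog=10.0):
    """Reward for one control step of the scanning trajectory."""
    dist_err = abs(distance_m - d_opt) / d_tol  # 1.0 at the tolerance edge
    tilt_err = abs(tilt_deg) / tilt_tol
    # Quadratic penalties keep the sensor in its working range; the progress
    # term encourages covering new surface instead of hovering in place.
    return w_prog * progress_m - w_dist * dist_err ** 2 - w_tilt * tilt_err ** 2

if __name__ == "__main__":
    print(step_reward(distance_m=0.16, tilt_deg=2.0, progress_m=0.01))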

25 pages, 11695 KiB  
Article
Multi-Scale Crack Detection and Quantification of Concrete Bridges Based on Aerial Photography and Improved Object Detection Network
by Liming Zhou, Haowen Jia, Shang Jiang, Fei Xu, Hao Tang, Chao Xiang, Guoqing Wang, Hemin Zheng and Lingkun Chen
Buildings 2025, 15(7), 1117; https://doi.org/10.3390/buildings15071117 - 29 Mar 2025
Cited by 2 | Viewed by 1158
Abstract
Regular crack detection is essential for extending the service life of bridges. However, the image data collected during bridge crack inspections are difficult to convert into physical information and into intuitive, comprehensive Three-Dimensional (3D) models that incorporate crack information. To address these challenges, an intelligent crack detection method for bridge surface damage based on Unmanned Aerial Vehicles (UAVs) is proposed, incorporating a three-stage detection, quantification, and visualization process. This method enables automatic crack detection, quantification, and localization in a 3D model, generating a bridge model that includes crack details and distribution. The key contributions of this method are as follows: (1) the DCN-BiFPN-EMA-YOLO (DBE-YOLO) crack detection network is introduced, which improves the model’s ability to extract crack features from complex backgrounds and enhances its multi-scale detection capability for accurate detection; (2) a more comprehensive crack quantification method is proposed and integrated with the automated crack detection system for accurate quantification and efficient processing; (3) crack information is mapped onto the 3D model by computing the camera pose of each image, enabling intuitive crack visualization. Experimental results from tests on a concrete beam and an urban bridge demonstrate that the proposed method accurately identifies and quantifies crack images captured by UAVs. The DBE-YOLO network achieves an accuracy of 96.79% and an F1 score of 88.51%, improving accuracy by 3.19% and the F1 score by 3.8% compared to the original model. The quantification accuracy is within 10% of the error margin of traditional manual inspection. A 3D bridge model was also constructed and integrated with crack information.
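Crack quantification from a binary detection mask is commonly done by skeletonizing the mask and reading the distance transform along the skeleton. The sketch below shows that generic approach, not the paper's exact quantification module, and assumes the pixel-to-millimeter scale is known from the camera pose:

import cv2
import numpy as np
from skimage.morphology import skeletonize  # pip install scikit-image

def crack_widths_px(mask: np.ndarray) -> np.ndarray:
    """Return per-skeleton-pixel width estimates (in pixels) for a binary crack mask."""
    mask_u8 = (mask > 0).astype(np.uint8)
    dist = cv2.distanceTransform(mask_u8, cv2.DIST_L2, 5)  # distance to background
    skel = skeletonize(mask_u8.astype(bool))
    return 2.0 * dist[skel]                                # width ~ 2 x radius

if __name__ == "__main__":
    demo = np.zeros((64, 64), np.uint8)
    demo[30:34, 5:60] = 1                                  # a 4-px-wide synthetic crack
    widths = crack_widths_px(demo)
    print(round(float(widths.mean()), 2))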

20 pages, 6467 KiB  
Article
A Lightweight TA-YOLOv8 Method for the Spot Weld Surface Anomaly Detection of Body in White
by Weijie Liu, Miao Jia, Shuo Zhang, Siyu Zhu, Jin Qi and Jie Hu
Appl. Sci. 2025, 15(6), 2931; https://doi.org/10.3390/app15062931 - 8 Mar 2025
Cited by 1 | Viewed by 1237
Abstract
The deep learning architecture YOLO (You Only Look Once) has demonstrated superior visual detection performance in various computer vision tasks and has been widely applied in the field of automatic surface defect detection. In this paper, we propose a lightweight YOLOv8-based method for the quality inspection of car body welding spots. We developed a TA-YOLOv8 network with an improved Task-Aligned (TA) detection head, designed to handle the small sample size, imbalanced positive and negative samples, and high-noise characteristics of Body-in-White welding spot data. By learning with fewer parameters, the model achieves more efficient and accurate classification. Additionally, our algorithm framework can perform anomaly segmentation and classification on our open-world raw datasets obtained from actual production environments. The experimental results show that the lightweight module improves processing speed by an average of 2.8%, with increases in the detection mAP@50-95 and recall of 1.35% and 0.1226, respectively.
(This article belongs to the Special Issue Motion Control for Robots and Automation)
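As a baseline reference point, fine-tuning a stock YOLOv8 detector with the Ultralytics API looks like the sketch below; the Task-Aligned head and lightweight modules described in the paper are custom modifications that are not reproduced here, and the dataset YAML path is a placeholder:

from ultralytics import YOLO  # pip install ultralytics

def train_baseline(data_yaml: str = "spot_welds.yaml"):
    """Fine-tune a pretrained YOLOv8 model on a (hypothetical) spot-weld dataset."""
    model = YOLO("yolov8n.pt")                        # small pretrained backbone
    model.train(data=data_yaml, epochs=100, imgsz=640)
    return model.val()                                # reports mAP@50-95, precision, recall

# Call train_baseline() once spot_welds.yaml points at annotated images.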

15 pages, 9352 KiB  
Article
Detection of Chips on the Threaded Part of Cosmetic Glass Bottles
by Daiki Tomita and Yue Bao
J. Imaging 2025, 11(3), 77; https://doi.org/10.3390/jimaging11030077 - 4 Mar 2025
Cited by 1 | Viewed by 786
Abstract
Recycled glass has attracted attention for its role in reducing plastic waste, further increasing the demand for glass containers. Cosmetic glass bottles require strict quality inspections because of frequent handling, safety concerns, and other factors. During manufacturing, glass bottles sometimes develop chips on the top surface, rim, or screw threads of the bottle mouth. Conventionally, these chips are inspected visually; however, this process is time-consuming and prone to inaccuracies. To address these issues, automatic inspection using image processing has been explored. Existing methods, such as dynamic luminance value correction and ring-shaped inspection gates, have limitations: the former relies on visible light, which is strongly affected by natural light, and the latter acquires images directly from above, resulting in low accuracy when detecting chips on the lower part of the screw threads. To overcome these challenges, this study proposes a method that combines infrared backlighting and image processing to determine the extent of the screw threads and detect chips accurately. Experiments were conducted in an environment replicating an actual factory production line. The results confirmed that the chip detection accuracy was 99.6% for both good and defective bottles. This approach reduces equipment complexity compared to conventional methods while maintaining high inspection accuracy, contributing to the productivity and quality control of glass bottle manufacturing.
(This article belongs to the Section Image and Video Processing)
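With infrared backlighting, the bottle mouth appears as a silhouette, so chips show up as deviations from a good-bottle reference profile. The OpenCV sketch below illustrates that general idea with invented threshold values and a synthetic reference; it is not the authors' algorithm:

import cv2
import numpy as np

def chip_candidates(gray: np.ndarray, reference: np.ndarray,
                    thresh: int = 128, min_area: int = 20):
    """Return contours where the backlit silhouette deviates from a good-bottle reference."""
    _, sil = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    _, ref = cv2.threshold(reference, thresh, 255, cv2.THRESH_BINARY)
    diff = cv2.absdiff(sil, ref)                                # deviations from the reference
    diff = cv2.morphologyEx(diff, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(diff, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) >= min_area]

if __name__ == "__main__":
    good = np.zeros((200, 200), np.uint8)
    good[50:150, 40:160] = 200                                  # backlit mouth region
    test = good.copy()
    test[60:75, 40:55] = 0                                      # synthetic chip
    print(len(chip_candidates(test, good)))                     # -> 1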

15 pages, 7826 KiB  
Article
Tongue Image Segmentation and Constitution Identification with Deep Learning
by Chien-Ho Lin, Sien-Hung Yang and Jiann-Der Lee
Electronics 2025, 14(4), 733; https://doi.org/10.3390/electronics14040733 - 13 Feb 2025
Viewed by 1356
Abstract
Traditional Chinese medicine (TCM) gathers patient information through inspection, olfaction, inquiry, and palpation, analyzing and interpreting the data to make a diagnosis and offer appropriate treatment. Traditionally, the interpretation of this information relies heavily on the physician’s personal knowledge and experience, so diagnostic outcomes can vary with the physician’s clinical experience and subjective judgment. This study applies AI methods to localized tongue assessment, developing automatic tongue body segmentation with the deep learning network “U-Net” through a series of optimization processes applied to tongue surface images. Furthermore, “ResNet34” is used to identify “cold”, “neutral”, and “hot” constitutions, creating a system that enhances the consistency and reliability of tongue-related diagnostic results. The final results demonstrate that the AI interpretation accuracy of this system reaches the diagnostic level of junior TCM practitioners (those who have passed the TCM practitioner assessment with ≤5 years of experience). The framework and findings of this study can serve as (1) a foundational step for the future integration of pulse information and electronic medical records, (2) a tool for personalized preventive medicine, and (3) a training resource for TCM students learning to diagnose tongue constitutions such as “cold”, “neutral”, and “hot”.
(This article belongs to the Special Issue Deep Learning for Computer Vision, 2nd Edition)
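A minimal PyTorch sketch of the two-stage pipeline described above: a segmentation network isolates the tongue body, then a ResNet34 classifier assigns one of the three constitutions. The U-Net is represented by a placeholder callable, and the untrained weights and preprocessing are assumptions rather than the authors' models:

import torch
import torch.nn as nn
from torchvision import models

CLASSES = ["cold", "neutral", "hot"]

classifier = models.resnet34(weights=None)            # trained weights would be loaded in practice
classifier.fc = nn.Linear(classifier.fc.in_features, len(CLASSES))
classifier.eval()

def classify_tongue(image: torch.Tensor, segmenter) -> str:
    """image: (3, H, W) float tensor; segmenter: callable returning an (H, W) mask."""
    mask = segmenter(image)                            # 1 on tongue pixels, 0 elsewhere
    masked = image * mask.unsqueeze(0)                 # suppress the background
    with torch.no_grad():
        logits = classifier(masked.unsqueeze(0))
    return CLASSES[int(logits.argmax(dim=1))]

if __name__ == "__main__":
    dummy_seg = lambda img: torch.ones(img.shape[1:])  # stand-in for the trained U-Net
    print(classify_tongue(torch.rand(3, 224, 224), dummy_seg))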

19 pages, 11928 KiB  
Article
Point Cloud Vibration Compensation Algorithm Based on an Improved Gaussian–Laplacian Filter
by Wanhe Du, Xianfeng Yang and Jinghui Yang
Electronics 2025, 14(3), 573; https://doi.org/10.3390/electronics14030573 - 31 Jan 2025
Viewed by 902
Abstract
In industrial environments, steel plate surface inspection plays a crucial role in quality control. However, vibrations during laser scanning can significantly impact measurement accuracy. While traditional vibration compensation methods rely on complex dynamic modeling, they often face challenges in practical implementation and generalization. This paper introduces a novel point cloud vibration compensation algorithm that combines an improved Gaussian–Laplacian filter with adaptive local feature analysis. The key innovations include (1) an FFT-based vibration factor extraction method that effectively identifies vibration trends, (2) an adaptive windowing strategy that automatically adjusts based on local geometric features, and (3) a weighted compensation mechanism that preserves surface details while reducing vibration noise. The algorithm demonstrated significant improvements in signal-to-noise ratio: 15.78% for simulated data, 6.81% for precision standard parts, and 12.24% for actual industrial measurements. Experimental validation confirms the algorithm’s effectiveness across different conditions, providing a practical, implementable solution for steel plate surface inspection.
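The FFT-based vibration factor extraction can be illustrated on a single scan line: estimate the low-frequency content of the profile in the frequency domain and subtract it. The fixed cutoff below is an illustrative assumption; the paper uses an adaptive criterion:

import numpy as np

def remove_vibration_trend(profile: np.ndarray, sample_rate: float,
                           cutoff_hz: float = 5.0) -> np.ndarray:
    """Subtract the below-cutoff spectral content from a 1D height profile."""
    spectrum = np.fft.rfft(profile)
    freqs = np.fft.rfftfreq(profile.size, d=1.0 / sample_rate)
    trend_spec = np.where(freqs <= cutoff_hz, spectrum, 0.0)  # keep only the slow part
    trend = np.fft.irfft(trend_spec, n=profile.size)
    return profile - trend                                    # vibration-compensated profile

if __name__ == "__main__":
    t = np.linspace(0, 1, 2000, endpoint=False)
    surface = 0.02 * np.sin(2 * np.pi * 80 * t)               # genuine surface texture
    vibration = 0.5 * np.sin(2 * np.pi * 2 * t)               # low-frequency table vibration
    cleaned = remove_vibration_trend(surface + vibration, sample_rate=2000)
    print(round(float(np.std(cleaned - surface)), 4))         # residual vibration (near zero)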

18 pages, 5011 KiB  
Article
Improving Industrial Quality Control: A Transfer Learning Approach to Surface Defect Detection
by Ângela Semitela, Miguel Pereira, António Completo, Nuno Lau and José P. Santos
Sensors 2025, 25(2), 527; https://doi.org/10.3390/s25020527 - 17 Jan 2025
Cited by 5 | Viewed by 1920
Abstract
To automate the quality control of painted surfaces of heating devices, an automatic defect detection and classification system was developed by combining deflectometry and bright-light-based illumination for image acquisition, deep learning models that classify non-defective (OK) and defective (NOK) surfaces by fusing dual-modal information at the decision level, and an online network for information dispatching and visualization. Three decision-making algorithms were tested for implementation: a new model built and trained from scratch, and transfer learning of pre-trained networks (ResNet-50 and Inception V3). The results revealed that the two illumination modes employed widened the range of defect types that could be identified with this system, while maintaining its lower computational complexity by performing multi-modal fusion at the decision level. Furthermore, the pre-trained networks achieved higher accuracies in defect classification than the self-built network, with ResNet-50 displaying the higher accuracy. The inspection system consistently obtained fast and accurate surface classifications because it required an OK classification from the models trained with images from both illumination modes. The obtained surface information was then successfully sent to a server and forwarded to a graphical user interface for visualization. The developed system showed considerable robustness, demonstrating its potential as an efficient tool for industrial quality control.
(This article belongs to the Section Industrial Sensors)
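Decision-level fusion as described above can be reduced to a simple rule: a surface is accepted only when the classifiers for both illumination modes report OK. A minimal sketch, with an assumed probability threshold:

def fuse_decisions(p_ok_deflectometry: float, p_ok_brightlight: float,
                   threshold: float = 0.5) -> str:
    """Return 'OK' only if both single-mode classifiers agree the surface is OK."""
    ok_a = p_ok_deflectometry >= threshold
    ok_b = p_ok_brightlight >= threshold
    return "OK" if (ok_a and ok_b) else "NOK"

if __name__ == "__main__":
    print(fuse_decisions(0.93, 0.88))   # both confident -> OK
    print(fuse_decisions(0.93, 0.22))   # bright-light model flags a defect -> NOK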

20 pages, 12095 KiB  
Article
A Deep Learning-Based Watershed Feature Fusion Approach for Tunnel Crack Segmentation in Complex Backgrounds
by Haozheng Wang, Qiang Wang, Weikang Zhang, Junli Zhai, Dongyang Yuan, Junhao Tong, Xiongyao Xie, Biao Zhou and Hao Tian
Materials 2025, 18(1), 142; https://doi.org/10.3390/ma18010142 - 1 Jan 2025
Cited by 1 | Viewed by 998
Abstract
As highway tunnel operations continue over time, structural defects, particularly cracks, have been observed to increase annually. Coupled with the rapid expansion of tunnel networks, traditional manual inspection methods have proven inadequate to meet current demands. In recent years, machine vision and deep learning technologies have gained significant attention in civil engineering for the detection and analysis of structural defects. However, rapid and accurate defect identification in highway tunnels presents challenges due to complex background conditions, numerous interfering factors, and the relatively low proportion of cracks within the structure. Additionally, the intensive labor requirements and limited efficiency in labeling training datasets for deep learning pose significant constraints on the deployment of intelligent crack segmentation algorithms. To address these limitations, this study proposes an automatic labeling and optimization algorithm for crack sample sets, utilizing crack features and the watershed algorithm to enable efficient automated segmentation with minimal human input. Furthermore, the deep learning-based crack segmentation network was optimized through comparative analysis of various network depths and residual structure configurations to achieve the best possible model performance. Enhanced accuracy was attained by incorporating axis extraction and watershed filling algorithms to refine segmentation outcomes. Under diverse lining surface conditions and multiple interference factors, the proposed approach achieved a crack segmentation accuracy of 98.78%, with an Intersection over Union (IoU) of 72.41%, providing a robust solution for crack segmentation in tunnels with complex backgrounds.
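The automatic labeling step builds on the watershed transform. The OpenCV sketch below shows a generic watershed-based crack mask grown from dark thresholded seeds; kernel sizes and thresholds are illustrative assumptions rather than the paper's tuned values:

import cv2
import numpy as np

def watershed_crack_mask(gray: np.ndarray) -> np.ndarray:
    """Return a binary crack mask grown by watershed from dark seed regions."""
    # Cracks are darker than the lining surface: invert so cracks become foreground.
    _, seeds = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    sure_bg = cv2.dilate(seeds, np.ones((5, 5), np.uint8), iterations=2)
    sure_fg = cv2.erode(seeds, np.ones((3, 3), np.uint8), iterations=1)
    unknown = cv2.subtract(sure_bg, sure_fg)
    _, markers = cv2.connectedComponents(sure_fg)
    markers = markers + 1                       # background label becomes 1
    markers[unknown == 255] = 0                 # let watershed decide the unknown band
    markers = cv2.watershed(cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR), markers)
    return (markers > 1).astype(np.uint8) * 255

if __name__ == "__main__":
    img = np.full((128, 128), 180, np.uint8)
    cv2.line(img, (10, 20), (110, 100), 40, 5)  # synthetic dark crack
    mask = watershed_crack_mask(img)
    print(int(mask.sum() // 255))               # rough crack pixel count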

25 pages, 12595 KiB  
Article
Fusion-Based Damage Segmentation for Multimodal Building Façade Images from an End-to-End Perspective
by Pujin Wang, Jiehui Wang, Qiong Liu, Lin Fang and Jie Xiao
Buildings 2025, 15(1), 63; https://doi.org/10.3390/buildings15010063 - 27 Dec 2024
Cited by 1 | Viewed by 1103
Abstract
Multimodal image data have found widespread applications in visual-based building façade damage detection in recent years, offering comprehensive inspection of façade surfaces with the assistance of drones and infrared thermography. However, the comprehensive integration of such complementary data has been hindered by low levels of automation due to the absence of properly developed methods, resulting in high cost and low efficiency. Thus, this paper proposes an automatic end-to-end building façade damage detection method that integrates multimodal image registration, infrared–visible image fusion (IVIF), and damage segmentation. An infrared and visible image dataset consisting of 1761 pairs encompassing 4 main types of façade damage was constructed for processing and training. A novel infrared–visible image registration method using main orientation assignment for feature point extraction is developed, achieving an RMSE of 14.35 when aligning the multimodal images. Then, a deep learning-based IVIF network is trained to preserve damage characteristics between the modalities. For damage detection, a mean average precision (mAP) of 85.4% is achieved in a comparison of four instance segmentation models, affirming the effective utilization of the IVIF results.
(This article belongs to the Special Issue Low-Carbon and Green Materials in Construction—2nd Edition)
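The registration stage can be illustrated in spirit with standard OpenCV tools: given matched control points between the infrared and visible images (the paper obtains them with a custom main-orientation feature extractor, not reproduced here), estimate a homography, warp the infrared frame, and report the reprojection RMSE used as the alignment metric:

import cv2
import numpy as np

def register_ir_to_visible(ir_img, pts_ir, pts_vis):
    """Warp `ir_img` onto the visible frame using matched point pairs."""
    H, _ = cv2.findHomography(pts_ir, pts_vis, cv2.RANSAC, 5.0)
    h, w = ir_img.shape[:2]
    warped = cv2.warpPerspective(ir_img, H, (w, h))
    proj = cv2.perspectiveTransform(pts_ir.reshape(-1, 1, 2), H).reshape(-1, 2)
    rmse = float(np.sqrt(np.mean(np.sum((proj - pts_vis) ** 2, axis=1))))
    return warped, rmse

if __name__ == "__main__":
    ir = np.zeros((240, 320), np.uint8)
    src = np.float32([[10, 10], [300, 12], [295, 220], [15, 230]])
    dst = src + np.float32([4, -3])              # simple synthetic offset
    _, err = register_ir_to_visible(ir, src, dst)
    print(round(err, 3))                         # ~0 for a consistent set of matches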

13 pages, 4124 KiB  
Article
Intelligent Detection Method for Surface Defects of Particleboard Based on Super-Resolution Reconstruction
by Haiyan Zhou, Haifei Xia, Chenlong Fan, Tianxiang Lan, Ying Liu, Yutu Yang, Yinxi Shen and Wei Yu
Forests 2024, 15(12), 2196; https://doi.org/10.3390/f15122196 - 13 Dec 2024
Cited by 4 | Viewed by 1148
Abstract
To improve the intelligence level of particleboard inspection lines, machine vision and artificial intelligence technologies are combined to replace manual inspection with automatic detection. To address missed and false detections of small defects caused by particleboard’s large surface width, complex texture, and varied surface defect shapes, this paper introduces image super-resolution technology and proposes a super-resolution reconstruction model for particleboard images. Based on the Transformer network, this model incorporates an improved SRResNet (Super-Resolution Residual Network) backbone in the deep feature extraction module to extract deep texture information. The shallow features extracted by 3 × 3 convolution are then fused with the features extracted by the Transformer, considering both local texture features and global feature information. This enhances image quality and makes defect details clearer. Through comparison with the traditional bicubic B-spline interpolation method, ESRGAN (Enhanced Super-Resolution Generative Adversarial Network), and SwinIR (Image Restoration Using Swin Transformer), the effectiveness of the particleboard super-resolution reconstruction model is verified using objective evaluation metrics including PSNR, SSIM, and LPIPS, demonstrating its ability to produce higher-quality images with more detail and better visual characteristics. Finally, using the YOLOv8 model to compare defect detection on super-resolution and low-resolution images, the mAP reaches 96.5%, which is 25.6% higher than the low-resolution recognition rate.
(This article belongs to the Section Wood Science and Forest Products)
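The objective evaluation mentioned above relies on standard full-reference metrics. A minimal scikit-image sketch computing PSNR and SSIM between a reconstructed image and its ground truth (LPIPS needs a learned model and is omitted; the arrays below are synthetic placeholders):

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def sr_quality(reference: np.ndarray, reconstructed: np.ndarray) -> dict:
    """Both inputs: uint8 grayscale arrays of identical shape."""
    return {
        "psnr_db": peak_signal_noise_ratio(reference, reconstructed, data_range=255),
        "ssim": structural_similarity(reference, reconstructed, data_range=255),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, (128, 128), dtype=np.uint8)
    noisy = np.clip(ref.astype(int) + rng.integers(-5, 6, ref.shape), 0, 255).astype(np.uint8)
    print(sr_quality(ref, noisy))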