Search Results (862)

Search Parameters:
Keywords = automatic inspection

29 pages, 8456 KB  
Article
A Comprehensive Performance Evaluation of YOLO Series Algorithms in Automatic Inspection of Printed Circuit Boards
by Zan Yang, Dan Li, Longhui Hou and Wei Nai
Machines 2026, 14(1), 94; https://doi.org/10.3390/machines14010094 - 13 Jan 2026
Abstract
Considering the rapid iteration of you-only-look-once (YOLO)-series algorithms, this paper aims to provide a data-driven performance spectrum and selection guide for the latest YOLO series algorithms (YOLOv8 to YOLOv13) in printed circuit board (PCB) automatic optical inspection (AOI) through systematic benchmarking. A comprehensive evaluation of the six state-of-the-art YOLO series algorithms is conducted on a standardized dataset containing six typical PCB defects: missing hole, mouse bite, open circuit, short circuit, spur, and spurious copper. An innovative dual-cycle comparative experiment (100 rounds and 500 rounds) is designed, and a systematic assessment is performed across multiple dimensions, including accuracy, efficiency, and inference speed. The experimental results reveal significant variations in algorithm performance with training cycles: under short-term training (100 rounds), YOLOv13 achieves leading detection performance (mAP50 = 0.924, mAP50-95 = 0.484) with the fewest parameters (2.45 million); after full training (500 rounds), YOLOv10 achieves the highest overall accuracy (mAP50 = 0.946, mAP50-95 = 0.526); additionally, YOLOv11 shows the optimal speed-accuracy balance after long-term training, while YOLOv12 excels in short-term training; moreover, “open circuit” and “spur” are evaluated as the most challenging defect categories to detect. The findings indicate the absence of a universally applicable “all-in-one” algorithm and propose a clear algorithm selection roadmap: YOLOv10 is recommended for offline analysis scenarios prioritizing extreme accuracy; YOLOv13 is the top choice for applications requiring rapid iteration with tight training time constraints; and YOLOv11 is the best option for high-throughput online PCB production-line inspection.
(This article belongs to the Section Machines Testing and Maintenance)
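The mAP50 figures quoted in this abstract come from averaging precision over detections matched to ground truth at IoU ≥ 0.5. A minimal, simplified single-image, single-class sketch of that computation (our illustration, not the paper's code; real evaluators add interpolation and multi-class averaging):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def ap50(predictions, ground_truths):
    """Average precision at IoU >= 0.5 for one image and one class.
    predictions: list of (confidence, box); ground_truths: list of boxes."""
    matched, hits = set(), []
    for conf, box in sorted(predictions, key=lambda p: -p[0]):
        # greedily match each prediction to its best unmatched ground truth
        best_iou, best_j = 0.0, None
        for j, gt in enumerate(ground_truths):
            overlap = iou(box, gt)
            if j not in matched and overlap > best_iou:
                best_iou, best_j = overlap, j
        if best_iou >= 0.5:
            matched.add(best_j)
            hits.append(1)
        else:
            hits.append(0)
    # accumulate precision at each true positive (no interpolation)
    ap, tp = 0.0, 0
    for rank, hit in enumerate(hits, start=1):
        if hit:
            tp += 1
            ap += tp / rank
    return ap / len(ground_truths) if ground_truths else 0.0
```

mAP50 then averages this AP over defect classes; mAP50-95 repeats it at IoU thresholds from 0.5 to 0.95 and averages again.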
55 pages, 1599 KB  
Review
The Survey of Evolutionary Deep Learning-Based UAV Intelligent Power Inspection
by Shanshan Fan and Bin Cao
Drones 2026, 10(1), 55; https://doi.org/10.3390/drones10010055 - 12 Jan 2026
Abstract
With the rapid development of the power Internet of Things (IoT), the traditional manual inspection mode can no longer meet the growing demand for power equipment inspection. Unmanned aerial vehicle (UAV) intelligent inspection technology, with its efficient and flexible features, has become the mainstream solution. The rapid development of computer vision and deep learning (DL) has significantly improved the accuracy and efficiency of UAV intelligent inspection systems for power equipment. However, mainstream deep learning models have complex structures, and manual design is time-consuming and labor-intensive. In addition, the images collected by UAVs during power inspection have problems such as complex backgrounds, uneven lighting, and significant differences in object sizes, so designing models suited to these scenarios requires expert DL domain knowledge and many trial-and-error experiments. In response to these difficult problems, evolutionary computation (EC) technology, which simulates the natural evolutionary process, has demonstrated unique advantages: it can independently design lightweight and high-precision deep learning models by automatically optimizing the network structure and hyperparameters. Therefore, this review summarizes the development of evolutionary deep learning (EDL) technology and provides a reference for applying EDL in object detection models used in UAV intelligent power inspection systems. First, the application status of DL-based object detection models in power inspection is reviewed. Then, how EDL technology improves the performance of the models in challenging scenarios such as complex terrain and extreme weather by optimizing the network architecture is analyzed. Finally, the challenges and future research directions of EDL technology in the field of UAV power inspection are discussed, including key issues such as improving the environmental adaptability of the model and reducing computing energy consumption, providing theoretical references for promoting the development of UAV power inspection technology to a higher level.

16 pages, 4550 KB  
Article
A Framework for a Digital Twin of Inspection Robots
by Cristian Pelagalli, Pierluigi Rea, Roberto Di Bona, Erika Ottaviano, Marek Kciuk, Zygmunt Kowalik and Joanna Bijak
Appl. Sci. 2026, 16(2), 650; https://doi.org/10.3390/app16020650 - 8 Jan 2026
Viewed by 95
Abstract
The study addresses the design and implementation of a modular, scalable platform for specialized inspection tasks, highlighting its suitability for future research activities. The work presents a fully validated methodology that encompasses both the physical robot and its digital twin. Specifically, the objective of this work is to design and develop a sensor-equipped mobile robot for inspection and surveillance tasks. The study places particular emphasis on the robot’s actuation system, the design and implementation of its control architecture, and the creation of a PC-based control interface. Additionally, suitable sensors can be integrated to enable future capabilities in automatic obstacle detection and autonomous navigation. The paper presents a digital shadow/DT-enabling framework to support inspection and surveillance operations, grounded in the digital representation of the robot.

22 pages, 1366 KB  
Systematic Review
Inspection and Evaluation of Urban Pavement Deterioration Using Drones: Review of Methods, Challenges, and Future Trends
by Pablo Julián López-González, David Reyes-González, Oscar Moreno-Vázquez, Rodrigo Vivar-Ocampo, Sergio Aurelio Zamora-Castro, Lorena del Carmen Santos Cortés, Brenda Suemy Trujillo-García and Joaquín Sangabriel-Lomelí
Future Transp. 2026, 6(1), 10; https://doi.org/10.3390/futuretransp6010010 - 4 Jan 2026
Viewed by 206
Abstract
The rapid growth of urban areas has increased the need for more efficient methods of pavement inspection and maintenance. However, conventional techniques remain slow, labor-intensive, and limited in spatial coverage, and their performance is strongly affected by traffic, weather conditions, and operational constraints. In response to these challenges, it is essential to synthesize the technological advances that improve inspection efficiency, coverage, and data quality compared to traditional approaches. Herein, we present a systematic review of the state of the art on the use of unmanned aerial vehicles (UAVs) for monitoring and assessing pavement deterioration, highlighting as a key contribution the comparative integration of sensors (photogrammetry, LiDAR, and thermography) with recent automatic damage-detection algorithms. A structured review methodology was applied, including the search, selection, and critical analysis of specialized studies on UAV-based pavement evaluation. The results indicate that UAV photogrammetry can achieve sub-centimeter accuracy (<1 cm) in 3D reconstructions, LiDAR systems can improve deformation detection by up to 35%, and AI-based algorithms can increase crack-identification accuracy by 10% to 25% compared with manual methods. Finally, the synthesis shows that multi-sensor integration and digital twins offer strong potential to enhance predictive maintenance and support the transition towards smarter and more sustainable urban infrastructure management strategies.
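The sub-centimeter accuracy cited for UAV photogrammetry is bounded by the ground sample distance (GSD) of the imagery. A sketch of the standard GSD formula, with illustrative camera parameters of our own choosing (not values from the review):

```python
def ground_sample_distance(height_m, sensor_width_mm, focal_length_mm, image_width_px):
    """Ground sample distance in metres per pixel:
    GSD = flight height * sensor width / (focal length * image width)."""
    return (height_m * sensor_width_mm) / (focal_length_mm * image_width_px)

# illustrative small-UAV camera (13.2 mm sensor, 8.8 mm lens, 5472 px wide) at 30 m altitude
gsd = ground_sample_distance(30.0, 13.2, 8.8, 5472)  # about 0.0082 m/px, i.e. sub-centimetre
```

Lower flight altitude or a longer lens shrinks the GSD further, at the cost of coverage per pass.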

20 pages, 1788 KB  
Systematic Review
Deep Learning Algorithms for Defect Detection on Electronic Assemblies: A Systematic Literature Review
by Bernardo Montoya Magaña, Óscar Hernández-Uribe, Leonor Adriana Cárdenas-Robledo and Jose Antonio Cantoral-Ceballos
Mach. Learn. Knowl. Extr. 2026, 8(1), 5; https://doi.org/10.3390/make8010005 - 27 Dec 2025
Viewed by 500
Abstract
The electronic manufacturing industry relies on automatic and rapid defect inspection of printed circuit boards (PCBs). Two main challenges hinder accurate, real-time defect detection: the growing density of electronic component placement and the shrinking size of components, both of which complicate the identification of tiny defects. This systematic review encompasses 56 relevant articles from the Scopus database between 2015 and the first quarter of 2025. This study examines deep learning (DL) architectures and machine learning (ML) algorithms for defect detection in PCB manufacturing. Findings indicate that 78.6% of the articles used models capable of detecting up to six defect types, and 62.5% relied on custom-made datasets. Convolutional neural networks (CNNs) are commonly utilized architectures due to their flexibility and adaptability to a variety of tasks. Still, real-time defect detection remains a challenge because of the complexity and high throughput of production settings. Likewise, accessible datasets are essential for the electronics industry to achieve broad adoption. Hence, architectures capable of learning and optimizing directly on the production line from unlabeled PCB data, without prior training, are necessary.

19 pages, 5799 KB  
Article
An Improved Single-Stage Object Detection Model and Its Application to Oil Seal Defect Detection
by Yangzhuo Chen, Yuhang Wu, Xiaoliang Wu, Weiwei He, Guangtian He and Xiaowen Cai
Electronics 2026, 15(1), 128; https://doi.org/10.3390/electronics15010128 - 26 Dec 2025
Viewed by 243
Abstract
Oil seals, as core industrial components, often exhibit defects with sparse features and low contrast, posing significant challenges for traditional vision-based inspection methods. Although deep learning facilitates automatic feature extraction for defect detection, many instance segmentation models are computationally expensive, hindering their deployment in real-time edge applications. In this paper, we present an efficient oil seal defect detection model based on an enhanced YOLOv11n architecture (YOLOv11n_CDK). The proposed approach introduces several dynamic convolution variants and integrates the Kolmogorov–Arnold Network (KAN) into the backbone. A newly designed parallel module, the nested asynchronous pooling convolutional module (NAPConv), is also incorporated to form a lightweight yet powerful feature extraction network. Experimental results demonstrate that, compared to the baseline YOLOv11n, our model reduces computational cost by 4.76% and increases mAP@0.5 by 2.14%. When deployed on a Jetson Nano embedded device, the model achieves an average processing time of 6.3 ms per image, corresponding to a frame rate of 105–110 FPS. These outcomes highlight the model’s strong potential for high-performance, real-time industrial deployment, effectively balancing detection accuracy with low computational complexity.
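For context on the timing figures: a per-image inference latency of 6.3 ms bounds throughput at roughly 159 FPS, so the reported 105–110 FPS presumably reflects end-to-end pipeline overhead (capture, pre- and post-processing) rather than pure inference. The reciprocal conversion itself:

```python
def frames_per_second(latency_ms):
    """Upper-bound throughput implied by a per-image latency in milliseconds."""
    return 1000.0 / latency_ms
```

For example, frames_per_second(6.3) is about 158.7, an upper bound the full pipeline cannot exceed.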

23 pages, 5850 KB  
Article
Durability Assessment of Marine Steel-Reinforced Concrete Using Machine Vision: A Case Study on Corrosion Damage and Geometric Deformation in Shield Tunnels
by Yanzhi Qi, Xipeng Wang, Zhi Ding and Yaozhi Luo
Buildings 2026, 16(1), 107; https://doi.org/10.3390/buildings16010107 - 25 Dec 2025
Viewed by 160
Abstract
The rapid urbanization of coastal regions has intensified the demand for durable underground infrastructure like shield tunnels, where reinforced concrete (RC) structures are critical yet susceptible to long-term degradation in marine environments. This study develops an integrated machine vision-based framework for assessing the long-term durability of RC in marine shield tunnels by synergistically combining point cloud analysis and deep learning-based damage recognition. The methodology involves preprocessing tunnel point clouds to extract the centerline and cross-sections, enabling the quantification of geometric deformations, including segment misalignment and elliptical distortion. Concurrently, an advanced YOLOv8 model is employed to automatically identify and classify surface corrosion damages—specifically water leakage, cracks, and spalling—from images, achieving high detection accuracies (e.g., 95.6% for leakage). By fusing the geometric indicators with damage metrics, a quantitative risk scoring system is established to evaluate structural durability. Experimental results on a real-world tunnel segment demonstrate the framework’s effectiveness in correlating surface defects with underlying geometric irregularities. This integrated approach offers a data-driven solution for the continuous health monitoring and residual life prediction of RC tunnel linings in marine conditions, bridging the gap between visual inspection and structural performance assessment.
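Elliptical distortion of a tunnel cross-section is commonly summarized as ovality, the ratio of the fitted ring's major to minor axis. A generic PCA-style sketch of that quantity under our own assumptions (not the authors' pipeline):

```python
import math

def ovality(points):
    """Ratio of major to minor principal extent of a 2-D point ring
    (1.0 for a perfect circle), from the covariance eigenvalues."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # closed-form eigenvalues of the 2x2 covariance matrix
    mean = (sxx + syy) / 2.0
    dev = math.sqrt(((sxx - syy) / 2.0) ** 2 + sxy ** 2)
    return math.sqrt((mean + dev) / (mean - dev))

angles = [2 * math.pi * k / 400 for k in range(400)]
squashed = [(1.1 * math.cos(a), 0.9 * math.sin(a)) for a in angles]  # 10% squashed ring
```

On the synthetic ring above, ovality(squashed) recovers the imposed 1.1/0.9 axis ratio.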

17 pages, 1344 KB  
Article
Lightweight Deep Learning Model for Classification of Normal and Abnormal Vasculature in Organoid Images
by Eunsu Yun, Jongweon Kim and Daesik Jeong
Sensors 2026, 26(1), 112; https://doi.org/10.3390/s26010112 - 24 Dec 2025
Viewed by 320
Abstract
Human organoids are 3D cell culture models that precisely replicate the microenvironment of real organs. In organoid-based experiments, assessing whether the internal vasculature has formed normally is essential for ensuring the reliability of experimental results. However, conventional vasculature assessment relies on manual inspection by researchers, which is time-consuming and prone to variability caused by subjective judgment. This study proposes a lightweight deep learning model for automatic classification of normal and abnormal vasculature in vascular organoid images. The proposed model is based on EfficientNet by replacing the activation function SiLU with ReLU and removing the Squeeze-and-Excitation (SE) blocks to reduce computational complexity. The dataset consisted of vascular organoid images obtained from co-culture experiments. Data augmentation and noise addition were performed to alleviate class imbalance. Experimental results show that the proposed Modified 3 models (B0, B1, B2) achieved accuracy of 0.90, 0.99, and 1.00, respectively, with corresponding inference speed of 51.1, 36.0, and 32.4 FPS on the CPU, demonstrating real-time inference capability and an average speed improvement of 70% compared to the original models. This study presents an efficient automated analysis framework that enables quantitative and reproducible vasculature assessment by introducing a lightweight model that maintains high accuracy and supports real-time processing.
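The lightweighting step described above, swapping SiLU for ReLU, trades a smooth activation that needs an exponential per element for a single comparison, a common source of CPU-inference speedups. The two functions, for reference:

```python
import math

def silu(x):
    """SiLU (a.k.a. swish): x * sigmoid(x); smooth, but costs an exp() per element."""
    return x / (1.0 + math.exp(-x))

def relu(x):
    """ReLU: a single comparison, cheaper for CPU and embedded inference."""
    return x if x > 0 else 0.0
```

The two agree at zero and for large positive inputs; SiLU's small negative tail is what the swap gives up.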

8 pages, 2995 KB  
Proceeding Paper
Early Detection, Evaluation and Continuous Monitoring of Hydrogen-Induced Cracking in Oil and Gas Vessels
by Dimitrios Kourousis, Dimitrios Papasalouros and Athanasios Anastasopoulos
Eng. Proc. 2025, 119(1), 22; https://doi.org/10.3390/engproc2025119022 - 15 Dec 2025
Viewed by 188
Abstract
Hydrogen-Induced Cracking (HIC) and Stress-Oriented Hydrogen-Induced Cracking (SOHIC) pose a significant threat to the integrity of steel components in wet H2S service within the refining industry. This paper presents an integrated approach to HIC and SOHIC management, combining state-of-the-art inspection and monitoring techniques leading to Fitness-for-Service (FFS) assessments. Specifically, it details the synergistic application of Phased Array Ultrasonic Testing (PAUT) for precise crack detection and characterization and Acoustic Emission (AE) monitoring for real-time damage evolution insights. A key innovation is the development of in-house software capable of automatically clustering HIC and SOHIC PAUT data. An illustrative case study, referring to six months of continuous operational data and advanced re-analysis, demonstrates the practical application and benefits of this approach for predictive maintenance. To the authors’ knowledge, this constitutes the first continuous in-service correlation between real-time AE activity and PAUT-sized HIC damage, providing geometric input for API 579/ASME FFS-1 assessments, minimizing downtime, and mitigating failure risks.
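The clustering of PAUT indications mentioned above can be illustrated with a generic proximity rule: indications closer together than a chosen gap are grouped into one damage zone. A deliberately simple 1-D sketch (our assumption of the general approach, not the authors' software):

```python
def cluster_indications(positions, max_gap):
    """Greedy proximity clustering of 1-D indication positions:
    sorted points closer than max_gap join the same cluster."""
    clusters = []
    for p in sorted(positions):
        if clusters and p - clusters[-1][-1] <= max_gap:
            clusters[-1].append(p)   # extend the current damage zone
        else:
            clusters.append([p])     # start a new zone
    return clusters
```

Real PAUT data is clustered in two or three dimensions (position, depth, amplitude), but the grouping principle is the same.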

15 pages, 1730 KB  
Article
Research on Printed Circuit Board (PCB) Defect Detection Algorithm Based on Convolutional Neural Networks (CNN)
by Zhiduan Ni and Yeonhee Kim
Appl. Sci. 2025, 15(24), 13115; https://doi.org/10.3390/app152413115 - 12 Dec 2025
Viewed by 935
Abstract
Printed Circuit Board (PCB) defect detection is critical for quality control in electronics manufacturing. Traditional manual inspection and classical Automated Optical Inspection (AOI) methods face challenges in speed, consistency, and flexibility. This paper proposes a CNN-based approach for automatic PCB defect detection using the YOLOv5 model. The method leverages a Convolutional Neural Network to identify various PCB defect types (e.g., open circuits, short circuits, and missing holes) from board images. In this study, a model was trained on a PCB image dataset with detailed annotations. Data augmentation techniques, such as sharpening and noise filtering, were applied to improve robustness. The experimental results showed that the proposed approach could locate and classify multiple defect types on PCBs, with overall detection precision and recall above 90% and 91%, respectively, enabling reliable automated inspection. A brief comparison with the latest YOLOv8 model is also presented, showing that the proposed CNN-based detector offers competitive performance. This study shows that deep learning-based defect detection can improve the PCB inspection efficiency and accuracy significantly, paving the way for intelligent manufacturing and quality assurance in PCB production. From a sensing perspective, we frame the system around an industrial RGB camera and controlled illumination, emphasizing how imaging-sensor choices and settings shape defect visibility and model robustness, and sketching future sensor-fusion directions.
(This article belongs to the Special Issue Applications in Computer Vision and Image Processing)
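The precision and recall figures above summarize counts of matched and missed detections. How the three standard detection metrics relate, with illustrative counts of our own (not numbers from the paper):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F1 from true-positive, false-positive,
    and false-negative detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# e.g. 90 correct detections, 10 false alarms, 9 missed defects
p, r, f = precision_recall_f1(90, 10, 9)
```

With these illustrative counts, precision is 0.90 and recall just over 0.90, roughly the regime the abstract reports.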

41 pages, 8287 KB  
Article
Smart Image-Based Deep Learning System for Automated Quality Grading of Phalaenopsis Seedlings in Outsourced Production
by Hong-Dar Lin, Zheng-Yuan Zhang and Chou-Hsien Lin
Sensors 2025, 25(24), 7502; https://doi.org/10.3390/s25247502 - 10 Dec 2025
Viewed by 449
Abstract
Phalaenopsis orchids are one of Taiwan’s key floral export products, and maintaining consistent quality is crucial for international competitiveness. To improve production efficiency, many orchid farms outsource the early flask seedling stage to contract growers, who raise the plants to the 2.5-inch potted seedling stage before returning them for further greenhouse cultivation. Traditionally, the quality of these outsourced seedlings is evaluated manually by inspectors who visually detect defects and assign quality grades based on experience, a process that is time-consuming and subjective. This study introduces a smart image-based deep learning system for automatic quality grading of Phalaenopsis potted seedlings, combining computer vision, deep learning, and machine learning techniques to replace manual inspection. The system uses YOLOv8 and YOLOv10 models for defect and root detection, along with SVM and Random Forest classifiers for defect counting and grading. It employs a dual-view imaging approach, utilizing top-view RGB-D images to capture spatial leaf structures and multi-angle side-view RGB images to assess leaf and root conditions. Two grading strategies are developed: a three-stage hierarchical method that offers interpretable diagnostic results and a direct grading method for fast, end-to-end quality prediction. Performance comparisons and ablation studies show that using RGB-D top-view images and optimal viewing-angle combinations significantly improve grading accuracy. The system achieves F1-scores of 84.44% (three-stage) and 90.44% (direct), demonstrating high reliability and strong potential for automated quality assessment and export inspection in the orchid industry.
(This article belongs to the Special Issue Sensing and Imaging for Defect Detection: 2nd Edition)

19 pages, 2362 KB  
Article
TCQI-YOLOv5: A Terminal Crimping Quality Defect Detection Network
by Yingjuan Yu, Dawei Ren and Lingwei Meng
Sensors 2025, 25(24), 7498; https://doi.org/10.3390/s25247498 - 10 Dec 2025
Viewed by 413
Abstract
With the rapid development of the automotive industry, terminals—as critical components of wiring harnesses—play a pivotal role in ensuring the reliability and stability of signal transmission. At present, terminal crimping quality inspection (TCQI) primarily relies on manual visual examination, which suffers from low efficiency, high labor intensity, and susceptibility to missed detections. To address these challenges, this study proposes an improved YOLOv5-based model, TCQI-YOLOv5, designed to achieve efficient and accurate automatic detection of terminal crimping quality. In the feature extraction module, the model integrates the C2f structure, FasterNet module, and Efficient Multi-scale Attention (EMA) attention mechanism, enhancing its capability to identify small targets and subtle defects. Moreover, the SIOU loss function is employed to replace the traditional IOU, thereby improving the localization accuracy of predicted bounding boxes. Experimental results demonstrate that TCQI-YOLOv5 significantly improves recognition accuracy for difficult-to-detect defects such as shallow insulation crimps, achieving a mean average precision (mAP) of 98.3%, outperforming comparative models. Furthermore, the detection speed meets the requirements of real-time industrial applications, indicating strong potential for practical deployment.
(This article belongs to the Section Sensor Networks)
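The SIOU loss mentioned above extends the plain IoU regression loss with distance, shape, and angle penalties. Only the shared IoU core is sketched here (the extra SIoU terms are deliberately omitted):

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
             + (box_b[2] - box_b[0]) * (box_b[3] - box_b[1]) - inter)
    return inter / union

def iou_loss(pred, target):
    """Plain IoU regression loss; SIoU adds center-distance, aspect-ratio,
    and alignment-angle penalties on top of this term."""
    return 1.0 - iou(pred, target)
```

The motivation for SIoU and its relatives is that the plain loss gives no gradient signal about *how* a non-overlapping or misaligned box should move, which the extra penalty terms supply.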

26 pages, 2000 KB  
Article
Think-to-Detect: Rationale-Driven Vision–Language Anomaly Detection
by Mahmoud Abdalla, Mahmoud SalahEldin Kasem, Mohamed Mahmoud, Mostafa Farouk Senussi, Abdelrahman Abdallah and Hyun-Soo Kang
Mathematics 2025, 13(24), 3920; https://doi.org/10.3390/math13243920 - 8 Dec 2025
Viewed by 780
Abstract
Large vision–language models (VLMs) can describe images fluently, yet their anomaly decisions often rely on opaque heuristics and manual thresholds. We present ThinkAnomaly, a rationale-first vision–language framework for industrial anomaly detection. The model generates a concise structured rationale and then issues a calibrated yes/no decision, eliminating per-class thresholds. To supervise reasoning, we construct chain-of-thought annotations for MVTec-AD and VisA via synthesis, automatic filtering, and human validation. We fine-tune Llama-3.2-Vision with a two-stage objective and a rationale–label consistency loss, yielding state-of-the-art classification accuracy while maintaining a competitive detection AUC: MVTec-AD—93.9% accuracy and 93.8 Image-AUC; VisA—90.3% accuracy and 85.0 Image-AUC. This improves classification accuracy over AnomalyGPT by +7.8 (MVTec-AD) and +12.9 (VisA) percentage points. The explicit reasoning and calibrated decisions make ThinkAnomaly transparent and deployment-ready for industrial inspection.

18 pages, 4528 KB  
Article
Robust Rotation Estimation Using Adaptive ROI Radon Transformation for Sonar Images
by Hyeonmin Sim, Horyeol Choi and Hangil Joe
J. Mar. Sci. Eng. 2025, 13(12), 2321; https://doi.org/10.3390/jmse13122321 - 6 Dec 2025
Viewed by 303
Abstract
Recent advances in forward-looking sonar (FLS) have enabled the acquisition of high-resolution acoustic images. However, the accuracy of image-based rotation estimation remains limited owing to speckle noise, perceptual ambiguity, and shadows. In recent years, object-based path reconstruction has become increasingly important for underwater inspection tasks, and in such scenarios, reliably estimating rotation from static seabed objects is essential for ensuring the robustness of autonomous underwater vehicle (AUV) missions. Accordingly, we present a rotation estimation method that adaptively extracts a region of interest (ROI) and applies the Radon transform. The proposed approach automatically selects sonar image regions containing objects and emphasizes high projection values in the resulting sinogram. By computing the shift between the high projection values of two sinograms, the method achieves robust rotation estimation even under low contrast and severe speckle noise. Experimental results demonstrate that our method consistently achieves lower estimation errors than existing approaches, particularly in scenarios involving static seabed objects. These findings highlight its practical value for object-based path reconstruction, high-precision mapping, and other underwater navigation tasks.
(This article belongs to the Special Issue Advances in Underwater Positioning and Navigation Technology)
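The core idea, that an in-plane rotation of the scene appears as a circular shift along the angle axis of the Radon sinogram, can be sketched with a brute-force circular cross-correlation (a toy illustration, not the paper's implementation):

```python
def rotation_shift(profile_a, profile_b):
    """Estimate the circular shift (in angle bins) between two angular
    projection profiles by maximizing circular cross-correlation."""
    n = len(profile_a)
    best_shift, best_score = 0, float("-inf")
    for shift in range(n):
        score = sum(profile_a[i] * profile_b[(i + shift) % n] for i in range(n))
        if score > best_score:
            best_score, best_shift = score, shift
    return best_shift

profile = [0, 1, 3, 1, 0, 0, 0, 0]      # toy angular profile from a sinogram
rotated = profile[-2:] + profile[:-2]   # same scene rotated by 2 angle bins
```

rotation_shift(profile, rotated) recovers the 2-bin shift; on real FLS imagery, the paper's adaptive ROI selection and emphasis of high projection values are what make the two profiles comparable despite speckle and shadows.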

23 pages, 5900 KB  
Article
A Transformer-Based Low-Light Enhancement Algorithm for Rock Bolt Detection in Low-Light Underground Mine Environments
by Wenzhen Yan, Fuming Qu, Yingzhen Wang, Jiajun Xu, Jiapan Li and Lingyu Zhao
Processes 2025, 13(12), 3914; https://doi.org/10.3390/pr13123914 - 3 Dec 2025
Viewed by 391
Abstract
Underground roadway support is a critical component for ensuring safety in mining operations. In recent years, with the rapid advancement of intelligent technologies, computer vision-based automatic rock bolt detection methods have emerged as a promising alternative to traditional manual inspection. However, the underground mining environment inherently suffers from severely insufficient lighting. Images captured on-site often exhibit problems such as low overall brightness, blurred local details, and severe color distortion. To address these problems, this study proposed a novel low-light image enhancement algorithm, PromptHDR. Based on a Transformer architecture, the algorithm effectively suppresses color distortion caused by non-uniform illumination through a Lighting Extraction Module, while simultaneously introducing a Prompt block incorporating a Mamba mechanism to enhance the model’s contextual understanding of the roadway scene and its ability to preserve rock bolt details. Quantitative results demonstrate that the PromptHDR algorithm achieves Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) index scores of 24.19 dB and 0.839, respectively. Furthermore, the enhanced images exhibit a more natural visual appearance, adequate brightness recovery, and well-preserved detailed information, establishing a reliable visual foundation for the accurate identification of rock bolts.
(This article belongs to the Special Issue Sustainable and Advanced Technologies for Mining Engineering)
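For reference, the PSNR figure cited above (24.19 dB) is derived from the mean squared error between the enhanced image and a ground-truth reference. A minimal sketch of the metric:

```python
import math

def psnr(reference, enhanced, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two equal-sized images,
    given here as flat sequences of pixel intensities."""
    mse = sum((r - e) ** 2 for r, e in zip(reference, enhanced)) / len(reference)
    if mse == 0.0:
        return float("inf")   # identical images
    return 10.0 * math.log10(peak ** 2 / mse)
```

Higher is better: halving the MSE raises PSNR by about 3 dB, so the roughly 24 dB reported corresponds to a visible but still imperfect reconstruction.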
