Search Results (42)

Search Parameters:
Keywords = stripe image enhancement

18 pages, 2596 KB  
Article
Integrating RGB Image Processing and Random Forest Algorithm to Estimate Stripe Rust Disease Severity in Wheat
by Andrzej Wójtowicz, Jan Piekarczyk, Marek Wójtowicz, Sławomir Królewicz, Ilona Świerczyńska, Katarzyna Pieczul, Jarosław Jasiewicz and Jakub Ceglarek
Remote Sens. 2025, 17(17), 2981; https://doi.org/10.3390/rs17172981 - 27 Aug 2025
Viewed by 337
Abstract
Accurate and timely assessment of crop disease severity is crucial for effective management strategies and ensuring sustainable agricultural production. Traditional visual disease scoring methods are subjective and labor-intensive, highlighting the need for automated, objective alternatives. This study evaluates the effectiveness of a model for field-based identification and quantification of stripe rust severity in wheat using red, green, blue (RGB) imaging. Based on crop reflectance hyperspectra (CRHS) acquired using a FieldSpec ASD spectroradiometer, two complementary approaches were developed. In the first approach, we estimated single-leaf disease severity (LDS) under laboratory conditions, while in the second approach, we assessed crop disease severity (CDS) from field-based RGB images. The high accuracy of both methods enabled the development of a predictive model for estimating LDS from CDS, offering a scalable solution for precision disease monitoring in wheat cultivation. The experiment was conducted on four winter wheat plots subjected to varying fungicide treatments to induce different levels of stripe rust severity for model calibration, with treatment regimes ranging from no application to three applications during the growing season. RGB images were acquired in both laboratory conditions (individual leaves) and field conditions (nadir and oblique perspectives), complemented by hyperspectral measurements in the 350–2500 nm range. To achieve automated and objective assessment of disease severity, we developed custom image-processing scripts and applied Random Forest classification and regression models. The models demonstrated high predictive performance, with the combined use of nadir and oblique RGB imagery achieving the highest classification accuracy (97.87%), sensitivity (100%), and specificity (95.83%). Oblique images were more sensitive to early-stage infection, while nadir images offered greater specificity. Spectral feature selection revealed that wavelengths in the visible (e.g., 508–563 nm and 621–703 nm) and red-edge/SWIR regions (around 1556–1767 nm) were particularly informative for disease detection. In classification models, shorter wavelengths from the visible range proved to be more useful, while in regression models, longer wavelengths were more effective. The integration of RGB-based image analysis with the Random Forest algorithm provides a robust, scalable, and cost-effective solution for monitoring stripe rust severity under field conditions. This approach holds significant potential for enhancing precision agriculture strategies by enabling early intervention and optimized fungicide application. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
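A minimal sketch of the kind of pipeline the abstract describes, estimating severity from simple RGB colour statistics with a Random Forest; it assumes scikit-learn, and the feature set, synthetic data, and function names are illustrative rather than the authors' custom image-processing scripts.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def rgb_features(image):
    """Per-channel mean/std plus normalised chromatic ratios from an RGB image
    (H, W, 3) in [0, 255]; a stand-in for the paper's custom features."""
    img = image.astype(np.float64)
    means = img.mean(axis=(0, 1))
    stds = img.std(axis=(0, 1))
    total = img.sum(axis=2) + 1e-6
    ratios = (img / total[..., None]).mean(axis=(0, 1))  # normalised r, g, b
    return np.concatenate([means, stds, ratios])

# Hypothetical data: one feature vector per plot image, disease severity (%) as target.
rng = np.random.default_rng(0)
X = np.stack([rgb_features(rng.integers(0, 256, (64, 64, 3))) for _ in range(200)])
y = rng.uniform(0, 100, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out images:", model.score(X_te, y_te))
```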

14 pages, 1390 KB  
Article
Late Gadolinium Enhancement Variation in Asymptomatic Individuals: Comparison with Dilated Cardiomyopathy
by Seoyeon Park, Soo Jin Cho, Sung Mok Kim, Moon Young Kim and Yeon Hyeon Choe
J. Cardiovasc. Dev. Dis. 2025, 12(8), 312; https://doi.org/10.3390/jcdd12080312 - 18 Aug 2025
Viewed by 348
Abstract
Late gadolinium enhancements (LGEs) appear in asymptomatic individuals as septal stripes, which mimic abnormal LGEs in patients with dilated cardiomyopathy (DCM). We aimed to evaluate the frequency and extent of LGE variation in asymptomatic individuals and to compare it with that of the DCM group. This retrospective study included asymptomatic and DCM groups that underwent CMR imaging. LGE was defined as myocardial signal intensity more than five standard deviations above that of normal myocardium. LGE was evaluated at the right ventricular insertion points (RVIPs) and the mid-interventricular septum. A total of 273 asymptomatic individuals (age, 54.3 ± 5.8 years, 209 males) and 100 patients with DCM (age, 55.3 ± 4.9 years, 73 males) were included. LGE was observed in 99.3% of the asymptomatic group and 100% of the DCM group. The average number of myocardial segments with LGE was distinguishable between the asymptomatic and DCM groups (5.5 ± 1.7 vs. 7.6 ± 2.2; p < 0.001). The thickness of LGE differed between the two groups in the mid-septum (4.5 ± 1.3 mm vs. 5.7 ± 1.8 mm; p < 0.001), upper RVIP (6.1 ± 1.9 mm vs. 8.7 ± 2.7 mm; p < 0.001), and lower RVIP (6.4 ± 2.3 mm vs. 8.6 ± 2.8 mm; p < 0.001). Considerable overlap in LGE was observed between the asymptomatic and DCM groups despite their different LGE characteristics. LGE within the normal range should not be interpreted as an abnormal finding in the evaluation of myocardial diseases, including DCM. Full article
(This article belongs to the Special Issue Cardiovascular Magnetic Resonance in Cardiology Practice: 2nd Edition)
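The five-standard-deviation rule for flagging LGE is concrete enough to express directly; this is a hedged NumPy sketch in which `lge_image` and the remote-myocardium mask are hypothetical arrays, not the study's actual segmentation or acquisition.

```python
import numpy as np

def lge_mask(lge_image, remote_myocardium_mask, n_sd=5.0):
    """Flag pixels whose signal exceeds the mean of remote (normal) myocardium
    by n_sd standard deviations -- the '5-SD' rule quoted in the abstract."""
    remote = lge_image[remote_myocardium_mask]
    threshold = remote.mean() + n_sd * remote.std()
    return lge_image > threshold

# Hypothetical example: synthetic short-axis slice with a bright septal stripe.
rng = np.random.default_rng(1)
img = rng.normal(100, 10, (256, 256))
img[120:136, 100:160] += 80                       # simulated enhancement
normal = np.zeros_like(img, dtype=bool)
normal[180:200, 100:160] = True                   # remote myocardium ROI
mask = lge_mask(img, normal)
print("LGE pixels:", int(mask.sum()))
```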

19 pages, 3130 KB  
Article
Deep Learning-Based Instance Segmentation of Galloping High-Speed Railway Overhead Contact System Conductors in Video Images
by Xiaotong Yao, Huayu Yuan, Shanpeng Zhao, Wei Tian, Dongzhao Han, Xiaoping Li, Feng Wang and Sihua Wang
Sensors 2025, 25(15), 4714; https://doi.org/10.3390/s25154714 - 30 Jul 2025
Viewed by 377
Abstract
The conductors of high-speed railway OCSs (Overhead Contact Systems) are susceptible to conductor galloping due to the impact of natural elements such as strong winds, rain, and snow, resulting in conductor fatigue damage and significantly compromising train operational safety. Consequently, monitoring the galloping status of conductors is crucial, and instance segmentation techniques, by delineating the pixel-level contours of each conductor, can significantly aid in the identification and study of galloping phenomena. This work expands upon the YOLO11-seg model and introduces an instance segmentation approach for galloping video and image sensor data of OCS conductors. The algorithm, designed for the stripe-like distribution of OCS conductors in the data, employs four-direction Sobel filters to extract edge features in horizontal, vertical, and diagonal orientations. These features are subsequently integrated with the original convolutional branch to form the FDSE (Four Direction Sobel Enhancement) module. It integrates the ECA (Efficient Channel Attention) mechanism for the adaptive augmentation of conductor characteristics and utilizes the FL (Focal Loss) function to mitigate the class-imbalance issue between positive and negative samples, hence enhancing the model’s sensitivity to conductors. Consequently, segmentation outcomes from neighboring frames are utilized, and mask-difference analysis is performed to autonomously detect conductor galloping locations, emphasizing their contours for the clear depiction of galloping characteristics. Experimental results demonstrate that the enhanced YOLO11-seg model achieves 85.38% precision, 77.30% recall, 84.25% AP@0.5, 81.14% F1-score, and a real-time processing speed of 44.78 FPS. When combined with the galloping visualization module, it can issue real-time alerts of conductor galloping anomalies, providing robust technical support for railway OCS safety monitoring. Full article
(This article belongs to the Section Industrial Sensors)
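A sketch of the four-direction Sobel idea behind the FDSE module: fixed 3×3 kernels produce horizontal, vertical, and two diagonal edge maps that could be stacked alongside learned features. The kernel values and the fusion with the convolutional branch are assumptions, not the paper's exact module.

```python
import numpy as np
from scipy.ndimage import convolve

# 3x3 Sobel-style kernels for 0°, 90°, 45°, and 135° edge responses.
SOBEL_4 = {
    "horizontal": np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float),
    "vertical":   np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float),
    "diag_45":    np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], dtype=float),
    "diag_135":   np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]], dtype=float),
}

def four_direction_edges(gray):
    """Stack the four directional edge maps as extra channels (H, W, 4)."""
    return np.stack([np.abs(convolve(gray, k)) for k in SOBEL_4.values()], axis=-1)

rng = np.random.default_rng(2)
frame = rng.random((480, 640))          # stand-in for a grayscale OCS video frame
edges = four_direction_edges(frame)
print(edges.shape)                      # (480, 640, 4)
```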

17 pages, 6432 KB  
Article
Intelligent Battery-Designed System for Edge-Computing-Based Farmland Pest Monitoring System
by Chung-Wen Hung, Chun-Chieh Wang, Zheng-Jie Liao, Yu-Hsing Su and Chun-Liang Liu
Electronics 2025, 14(15), 2927; https://doi.org/10.3390/electronics14152927 - 22 Jul 2025
Viewed by 397
Abstract
Cruciferous vegetables are popular in Asian dishes. However, striped flea beetles prefer to feed on leaves, which can damage the appearance of crops and reduce their economic value. Due to the lack of pest monitoring, the occurrence of pests is often irregular and unpredictable. Regular and quantitative spraying of pesticides for pest control is an alternative method. Nevertheless, this requires manual execution and is inefficient. This paper presents a system powered by solar energy, utilizing batteries and supercapacitors for energy storage to support the implementation of edge AI devices in outdoor environments. Raspberry Pi is utilized for artificial intelligence image recognition and the Internet of Things (IoT). YOLOv5 is implemented on the edge device, Raspberry Pi, for detecting striped flea beetles, and StyleGAN3 is also utilized for data augmentation in the proposed system. The recognition accuracy reaches 85.4%, and the results are transmitted to the server through a 4G network. The experimental results indicate that the system can operate effectively for an extended period. This system enhances sustainability and reliability and greatly improves the practicality of deploying smart pest detection technology in remote or resource-limited agricultural areas. In subsequent applications, drones can plan routes for pesticide spraying based on the distribution of pests. Full article
(This article belongs to the Special Issue Battery Health Management for Cyber-Physical Energy Storage Systems)

25 pages, 13659 KB  
Article
Adaptive Guided Filtering and Spectral-Entropy-Based Non-Uniformity Correction for High-Resolution Infrared Line-Scan Images
by Mingsheng Huang, Yanghang Zhu, Qingwu Duan, Yaohua Zhu, Jingyu Jiang and Yong Zhang
Sensors 2025, 25(14), 4287; https://doi.org/10.3390/s25144287 - 9 Jul 2025
Viewed by 390
Abstract
Stripe noise along the scanning direction significantly degrades the quality of high-resolution infrared line-scan images and impairs downstream tasks such as target detection and radiometric analysis. This paper presents a lightweight, single-frame, reference-free non-uniformity correction (NUC) method tailored for such images. The proposed approach enhances the directionality of stripe noise by projecting the 2D image into a 1D row-mean signal, followed by adaptive guided filtering driven by local median absolute deviation (MAD) to ensure spatial adaptivity and structure preservation. A spectral-entropy-constrained frequency-domain masking strategy is further introduced to suppress periodic and non-periodic interference. Extensive experiments on simulated and real datasets demonstrate that the method consistently outperforms six state-of-the-art algorithms across multiple metrics while maintaining the fastest runtime. The proposed method is highly suitable for real-time deployment in airborne, satellite-based, and embedded infrared imaging systems. It provides a robust and interpretable framework for future infrared enhancement tasks. Full article
(This article belongs to the Section Optical Sensors)
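A rough single-frame illustration of the projection-then-correct idea: the image is reduced to a 1-D row-mean signal, a smoother whose strength adapts to the local median absolute deviation (MAD) estimates the scene component, and the residual per-row offsets are removed. The guided filter and spectral-entropy mask of the paper are replaced here by a simple MAD-weighted blend of two box filters, so this is only a sketch of the structure, not the published algorithm.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter1d

def destripe_row_mean(img, win=31, k=1.4826):
    """Single-frame stripe-correction sketch: project the image onto a 1-D
    row-mean signal, estimate its smooth (scene) component adaptively, and
    subtract the residual stripe offsets from each row."""
    row_mean = img.mean(axis=1)                                   # (H,) projection
    local_med = median_filter(row_mean, size=win, mode="nearest")
    mad = k * median_filter(np.abs(row_mean - local_med), size=win, mode="nearest")
    # Blend light and heavy smoothing: rows with large local MAD (likely real
    # structure) keep more detail, flat rows are smoothed harder.
    light = uniform_filter1d(row_mean, size=5, mode="nearest")
    heavy = uniform_filter1d(row_mean, size=win, mode="nearest")
    w = mad / (mad.max() + 1e-9)
    baseline = w * light + (1.0 - w) * heavy
    stripe = row_mean - baseline                                   # per-row offset
    return img - stripe[:, None]

rng = np.random.default_rng(3)
clean = np.tile(np.linspace(0, 1, 512), (512, 1)).T               # smooth scene
noisy = clean + rng.normal(0, 0.05, size=512)[:, None]            # row-wise stripes
corrected = destripe_row_mean(noisy)
print(float(np.abs(corrected - clean).mean()))
```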

20 pages, 9959 KB  
Article
Compensation of Speckle Noise in 2D Images from Triangulation Laser Profile Sensors Using Local Column Median Vectors with an Application in a Quality Control System
by Paweł Rotter, Dawid Knapik, Maciej Klemiato, Maciej Rosół and Grzegorz Putynkowski
Sensors 2025, 25(11), 3426; https://doi.org/10.3390/s25113426 - 29 May 2025
Cited by 1 | Viewed by 557
Abstract
The main function of triangulation-based laser profile sensors—also referred to as laser profilometers or profilers—is the three-dimensional scanning of moving objects using laser triangulation. In addition to capturing 3D data, these profilometers simultaneously generate grayscale images of the scanned objects. However, the quality of these images is often degraded due to interference of the laser light, manifesting as speckle noise. In profilometer images, this noise typically appears as vertical stripes. Unlike the column fixed pattern noise commonly observed in TDI CMOS cameras, the positions of these stripes are not stationary. Consequently, conventional algorithms for removing fixed pattern noise yield unsatisfactory results when applied to profilometer images. In this article, we propose an effective method for suppressing speckle noise in profilometer images of flat surfaces, based on local column median vectors. The method was evaluated across a variety of surface types and compared against existing approaches using several metrics, including the standard deviation of the column mean vector (SDCMV), frequency spectrum analysis, and standard image quality assessment measures. Our results demonstrate a substantial improvement in reducing column speckle noise: the SDCMV value achieved with our method is 2.5 to 5 times lower than that obtained using global column median values, and the root mean square (RMS) of the frequency spectrum in the noise-relevant region is reduced by nearly an order of magnitude. General image quality metrics also indicate moderate enhancement: peak signal-to-noise ratio (PSNR) increased by 2.12 dB, and the structural similarity index (SSIM) improved from 0.929 to 0.953. The primary limitation of the proposed method is its applicability only to flat surfaces. Nonetheless, we successfully implemented it in an optical inspection system for the furniture industry, where the post-processed image quality was sufficient to detect surface defects as small as 0.1 mm. Full article
(This article belongs to the Section Sensing and Imaging)
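The local-column-median correction lends itself to a compact sketch: within each horizontal band, a per-column median vector is estimated and flattened. As the abstract notes, this kind of correction presumes a (locally) flat surface; the block size and the re-levelling step below are assumptions.

```python
import numpy as np

def remove_column_speckle(img, block_rows=64):
    """Sketch of column-stripe suppression with local column median vectors:
    for each band of `block_rows` rows, compute a per-column median and
    flatten it so every column in the band shares the band's overall level."""
    out = img.astype(np.float64).copy()
    for r0 in range(0, img.shape[0], block_rows):
        band = out[r0:r0 + block_rows]
        col_med = np.median(band, axis=0)            # local column median vector
        out[r0:r0 + block_rows] = band - col_med + np.median(col_med)
    return out

rng = np.random.default_rng(4)
flat = np.full((512, 512), 128.0)
stripes = rng.normal(0, 6, size=512)                  # per-column speckle offsets
noisy = flat + stripes[None, :]
# Analogue of the SDCMV metric: spread of the column-mean vector after correction.
print(np.std(remove_column_speckle(noisy).mean(axis=0)))
```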

17 pages, 1160 KB  
Article
Real-Time Seam Extraction Using Laser Vision Sensing: Hybrid Approach with Dynamic ROI and Optimized RANSAC
by Guojun Chen, Yanduo Zhang, Yuming Ai, Baocheng Yu and Wenxia Xu
Sensors 2025, 25(11), 3268; https://doi.org/10.3390/s25113268 - 22 May 2025
Viewed by 766
Abstract
Laser vision sensors for weld seam extraction face critical challenges due to arc light and spatter interference in welding environments. This paper presents a real-time weld seam extraction method. The proposed framework enhances robustness through the sequential processing of historical frame data. First, an initial noise-free laser stripe image of the weld seam is acquired prior to arc ignition, from which the laser stripe region and slope characteristics are extracted. Subsequently, during welding, a dynamic region of interest (ROI) is generated for the current frame based on the preceding frame, effectively suppressing spatter and arc interference. Within the ROI, adaptive Otsu thresholding segmentation and morphological filtering are applied to isolate the laser stripe. An optimized RANSAC algorithm, incorporating slope constraints derived from historical frames, is then employed to achieve robust laser stripe fitting. The geometric center coordinates of the weld seam are derived through the rigorous analysis of the optimized laser stripe profile. Experimental results from various types of weld seam extraction validated the accuracy and real-time performance of the proposed method. Full article
(This article belongs to the Topic Innovation, Communication and Engineering)
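A hedged sketch of the per-frame steps the abstract lists: Otsu segmentation and morphological filtering inside a ROI propagated from the previous frame, then a RANSAC line fit that rejects hypotheses whose slope strays from the slope learned from earlier frames. Tolerances, the ROI format, and the synthetic frame are illustrative only.

```python
import numpy as np
import cv2

def slope_constrained_ransac(points, prior_slope, slope_tol=0.2,
                             dist_tol=2.0, iters=200, rng=None):
    """Fit y = a*x + b to laser-stripe points, rejecting hypotheses whose slope
    strays from the slope observed in earlier (pre-arc) frames."""
    if rng is None:
        rng = np.random.default_rng(0)
    best, best_inliers = None, 0
    for _ in range(iters):
        (x1, y1), (x2, y2) = points[rng.choice(len(points), 2, replace=False)]
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)
        if abs(a - prior_slope) > slope_tol:          # historical-slope constraint
            continue
        b = y1 - a * x1
        d = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = int((d < dist_tol).sum())
        if inliers > best_inliers:
            best, best_inliers = (a, b), inliers
    return best

def stripe_points(frame, roi):
    """Segment the stripe inside the dynamic ROI with Otsu + morphology and
    return its pixel coordinates in full-image coordinates."""
    x, y, w, h = roi                                   # ROI propagated from previous frame
    patch = frame[y:y + h, x:x + w]
    _, bw = cv2.threshold(patch, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    bw = cv2.morphologyEx(bw, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    ys, xs = np.nonzero(bw)
    return np.column_stack([xs + x, ys + y]).astype(float)

frame = np.zeros((480, 640), np.uint8)
cv2.line(frame, (100, 200), (500, 260), 255, 5)        # synthetic laser stripe
pts = stripe_points(frame, roi=(80, 150, 460, 160))
print(slope_constrained_ransac(pts, prior_slope=0.15))
```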

28 pages, 3815 KB  
Article
Collaborative Static-Dynamic Teaching: A Semi-Supervised Framework for Stripe-like Space Target Detection
by Zijian Zhu, Ali Zia, Xuesong Li, Bingbing Dan, Yuebo Ma, Hongfeng Long, Kaili Lu, Enhai Liu and Rujin Zhao
Remote Sens. 2025, 17(8), 1341; https://doi.org/10.3390/rs17081341 - 9 Apr 2025
Cited by 1 | Viewed by 588
Abstract
Stripe-like space target detection (SSTD) plays a crucial role in advancing space situational awareness, enabling missions like satellite navigation and debris monitoring. Existing unsupervised methods often falter in low signal-to-noise ratio (SNR) conditions, while fully supervised approaches require extensive and labor-intensive pixel-level annotations. To address these limitations, this paper introduces MRSA-Net, a novel encoder-decoder network specifically designed for SSTD. MRSA-Net incorporates multi-receptive field processing and multi-level feature fusion to effectively extract features of variable and low-SNR stripe-like targets. Building upon this, we propose the Collaborative Static-Dynamic Teaching (CSDT) architecture, a semi-supervised learning architecture that reduces reliance on labeled data by leveraging both static and dynamic teacher models. The framework uses the straight-line prior of stripe-like targets to customize linearity and presents an innovative Adaptive Pseudo-Labeling (APL) strategy, dynamically selecting high-quality pseudo-labels to enhance the student model’s learning process. Extensive experiments on AstroStripeSet and other real-world datasets demonstrate that the CSDT framework achieves state-of-the-art performance in SSTD. Using just 1/16 of the labeled data, CSDT outperforms the second-best Interactive Self-Training Mean Teacher (ISMT) method by 2.64% in mean Intersection over Union (mIoU) and 4.5% in detection rate (Pd), while exhibiting strong generalization in unseen scenarios. This work marks the first application of semi-supervised learning techniques to SSTD, offering a flexible and scalable solution for challenging space imaging tasks. Full article
(This article belongs to the Section AI Remote Sensing)
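As a toy stand-in for the straight-line prior used in adaptive pseudo-labeling, one could score how line-like each predicted mask is (dominant-eigenvalue ratio of the foreground-pixel covariance) and keep only the most linear pseudo-labels against a batch-adaptive threshold; this illustrates the idea only and is not the CSDT/APL criterion itself.

```python
import numpy as np

def linearity_score(mask):
    """How line-like a binary mask is: fraction of total variance carried by the
    dominant eigenvector of the foreground-pixel covariance (1.0 = perfect line)."""
    ys, xs = np.nonzero(mask)
    if len(xs) < 2:
        return 0.0
    cov = np.cov(np.stack([xs, ys]).astype(float))
    evals = np.sort(np.linalg.eigvalsh(cov))
    return float(evals[1] / (evals.sum() + 1e-9))

def select_pseudo_labels(masks, quantile=0.7):
    """Keep the most line-like predictions; the threshold adapts to the batch."""
    scores = np.array([linearity_score(m) for m in masks])
    thr = np.quantile(scores, quantile)
    return [m for m, s in zip(masks, scores) if s >= thr], thr

# Hypothetical batch: one stripe-like mask and one blob-like mask.
stripe = np.zeros((64, 64), bool)
stripe[np.arange(64), np.arange(64)] = True
blob = np.zeros((64, 64), bool)
blob[20:40, 20:40] = True
kept, thr = select_pseudo_labels([stripe, blob], quantile=0.5)
print(len(kept), round(float(thr), 3))
```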

24 pages, 5485 KB  
Article
A Machine Learning Algorithm Using Texture Features for Nighttime Cloud Detection from FY-3D MERSI L1 Imagery
by Yilin Li, Yuhao Wu, Jun Li, Anlai Sun, Naiqiang Zhang and Yonglou Liang
Remote Sens. 2025, 17(6), 1083; https://doi.org/10.3390/rs17061083 - 19 Mar 2025
Cited by 1 | Viewed by 633
Abstract
Accurate cloud detection is critical for quantitative applications of satellite-based advanced imager observations, yet nighttime cloud detection presents challenges due to the lack of visible and near-infrared spectral information. Nighttime cloud detection using infrared (IR)-only information needs to be improved. Based on a collocated dataset from Fengyun-3D Medium Resolution Spectral Imager (FY-3D MERSI) Level 1 data and CALIPSO CALIOP lidar Level 2 product, this study proposes a novel framework leveraging Light Gradient-Boosting Machine (LGBM), integrated with grey level co-occurrence matrix (GLCM) features extracted from IR bands, to enhance nighttime cloud detection capabilities. The LGBM model with GLCM features demonstrates significant improvements, achieving an overall accuracy (OA) exceeding 85% and an F1-Score (F1) of nearly 0.9 when validated with an independent CALIOP lidar Level 2 product. Compared to the threshold-based algorithm that has been used operationally, the proposed algorithm exhibits superior and more stable performance across varying solar zenith angles, surface types, and cloud altitudes. Notably, the method produced over 82% OA over the cryosphere surface. Furthermore, compared to LGBM models without GLCM inputs, the enhanced model effectively mitigates the thermal stripe effect of MERSI L1 data, yielding more accurate cloud masks. Further evaluation with collocated MODIS-Aqua cloud mask product indicates that the proposed algorithm delivers more precise cloud detection (OA: 90.30%, F1: 0.9397) compared to that of the MODIS product (OA: 84.66%, F1: 0.9006). This IR-alone algorithm advancement offers a reliable tool for nighttime cloud detection, significantly enhancing the quantitative applications of satellite imager observations. Full article
(This article belongs to the Section Remote Sensing Image Processing)
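A condensed sketch of the feature/classifier pairing named in the abstract: GLCM texture statistics from an IR patch (via scikit-image) feeding a LightGBM classifier. The quantisation, offsets, selected properties, and synthetic labels are assumptions; the operational bands and CALIOP collocation are not reproduced.

```python
import numpy as np
import lightgbm as lgb
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch, levels=16):
    """GLCM texture statistics for one IR brightness-temperature patch,
    quantised to `levels` grey levels, over four angles at distance 1."""
    q = np.digitize(patch, np.linspace(patch.min(), patch.max() + 1e-6, levels)) - 1
    glcm = graycomatrix(q.astype(np.uint8), distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

# Hypothetical training set: IR patches labelled cloudy/clear (e.g. from collocated lidar).
rng = np.random.default_rng(5)
patches = rng.normal(260, 10, size=(300, 9, 9))
X = np.stack([glcm_features(p) for p in patches])
y = rng.integers(0, 2, size=300)                    # 1 = cloudy, 0 = clear
clf = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05).fit(X, y)
print(clf.predict(X[:5]))
```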

17 pages, 7698 KB  
Article
Plant Disease Segmentation Networks for Fast Automatic Severity Estimation Under Natural Field Scenarios
by Chenyi Zhao, Changchun Li, Xin Wang, Xifang Wu, Yongquan Du, Huabin Chai, Taiyi Cai, Hengmao Xiang and Yinghua Jiao
Agriculture 2025, 15(6), 583; https://doi.org/10.3390/agriculture15060583 - 10 Mar 2025
Cited by 1 | Viewed by 1409
Abstract
The segmentation of plant disease images enables researchers to quantify the proportion of disease spots on leaves, known as disease severity. Current deep learning methods predominantly focus on single diseases, simple lesions, or laboratory-controlled environments. In this study, we established and publicly released image datasets of field scenarios for three diseases: soybean bacterial blight (SBB), wheat stripe rust (WSR), and cedar apple rust (CAR). We developed Plant Disease Segmentation Networks (PDSNets) based on LinkNet with ResNet-18 as the encoder, including three versions: ×1.0, ×0.75, and ×0.5. The ×1.0 version incorporates a 4 × 4 embedding layer to enhance prediction speed, while versions ×0.75 and ×0.5 are lightweight variants with reduced channel numbers within the same architecture. Their parameter counts are 11.53 M, 6.50 M, and 2.90 M, respectively. PDSNetx0.5 achieved an overall F1 score of 91.96%, an Intersection over Union (IoU) of 85.85% for segmentation, and a coefficient of determination (R2) of 0.908 for severity estimation. On a local central processing unit (CPU), PDSNetx0.5 demonstrated a prediction speed of 34.18 images (640 × 640 pixels) per second, which is 2.66 times faster than LinkNet. Our work provides an efficient and automated approach for assessing plant disease severity in field scenarios. Full article
(This article belongs to the Special Issue Computational, AI and IT Solutions Helping Agriculture)
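How a "4 × 4 embedding layer" might be realised for speed: a strided convolution that turns every 4 × 4 patch into one feature vector, shrinking the feature map sixteen-fold before the encoder. Channel width, normalisation, and activation below are assumptions, not the PDSNet definition.

```python
import torch
import torch.nn as nn

class PatchEmbed4x4(nn.Module):
    """A 4 x 4 embedding layer: a strided convolution that maps each 4 x 4 pixel
    patch to one feature vector, reducing spatial size 4x before the encoder."""
    def __init__(self, in_ch=3, embed_dim=64):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, embed_dim, kernel_size=4, stride=4)
        self.norm = nn.BatchNorm2d(embed_dim)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.norm(self.proj(x)))

x = torch.randn(1, 3, 640, 640)                 # one field image
print(PatchEmbed4x4()(x).shape)                 # torch.Size([1, 64, 160, 160])
```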

16 pages, 11961 KB  
Article
Dual-Encoder UNet-Based Narrowband Uncooled Infrared Imaging Denoising Network
by Minghe Wang, Pan Yuan, Su Qiu, Weiqi Jin, Li Li and Xia Wang
Sensors 2025, 25(5), 1476; https://doi.org/10.3390/s25051476 - 27 Feb 2025
Cited by 3 | Viewed by 1078
Abstract
Uncooled infrared imaging systems have significant potential in industrial hazardous gas leak detection. However, the use of narrowband filters to match gas spectral absorption peaks leads to a low level of incident energy captured by uncooled infrared cameras. This results in a mixture of fixed pattern noise and Gaussian noise, while existing denoising methods for uncooled infrared images struggle to effectively address this mixed noise, severely hindering the extraction and identification of actual gas leak plumes. This paper presents a UNet-structured dual-encoder denoising network specifically designed for narrowband uncooled infrared images. Based on the distinct characteristics of Gaussian random noise and row–column stripe noise, we developed a basic scale residual attention (BSRA) encoder and an enlarged scale residual attention (ESRA) encoder. These two encoder branches perform noise perception and encoding across different receptive fields, allowing for the fusion of noise features from both scales. The combined features are then input into the decoder for reconstruction, resulting in high-quality infrared images. Experimental results demonstrate that our method effectively denoises composite noise, achieving the best results according to both objective metrics and subjective evaluations. This research method significantly enhances the signal-to-noise ratio of narrowband uncooled infrared images, demonstrating substantial application potential in fields such as industrial hazardous gas detection, remote sensing imaging, and medical imaging. Full article
(This article belongs to the Special Issue Optical Sensors for Industrial Applications)
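A toy PyTorch sketch of the dual-encoder layout described: a basic-scale branch and an enlarged-receptive-field branch encode the frame separately, their features are fused, and a decoder predicts a residual noise estimate. The attention blocks (BSRA/ESRA), the UNet skip structure, and training details are omitted; layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class DualEncoderDenoiser(nn.Module):
    """Toy dual-encoder sketch: a basic-scale branch (plain 3x3 convs) and an
    enlarged-scale branch (dilated convs, larger receptive field) encode the
    noisy frame separately; their features are concatenated and decoded back
    to a residual noise estimate that is subtracted from the input."""
    def __init__(self, ch=32):
        super().__init__()
        self.basic = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.enlarged = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=4, dilation=4), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, x):
        feats = torch.cat([self.basic(x), self.enlarged(x)], dim=1)
        return x - self.decoder(feats)          # predict noise, subtract it

noisy = torch.randn(1, 1, 288, 384)             # stand-in narrowband uncooled IR frame
print(DualEncoderDenoiser()(noisy).shape)
```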

26 pages, 5624 KB  
Article
Combining Global Features and Local Interoperability Optimization Method for Extracting and Connecting Fine Rivers
by Jian Xu, Xianjun Gao, Zaiai Wang, Guozhong Li, Hualong Luan, Xuejun Cheng, Shiming Yao, Lihua Wang, Sunan Shi, Xiao Xiao and Xudong Xie
Remote Sens. 2025, 17(5), 742; https://doi.org/10.3390/rs17050742 - 20 Feb 2025
Viewed by 637
Abstract
Due to the inherent limitations in remote sensing image quality, seasonal variations, and radiometric inconsistencies, river extraction based on remote sensing image classification often results in omissions. These challenges are particularly pronounced in the detection of narrow and complex river networks, where fine river features are frequently underrepresented, leading to fragmented and discontinuous water body extraction. To address these issues and enhance both the completeness and accuracy of fine river identification, this study proposes an advanced fine river extraction and optimization method. Firstly, a linear river feature enhancement algorithm for preliminary optimization is introduced, which combines Frangi filtering with an improved GA-OTSU segmentation technique. By thoroughly analyzing the global features of high-resolution remote sensing images, Frangi filtering is employed to enhance the river linear characteristics. Subsequently, the improved GA-OTSU thresholding algorithm is applied for feature segmentation, yielding the initial results. In the next stage, to preserve the original river topology and ensure stripe continuity, a river skeleton refinement algorithm is utilized to retain critical skeletal information about the river networks. Following this, river endpoints are identified using a connectivity domain labeling algorithm, and the bounding rectangles of potential disconnected regions are delineated. To address discontinuities, river endpoints are shifted and reconnected based on structural similarity index (SSIM) metrics, effectively bridging gaps in the river network. Finally, a nonlinear water optimization step combining K-means clustering segmentation, topology and spectral inspection, and small-area removal is designed to recover missed water bodies and remove remaining non-water bodies. Experimental results demonstrate that the proposed method significantly improves the regularization and completeness of river extraction, particularly in cases of fine, narrow, and discontinuous river features. The approach ensures more reliable and consistent river delineation, making the extracted results more robust and applicable for practical hydrological and environmental analyses. Full article
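The linear-feature stage can be approximated with off-the-shelf scikit-image pieces: Frangi filtering to enhance narrow ridge-like rivers, a plain Otsu threshold standing in for the improved GA-OTSU, and skeletonisation to keep topology for the later gap-bridging step. Parameters and the synthetic image are illustrative only.

```python
import numpy as np
from skimage.filters import frangi, threshold_otsu
from skimage.morphology import skeletonize, remove_small_objects
from skimage.measure import label

def fine_river_candidates(gray, min_size=50):
    """Sketch of the linear-feature stage: Frangi filtering enhances narrow,
    ridge-like rivers; plain Otsu stands in for the improved GA-OTSU threshold;
    skeletonisation keeps the network topology for later gap bridging."""
    ridges = frangi(gray, sigmas=range(1, 6), black_ridges=False)
    binary = ridges > threshold_otsu(ridges)
    binary = remove_small_objects(binary, min_size=min_size)
    skeleton = skeletonize(binary)
    return binary, skeleton, label(skeleton)     # labelled components ~ river segments

rng = np.random.default_rng(6)
img = rng.normal(0.2, 0.02, (256, 256))
img[:, 120:124] = 0.8                            # synthetic narrow "river"
binary, skel, labels = fine_river_candidates(img)
print(binary.sum(), skel.sum(), labels.max())
```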

17 pages, 7393 KB  
Article
Laser Stripe Centerline Extraction Method for Deep-Hole Inner Surfaces Based on Line-Structured Light Vision Sensing
by Huifu Du, Daguo Yu, Xiaowei Zhao and Ziyang Zhou
Sensors 2025, 25(4), 1113; https://doi.org/10.3390/s25041113 - 12 Feb 2025
Viewed by 1175
Abstract
This paper proposes a point cloud post-processing method based on the minimum spanning tree (MST) and depth-first search (DFS) to extract laser stripe centerlines from the complex inner surfaces of deep holes. Addressing the limitations of traditional image processing methods, which are affected by burrs and low-frequency random noise, this method utilizes 360° structured light to illuminate the inner wall of the deep hole. A sensor captures laser stripe images, and the Steger algorithm is employed to extract sub-pixel point clouds. Subsequently, an MST is used to construct the point cloud connectivity structure, while DFS is applied for path search and noise removal to enhance extraction accuracy. Experimental results demonstrate that this method significantly improves extraction accuracy, with a dice similarity coefficient (DSC) approaching 1 and a maximum Hausdorff distance (HD) of 3.3821 pixels, outperforming previous methods. This study provides an efficient and reliable solution for the precise extraction of complex laser stripes and lays a solid data foundation for subsequent feature parameter calculations and 3D reconstruction. Full article
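A compact sketch of the MST + DFS post-processing: sub-pixel centre points (e.g. from the Steger detector) are linked by a minimum spanning tree and walked depth-first from an extreme point to yield an ordered centreline. Branch pruning and the 360° optics are outside this sketch, and the curve below is synthetic.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, depth_first_order
from scipy.spatial.distance import cdist

def order_stripe_points(points):
    """Connect the sub-pixel centre points with a minimum spanning tree, then
    walk it depth-first from one end to obtain an ordered centreline; short
    side branches visited out of order could later be pruned as noise."""
    dist = cdist(points, points)
    mst = minimum_spanning_tree(csr_matrix(dist))        # sparse tree over points
    start = int(np.argmax(points[:, 0]))                 # begin at an extreme point
    order, _ = depth_first_order(mst, start, directed=False)
    return points[order]

# Hypothetical noisy centre points along a curved stripe, in random order.
t = np.linspace(0, np.pi, 60)
pts = np.column_stack([100 * np.cos(t), 40 * np.sin(t)])
rng = np.random.default_rng(7)
pts = rng.permutation(pts + rng.normal(0, 0.3, pts.shape))
ordered = order_stripe_points(pts)
print(ordered[:3].round(1))
```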

20 pages, 7824 KB  
Article
Research on a Feature Point Detection Algorithm for Weld Images Based on Deep Learning
by Shaopeng Kang, Hongbin Qiang, Jing Yang, Kailei Liu, Wenbin Qian, Wenpeng Li and Yanfei Pan
Electronics 2024, 13(20), 4117; https://doi.org/10.3390/electronics13204117 - 18 Oct 2024
Cited by 2 | Viewed by 1944
Abstract
Laser vision seam tracking enhances robotic welding by enabling external information acquisition, thus improving the overall intelligence of the welding process. However, camera images captured during welding often suffer from distortion due to strong noises, including arcs, splashes, and smoke, which adversely affect the accuracy and robustness of feature point detection. To mitigate these issues, we propose a feature point extraction algorithm tailored for weld images, utilizing an improved Deeplabv3+ semantic segmentation network combined with EfficientDet. By replacing Deeplabv3+’s backbone with MobileNetV2, we enhance prediction efficiency. The DenseASPP structure and attention mechanism are implemented to focus on laser stripe edge extraction, resulting in cleaner laser stripe images and minimizing noise interference. Subsequently, EfficientDet extracts feature point positions from these cleaned images. Experimental results demonstrate that, across four typical weld types, the average feature point extraction error is maintained below 1 pixel, with over 99% of errors falling below 3 pixels, indicating both high detection accuracy and reliability. Full article

9 pages, 1460 KB  
Article
Atmospheric Gravity Wave Detection in Low-Light Images: A Transfer Learning Approach
by Beimin Xiao, Shensen Hu, Weihua Ai and Yi Li
Electronics 2024, 13(20), 4030; https://doi.org/10.3390/electronics13204030 - 13 Oct 2024
Cited by 1 | Viewed by 1252
Abstract
Atmospheric gravity waves, as a key fluctuation in the atmosphere, have a significant impact on climate change and weather processes. Traditional observation methods rely on manually identifying and analyzing gravity wave stripe features from satellite images, resulting in a limited number of gravity wave events for parameter analysis and excitation mechanism studies, which restricts further related research. In this study, we focus on gravity wave events in the South China Sea region and utilize a one-year low-light satellite dataset processed with wavelet transform noise reduction and light pixel replacement. Furthermore, transfer learning is employed to adapt the Inception V3 model to the classification task of a small-sample dataset, enabling automatic identification of gravity waves in low-light images. By employing sliding-window cropping and data augmentation techniques, we further expand the dataset and enhance the generalization ability of the model. We compare the transfer-learning detection results of the Inception V3 model with those of the YOLO v10 model, showing that the Inception V3 model performs substantially better. The accuracy on the test dataset is 88.2%. Full article
(This article belongs to the Special Issue Artificial Intelligence in Image and Video Processing)
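A hedged Keras sketch of the transfer-learning setup: an ImageNet-pretrained Inception V3 backbone is frozen and a small binary head is trained on low-light tiles. The input size, head layout, optimiser, and the `train_ds`/`val_ds` pipelines are assumptions, not the authors' configuration.

```python
import tensorflow as tf

# Transfer-learning sketch: reuse ImageNet Inception V3 features and train only a
# small head to classify "gravity-wave stripes present / absent" in low-light tiles.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3), pooling="avg")
base.trainable = False                           # freeze the pretrained backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])

# train_ds / val_ds would be hypothetical tf.data pipelines of 299x299 tiles cropped
# from the denoised low-light imagery by a sliding window:
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```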
