Search Results (46)

Search Parameters:
Keywords = canny edge detector

19 pages, 6554 KiB  
Article
A Deep Learning-Based Algorithm for Ceramic Product Defect Detection
by Junxiang Diao, Hua Wei, Yawei Zhou and Zhihua Diao
Appl. Sci. 2025, 15(12), 6641; https://doi.org/10.3390/app15126641 - 12 Jun 2025
Cited by 1 | Viewed by 446
Abstract
In the field of ceramic product defect detection, traditional manual visual inspection methods suffer from low efficiency and high subjectivity, while existing deep learning algorithms are limited in detection efficiency due to their high complexity. To address these challenges, this study proposes a deep learning-based algorithm for ceramic product defect detection. The algorithm designs a lightweight YOLOv10s detector, which reconstructs the backbone network using GhostNet and incorporates an Efficient Channel Attention (ECA) mechanism fused with depthwise separable convolutions, effectively reducing the model’s complexity and computational load. Additionally, an adaptive threshold method is proposed to improve the traditional Canny edge detection algorithm, significantly enhancing its accuracy in defect edge detection. Experimental results demonstrate that the algorithm achieves an mAP@50 of 92.8% and an F1-score of 90.3% in ceramic product defect detection tasks, accurately identifying and locating four types of defects: cracks, glaze missing, damage, and black spots. In crack detection, the average Edge Localization Error (ELE) is reduced by 25%, the Edge Connectivity Rate (ECR) is increased by 15%, the Weak Edge Responsiveness (WER) is improved by 17%, and the frame rate reaches 40 frames per second (f/s), meeting real-time detection requirements. This algorithm exhibits significant potential in the field of ceramic product defect detection, providing solid technical support for optimizing the ceramic product manufacturing process. Full article
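
The adaptive-threshold Canny idea summarized above can be illustrated with the common median-based heuristic. This is a minimal Python/OpenCV sketch under that assumption; the paper's exact adaptive rule is not given in the abstract, so the `auto_canny` helper, the sigma value, and the blur kernel are illustrative.

```python
import cv2
import numpy as np

def auto_canny(gray, sigma=0.33):
    """Canny with hysteresis thresholds derived from the image median (a common adaptive heuristic)."""
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)      # suppress sensor noise before edge detection
    med = float(np.median(blurred))                  # image statistic used to set the thresholds
    low = int(max(0, (1.0 - sigma) * med))           # lower hysteresis threshold
    high = int(min(255, (1.0 + sigma) * med))        # upper hysteresis threshold
    return cv2.Canny(blurred, low, high)

# Usage: edges = auto_canny(cv2.imread("tile.png", cv2.IMREAD_GRAYSCALE))
```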

16 pages, 7816 KiB  
Article
The Initial Attitude Estimation of an Electromagnetic Projectile in the High-Temperature Flow Field Based on Mask R-CNN and the Multi-Constraints Genetic Algorithm
by Jinlong Chen, Miao Yu, Yongcai Guo and Chao Gao
Sensors 2025, 25(12), 3608; https://doi.org/10.3390/s25123608 - 8 Jun 2025
Viewed by 449
Abstract
During the launching process of electromagnetic projectiles, radiated noise, smoke, and debris will interfere with the line of sight and affect the accuracy of initial attitude estimation. To address this issue, an enhanced method that integrates Mask R-CNN and a multi-constraint genetic algorithm was proposed. First, Mask R-CNN was utilized to perform pixel-level edge segmentation of the original image, followed by the Canny algorithm to extract the edge image. This edge image was then processed using the line segment detector (LSD) algorithm to identify the main structural components, characterized by line segments. An enhanced genetic algorithm was employed to restore the occluded edge image. A fitness function, constructed with Hamming distance (HD) constraints alongside initial parameter constraints defined by centroid displacement, was applied to boost convergence speed and avoid local optimization. The optimized search strategy minimized the HD constraint between the repaired stereo images to obtain accurate attitude output. An electromagnetic simulation device was utilized for the experiment. The proposed method was 13 times faster than the Structural Similarity Index (SSIM) method. In a single launch, the target with 70% occlusion was successfully recovered, achieving average deviations of 0.76°, 0.72°, and 0.44° in pitch, roll, and yaw angles, respectively. Full article
(This article belongs to the Section Physical Sensors)
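
A minimal sketch of the edge-and-line-segment step described above, assuming a binary projectile mask already produced by Mask R-CNN; the probabilistic Hough transform is used here as a plain stand-in for the LSD detector, and all thresholds are illustrative.

```python
import cv2
import numpy as np

def structural_segments(gray, mask):
    """Edge image of the masked projectile, then line segments describing its main structure."""
    roi = cv2.bitwise_and(gray, gray, mask=mask)      # keep only the segmented projectile pixels
    edges = cv2.Canny(roi, 50, 150)                   # edge image of the masked region
    # Probabilistic Hough transform used here as a stand-in for the LSD line detector
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                               minLineLength=30, maxLineGap=5)
    return edges, ([] if segments is None else segments[:, 0].tolist())
```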

16 pages, 2204 KiB  
Article
EGSDK-Net: Edge-Guided Stepwise Dual Kernel Update Network for Panoptic Segmentation
by Pengyu Mu, Hongwei Zhao and Ke Ma
Algorithms 2025, 18(2), 71; https://doi.org/10.3390/a18020071 - 1 Feb 2025
Cited by 1 | Viewed by 808
Abstract
In recent years, panoptic segmentation has garnered increasing attention from researchers aiming to better understand scenes in images. Although many excellent studies have been proposed, they share some common unresolved issues. Firstly, panoptic segmentation, as a novel task, is still confined within inherent frameworks. Secondly, the prevalent kernel update strategies do not adequately utilize the information from each stage. To address these two issues, we propose an edge-guided stepwise dual kernel update network (EGSDK-Net) for panoptic segmentation; the core components are the real-time edge guidance module and the stepwise dual kernel update module. The first component, after extracting and positioning edge features through an extra branch, applies these features to the normally transmitted feature maps within the network to highlight the edges. The input image is initially processed with the Canny edge detector to generate and store the predicted edge map, which acts as the ground truth for supervising the extracted edge feature map. The stepwise dual kernel update module enhances the utilization of information by allowing each stage to update both its own kernel and that of the subsequent stage, thereby improving the judgment capabilities of the kernels. EGSDK-Net achieves a PQ of 60.6, representing a 2.19% improvement over RT-K-Net. Full article
(This article belongs to the Special Issue Machine Learning Algorithms for Image Understanding and Analysis)
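
The Canny-generated edge map that supervises the edge branch can be sketched as follows. This is a minimal Python example, assuming fixed Canny thresholds, a small dilation for localisation tolerance, and a hypothetical `edge_logits` tensor produced by the edge branch; none of these details are specified in the abstract.

```python
import cv2
import numpy as np
import torch
import torch.nn.functional as F

def canny_edge_target(image_bgr, size):
    """Binary edge map from the input image, used as the target supervising the edge branch."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                         # predicted edge map acting as ground truth
    edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))      # tolerate small offsets (assumption)
    edges = cv2.resize(edges, size, interpolation=cv2.INTER_NEAREST)
    return torch.from_numpy((edges > 0).astype(np.float32))   # H x W tensor in {0, 1}

# Supervising a hypothetical edge-feature logit map `edge_logits` of shape (1, 1, H, W):
# loss = F.binary_cross_entropy_with_logits(edge_logits, target.unsqueeze(0).unsqueeze(0))
```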

18 pages, 6951 KiB  
Article
Lightweight Deep Learning Framework for Accurate Detection of Sports-Related Bone Fractures
by Akmalbek Abdusalomov, Sanjar Mirzakhalilov, Sabina Umirzakova, Otabek Ismailov, Djamshid Sultanov, Rashid Nasimov and Young-Im Cho
Diagnostics 2025, 15(3), 271; https://doi.org/10.3390/diagnostics15030271 - 23 Jan 2025
Cited by 3 | Viewed by 1861
Abstract
Background/Objectives: Sports-related bone fractures are a common challenge in sports medicine, requiring accurate and timely diagnosis to prevent long-term complications and enable effective treatment. Conventional diagnostic methods often rely on manual interpretation, which is prone to errors and inefficiencies, particularly for subtle and localized fractures. This study aims to develop a lightweight and efficient deep learning-based framework to improve the accuracy and computational efficiency of fracture detection, tailored to the needs of sports medicine. Methods: We proposed a novel fracture detection framework based on the DenseNet121 architecture, incorporating modifications to the initial convolutional block and final layers for optimized feature extraction. Additionally, a Canny edge detector was integrated to enhance the model's ability to detect localized structural discontinuities. A custom-curated dataset of radiographic images focused on common sports-related fractures was used, with preprocessing techniques such as contrast enhancement, normalization, and data augmentation applied to ensure robust model performance. The model was evaluated against state-of-the-art methods using metrics such as accuracy, recall, precision, and computational complexity. Results: The proposed model achieved a state-of-the-art accuracy of 90.3%, surpassing benchmarks like ResNet-50, VGG-16, and EfficientNet-B0. It demonstrated superior sensitivity (recall: 0.89) and specificity (precision: 0.875) while maintaining the lowest computational complexity (FLOPs: 0.54 G, Params: 14.78 M). These results highlight its suitability for real-time clinical deployment. Conclusions: The proposed lightweight framework offers a scalable, accurate, and efficient solution for fracture detection, addressing critical challenges in sports medicine. By enabling rapid and reliable diagnostics, it has the potential to improve clinical workflows and outcomes for athletes. Future work will focus on expanding the model's applications to other imaging modalities and fracture types. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
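
How the Canny map is "integrated" is not detailed in the abstract; one plausible reading is feeding the edge map to the network alongside the radiograph. This is a minimal sketch under that assumption (the `xray_with_edge_channel` helper, input size, and thresholds are all illustrative).

```python
import cv2
import numpy as np

def xray_with_edge_channel(path, size=224):
    """Radiograph plus its Canny edge map stacked into a 3-channel CNN input (one plausible integration)."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.resize(gray, (size, size))
    gray = cv2.equalizeHist(gray)                     # contrast enhancement, as in the preprocessing step
    edges = cv2.Canny(gray, 50, 150)                  # highlights candidate fracture discontinuities
    stacked = np.stack([gray, edges, gray], axis=-1)  # 3-channel tensor compatible with DenseNet121 input
    return stacked.astype(np.float32) / 255.0
```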

16 pages, 1820 KiB  
Article
GAN-Based Map Generation Technique of Aerial Image Using Residual Blocks and Canny Edge Detector
by Jongwook Si and Sungyoung Kim
Appl. Sci. 2024, 14(23), 10963; https://doi.org/10.3390/app142310963 - 26 Nov 2024
Cited by 1 | Viewed by 1168
Abstract
As the significance of meticulous and precise map creation grows in modern Geographic Information Systems (GISs), urban planning, disaster response, and other domains, the necessity for sophisticated map generation technology has become increasingly evident. In response to this demand, this paper puts forward a technique based on Generative Adversarial Networks (GANs) for converting aerial imagery into high-quality maps. The proposed method, comprising a generator and a discriminator, introduces novel strategies to overcome existing challenges; namely, the use of a Canny edge detector and Residual Blocks. The proposed loss function enhances the generator’s performance by assigning greater weight to edge regions using the Canny edge map and eliminating superfluous information. This approach enhances the visual quality of the generated maps and ensures the accurate capture of fine details. The experimental results demonstrate that this method generates maps of superior visual quality, achieving outstanding performance compared to existing methodologies. The results show that the proposed technology has significant potential for practical applications in a range of real-world scenarios. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Image Processing)
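
The edge-weighted loss idea can be sketched as below. This is a minimal PyTorch/OpenCV example assuming an L1 reconstruction term and a fixed edge weight; the paper's actual loss formulation and weighting values are not given in the abstract.

```python
import cv2
import numpy as np
import torch

def edge_weight_map(aerial_bgr, edge_weight=2.0):
    """Per-pixel weights that emphasise Canny edge regions in the generator loss (weight value is an assumption)."""
    gray = cv2.cvtColor(aerial_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    weights = np.where(edges > 0, edge_weight, 1.0).astype(np.float32)
    return torch.from_numpy(weights)                  # H x W weight map

def weighted_l1(fake_map, real_map, weights):
    """L1 reconstruction term with larger weight on edge pixels; fake/real maps are (C, H, W) tensors."""
    return (weights.unsqueeze(0) * (fake_map - real_map).abs()).mean()
```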

23 pages, 12210 KiB  
Article
Mixed Reality-Based Concrete Crack Detection and Skeleton Extraction Using Deep Learning and Image Processing
by Davood Shojaei, Peyman Jafary and Zezheng Zhang
Electronics 2024, 13(22), 4426; https://doi.org/10.3390/electronics13224426 - 12 Nov 2024
Cited by 2 | Viewed by 2325
Abstract
Advancements in image processing and deep learning offer considerable opportunities for automated defect assessment in civil structures. However, these systems cannot work interactively with human inspectors. Mixed reality (MR) can be adopted to address this by involving inspectors in various stages of the assessment process. This paper integrates You Only Look Once (YOLO) v5n and YOLO v5m with the Canny algorithm for real-time concrete crack detection and skeleton extraction with a Microsoft HoloLens 2 MR device. The YOLO v5n demonstrates a superior mean average precision (mAP@0.5) and speed, while YOLO v5m achieves the highest mAP@0.5:0.95 among the other YOLO v5 structures. The Canny algorithm also outperforms the Sobel and Prewitt edge detectors with the highest F1 score. The developed MR-based system could not only be employed for real-time defect assessment but also be utilized for the automatic recording of the location and other specifications of the cracks for further analysis and future re-inspections. Full article
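
A minimal sketch of the crack edge-and-skeleton step, assuming a detection crop taken from a YOLO bounding box and using scikit-image's `skeletonize` for thinning; the thresholds and kernel sizes are illustrative, not the paper's settings.

```python
import cv2
import numpy as np
from skimage.morphology import skeletonize

def crack_skeleton(crop_bgr):
    """One-pixel-wide skeleton of a crack region (e.g., a YOLO detection crop)."""
    gray = cv2.cvtColor(crop_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))  # bridge small gaps
    skeleton = skeletonize(closed > 0)                 # boolean H x W skeleton of the crack
    return skeleton.astype(np.uint8) * 255
```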

21 pages, 17635 KiB  
Article
Evaluation of Image Segmentation Methods for In Situ Quality Assessment in Additive Manufacturing
by Tushar Saini, Panos S. Shiakolas and Christopher McMurrough
Metrology 2024, 4(4), 598-618; https://doi.org/10.3390/metrology4040037 - 1 Nov 2024
Cited by 2 | Viewed by 1890
Abstract
Additive manufacturing (AM), or 3D printing, has revolutionized the fabrication of complex parts, but assessing their quality remains a challenge. Quality assessment, especially for the interior part geometry, relies on post-print inspection techniques unsuitable for real-time in situ analysis. Vision-based approaches could be employed to capture images of any layer during fabrication, and then segmentation methods could be used to identify in-layer features in order to establish dimensional conformity and detect defects for in situ evaluation of the overall part quality. This research evaluated five image segmentation methods (simple thresholding, adaptive thresholding, Sobel edge detector, Canny edge detector, and watershed transform) on the same platform for their effectiveness in isolating and identifying features in 3D-printed layers under different contrast conditions for in situ quality assessment. The performance metrics used are accuracy, precision, recall, and the Jaccard index. The experimental set-up is based on an open-frame fused filament fabrication printer augmented with a vision system. The control system software for printing and imaging (acquisition and processing) was custom developed in Python running on a Raspberry Pi. Most of the segmentation methods reliably segmented the external geometry and high-contrast internal features. The simple thresholding, Canny edge detector, and watershed transform methods did not perform well with low-contrast parts and could not reliably segment internal features when the previous layer was visible. The adaptive thresholding and Sobel edge detector methods segmented high- and low-contrast features. However, the segmentation outputs were heavily affected by textural and image noise. The research identified factors affecting the performance and limitations of these segmentation methods and contributing to the broader effort of improving in situ quality assessment in AM, such as automatic dimensional analysis of internal and external features and the overall geometry. Full article
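
Four of the five compared segmentation methods, plus the Jaccard index used for scoring, can be sketched as follows; the watershed transform is omitted for brevity, and all parameters are illustrative rather than the study's settings.

```python
import cv2
import numpy as np

def jaccard(pred, truth):
    """Jaccard index between two binary masks."""
    p, t = pred > 0, truth > 0
    union = np.logical_or(p, t).sum()
    return float(np.logical_and(p, t).sum() / union) if union else 1.0

def segment_layer(gray):
    """Four of the compared segmentations on one layer image (illustrative parameters)."""
    _, simple = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    adaptive = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                     cv2.THRESH_BINARY, 31, 5)
    gx = cv2.convertScaleAbs(cv2.Sobel(gray, cv2.CV_16S, 1, 0))
    gy = cv2.convertScaleAbs(cv2.Sobel(gray, cv2.CV_16S, 0, 1))
    _, sobel = cv2.threshold(cv2.addWeighted(gx, 0.5, gy, 0.5, 0), 50, 255, cv2.THRESH_BINARY)
    canny = cv2.Canny(gray, 50, 150)
    return {"simple": simple, "adaptive": adaptive, "sobel": sobel, "canny": canny}

# scores = {name: jaccard(mask, ground_truth) for name, mask in segment_layer(layer).items()}
```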

14 pages, 3336 KiB  
Article
Dazzling Evaluation of the Impact of a High-Repetition-Rate CO2 Pulsed Laser on Infrared Imaging Systems
by Hanyu Zheng, Yunzhe Wang, Yang Liu, Tao Sun and Junfeng Shao
Sensors 2024, 24(6), 1827; https://doi.org/10.3390/s24061827 - 12 Mar 2024
Viewed by 1706
Abstract
This article utilizes the Canny edge extraction algorithm based on contour curvature and the cross-correlation template matching algorithm to extensively study the impact of a high-repetition-rate CO2 pulsed laser on the target extraction and tracking performance of an infrared imaging detector. It establishes a quantified dazzling pattern for lasers on infrared imaging systems. By conducting laser dazzling and damage experiments, a detailed analysis of the normalized correlation between the target and the dazzling images is performed to quantitatively describe the laser dazzling effects. Simultaneously, an evaluation system, including target distance and laser power evaluation factors, is established to determine the dazzling level and whether the target is recognizable. The research results reveal that the laser power and target position are crucial factors affecting the detection performance of infrared imaging detector systems under laser dazzling. Different laser powers are required to successfully interfere with the recognition algorithm of the infrared imaging detector at different distances. And laser dazzling produces a considerable quantity of false edge information, which seriously affects the performance of the pattern recognition algorithm. In laser damage experiments, the detector experienced functional damage, with a quarter of the image displaying as completely black. The energy density threshold required for the functional damage of the detector is approximately 3 J/cm2. The dazzling assessment conclusions also apply to the evaluation of the damage results. Finally, the proposed evaluation formula aligns with the experimental results, objectively reflecting the actual impact of laser dazzling on the target extraction and the tracking performance of infrared imaging systems. This study provides an in-depth and accurate analysis for understanding the influence of lasers on the performance of infrared imaging detectors. Full article
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
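
The cross-correlation template matching used for target tracking can be sketched as below; the score threshold and the `track_target` helper are assumptions, and the contour-curvature Canny variant is not reproduced here.

```python
import cv2
import numpy as np

def track_target(frame_gray, template_gray, threshold=0.6):
    """Normalized cross-correlation matching; a low peak correlation indicates the dazzled target is unrecognizable."""
    response = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, peak, _, top_left = cv2.minMaxLoc(response)     # peak correlation value and its location
    h, w = template_gray.shape
    box = (top_left[0], top_left[1], w, h)
    return (box if peak >= threshold else None), peak  # None when dazzling breaks recognition
```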

21 pages, 65463 KiB  
Article
Integrated Framework for Unsupervised Building Segmentation with Segment Anything Model-Based Pseudo-Labeling and Weakly Supervised Learning
by Jiyong Kim and Yongil Kim
Remote Sens. 2024, 16(3), 526; https://doi.org/10.3390/rs16030526 - 30 Jan 2024
Cited by 3 | Viewed by 3860
Abstract
The Segment Anything Model (SAM) has had a profound impact on deep learning applications in remote sensing. SAM, which serves as a prompt-based foundation model for segmentation, exhibits a remarkable capability to “segment anything,” including building objects on satellite or airborne images. To facilitate building segmentation without inducing supplementary prompts or labels, we applied a sequential approach of generating pseudo-labels and incorporating an edge-driven model. We first segmented the entire scene by SAM and masked out unwanted objects to generate pseudo-labels. Subsequently, we employed an edge-driven model designed to enhance the pseudo-label by using edge information to reconstruct the imperfect building features. Our model simultaneously utilizes spectral features from SAM-oriented building pseudo-labels and edge features from resultant images from the Canny edge detector and, thus, when combined with conditional random fields (CRFs), shows capability to extract and learn building features from imperfect pseudo-labels. By integrating the SAM-based pseudo-label with our edge-driven model, we establish an unsupervised framework for building segmentation that operates without explicit labels. Our model excels in extracting buildings compared with other state-of-the-art unsupervised segmentation models and even outperforms supervised models when trained in a fully supervised manner. This achievement demonstrates the potential of our model to address the lack of datasets in various remote sensing domains for building segmentation. Full article
(This article belongs to the Section AI Remote Sensing)
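
A simplified view of how spectral, edge, and pseudo-label information could be combined as input to the edge-driven model; the actual network and CRF post-processing are not shown, and the channel layout is an assumption.

```python
import cv2
import numpy as np

def pseudo_label_with_edges(image_bgr, sam_building_mask):
    """Stack intensity, Canny edge features, and the SAM-derived building pseudo-label as a 3-channel input."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                   # edge features used to repair imperfect pseudo-labels
    pseudo = (sam_building_mask > 0).astype(np.uint8) * 255
    return np.dstack([gray, edges, pseudo])            # H x W x 3: spectral, edge, pseudo-label channels
```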

24 pages, 34825 KiB  
Article
Self-Supervised Transformers for Unsupervised SAR Complex Interference Detection Using Canny Edge Detector
by Yugang Feng, Bing Han, Xiaochen Wang, Jiayuan Shen, Xin Guan and Hao Ding
Remote Sens. 2024, 16(2), 306; https://doi.org/10.3390/rs16020306 - 11 Jan 2024
Cited by 9 | Viewed by 2017
Abstract
As the electromagnetic environment becomes increasingly complex, a synthetic aperture radar (SAR) system with wideband active transmission and reception is vulnerable to interference from devices at the same frequency. SAR interference detection using the transform domain has become a research hotspot in recent years. However, existing transform domain interference detection methods exhibit unsatisfactory performance in complex interference environments. Moreover, most of them rely on label information, while existing publicly available interference datasets are limited. To solve these problems, this paper proposes an SAR unsupervised interference detection model that combines Canny edge detection with vision transformer (CEVIT). Using a time–frequency spectrogram as input, CEVIT realizes interference detection in complex interference environments with multi-interference and multiple types of interference by means of a feature extraction module and a detection head module. To validate the performance of the proposed model, experiments are conducted on airborne SAR interference simulation data and Sentinel-1 real interference data. The experimental results show that, compared with the other object detection models, CEVIT has the best interference detection performance in a complex interference environment, and the key evaluation indexes (e.g., Recall and F1-score) are improved by nearly 20%. The detection results on the real interfered echo data have a Recall that reaches 0.8722 and an F1-score that reaches 0.9115, which are much better than those of the compared methods, and the results also indicate that the proposed model achieves good detection performance with a fast detection speed in complex interference environments, which has certain practical application value in the interference detection problem of the SAR system. Full article
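
A minimal sketch of the time-frequency front end, assuming real-valued echo samples and illustrative STFT and Canny parameters; the transformer-based detection head (CEVIT itself) is not shown.

```python
import cv2
import numpy as np
from scipy.signal import spectrogram

def interference_edge_map(echo, fs):
    """Time-frequency spectrogram of an echo, with Canny edges highlighting structured interference ridges."""
    f, t, sxx = spectrogram(echo, fs=fs, nperseg=256, noverlap=128)
    log_mag = 10.0 * np.log10(sxx + 1e-12)             # dB-scale magnitude
    norm = cv2.normalize(log_mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.Canny(norm, 60, 180)                    # edges of narrowband/chirped interference structures
```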

12 pages, 5459 KiB  
Article
Integrating Computer Vision and CAD for Precise Dimension Extraction and 3D Solid Model Regeneration for Enhanced Quality Assurance
by Binayak Bhandari and Prakash Manandhar
Machines 2023, 11(12), 1083; https://doi.org/10.3390/machines11121083 - 12 Dec 2023
Cited by 4 | Viewed by 3209
Abstract
This paper focuses on the development of an integrated system that can rapidly and accurately extract the geometrical dimensions of a physical object assisted by a robotic hand and generate a 3D model of an object in a popular commercial Computer-Aided Design (CAD) software using computer vision. Two sets of experiments were performed: one with a simple cubical object and the other with a more complex geometry that needed photogrammetry to redraw it in the CAD system. For the accurate positioning of the object, a robotic hand was used. An Internet of Things (IoT) based camera unit was used for capturing the image and wirelessly transmitting it over the network. Computer vision algorithms such as GrabCut, Canny edge detector, and morphological operations were used for extracting border points of the input. The coordinates of the vertices of the solids were then transferred to the Computer-Aided Design (CAD) software via a macro to clean and generate the border curve. Finally, a 3D solid model is generated by linear extrusion based on the curve generated in CATIA. The results showed excellent regeneration of an object. This research makes two significant contributions. Firstly, it introduces an integrated system designed to achieve precise dimension extraction from solid objects. Secondly, it presents a method for regenerating intricate 3D solids with consistent cross-sections. The proposed system holds promise for a wide range of applications, including automatic 3D object reconstruction and quality assurance of 3D-printed objects, addressing potential defects arising from factors such as shrinkage and calibration, all with minimal user intervention. Full article
(This article belongs to the Special Issue Smart Processes for Machines, Maintenance and Manufacturing Processes)
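
The GrabCut-plus-Canny border extraction can be sketched as below, assuming OpenCV 4, a user-supplied bounding rectangle for GrabCut initialization, and illustrative thresholds; the CAD macro and robotic positioning are outside the scope of this snippet.

```python
import cv2
import numpy as np

def object_border_points(image_bgr, rect):
    """Isolate the object with GrabCut, then use Canny and contour extraction to get border points."""
    mask = np.zeros(image_bgr.shape[:2], np.uint8)
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.bitwise_and(gray, gray, mask=fg), 50, 150)
    edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)       # outer boundary of the measured object
    return largest.reshape(-1, 2)                      # (x, y) border points for downstream CAD regeneration
```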

19 pages, 1864 KiB  
Article
COVID-19 Detection from Chest X-ray Images Based on Deep Learning Techniques
by Shubham Mathesul, Debabrata Swain, Santosh Kumar Satapathy, Ayush Rambhad, Biswaranjan Acharya, Vassilis C. Gerogiannis and Andreas Kanavos
Algorithms 2023, 16(10), 494; https://doi.org/10.3390/a16100494 - 23 Oct 2023
Cited by 15 | Viewed by 4499
Abstract
The COVID-19 pandemic has posed significant challenges in accurately diagnosing the disease, as severe cases may present symptoms similar to pneumonia. Real-Time Reverse Transcriptase Polymerase Chain Reaction (RT-PCR) is the conventional diagnostic technique; however, it has limitations in terms of time-consuming laboratory procedures and kit availability. Radiological chest images, such as X-rays and Computed Tomography (CT) scans, have been essential in aiding the diagnosis process. In this research paper, we propose a deep learning (DL) approach based on Convolutional Neural Networks (CNNs) to enhance the detection of COVID-19 and its variants from chest X-ray images. Building upon the existing research in SARS and COVID-19 identification using AI and machine learning techniques, our DL model aims to extract the most significant features from the X-ray scans of affected individuals. By employing an explanatory CNN-based technique, we achieved a promising accuracy of up to 97% in detecting COVID-19 cases, which can assist physicians in effectively screening and identifying probable COVID-19 patients. This study highlights the potential of DL in medical imaging, specifically in detecting COVID-19 from radiological images. The improved accuracy of our model demonstrates its efficacy in aiding healthcare professionals and mitigating the spread of the disease. Full article
(This article belongs to the Special Issue Artificial Intelligence for Medical Imaging)
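
The abstract does not disclose the network architecture; as a purely illustrative placeholder, a small PyTorch CNN classifier for single-channel chest X-rays might look like this (layer sizes and the `ChestXrayCNN` name are assumptions, not the paper's model).

```python
import torch
import torch.nn as nn

class ChestXrayCNN(nn.Module):
    """Small CNN classifier for chest X-rays (illustrative architecture only)."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):                              # x: (N, 1, H, W) normalized X-ray batch
        return self.classifier(self.features(x).flatten(1))

# logits = ChestXrayCNN()(torch.randn(4, 1, 224, 224))
```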

16 pages, 33690 KiB  
Article
MSCF: Multi-Scale Canny Filter to Recognize Cells in Microscopic Images
by Almoutaz Mbaidin, Eva Cernadas, Zakaria A. Al-Tarawneh, Manuel Fernández-Delgado, Rosario Domínguez-Petit, Sonia Rábade-Uberos and Ahmad Hassanat
Sustainability 2023, 15(18), 13693; https://doi.org/10.3390/su151813693 - 13 Sep 2023
Cited by 4 | Viewed by 1768
Abstract
Fish fecundity is one of the most relevant parameters for the estimation of the reproductive potential of fish stocks, used to assess the stock status to guarantee sustainable fisheries management. Fecundity is the number of matured eggs that each female fish can spawn each year. The stereological method is the most accurate technique to estimate fecundity using histological images of fish ovaries, in which matured oocytes must be measured and counted. A new segmentation technique, named the multi-scale Canny filter (MSCF), is proposed to recognize the boundaries of cells (oocytes), based on the Canny edge detector. Our results show the superior performance of MSCF on five fish species compared to five other state-of-the-art segmentation methods. It provides the highest F1 score in four out of five fish species, with values between 70% and 80%, and the highest percentage of correctly recognized cells, between 52% and 64%. This type of research aids in the promotion of sustainable fisheries management and conservation efforts, decreases research’s environmental impact and gives important insights into the health of fish populations and marine ecosystems. Full article
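
A simplified reading of the multi-scale idea: a Python/OpenCV sketch that combines Canny maps computed after Gaussian smoothing at several scales. The paper's actual MSCF combination rule is not given in the abstract, so the union operation and parameters here are assumptions.

```python
import cv2
import numpy as np

def multi_scale_canny(gray, sigmas=(1.0, 2.0, 4.0)):
    """Union of Canny edge maps computed at several Gaussian scales."""
    combined = np.zeros_like(gray)
    for sigma in sigmas:
        blurred = cv2.GaussianBlur(gray, (0, 0), sigma)            # kernel size derived from sigma
        combined = cv2.bitwise_or(combined, cv2.Canny(blurred, 40, 120))
    return combined                                                # coarse scales close cell boundaries, fine scales keep detail
```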

22 pages, 24539 KiB  
Article
Enhancing Lane-Tracking Performance in Challenging Driving Environments through Parameter Optimization and a Restriction System
by Seung-Hwan Lee, Hyuk-Ju Kwon and Sung-Hak Lee
Appl. Sci. 2023, 13(16), 9313; https://doi.org/10.3390/app13169313 - 16 Aug 2023
Cited by 4 | Viewed by 1490
Abstract
The autonomous driving market has experienced rapid growth in recent times. From systems that assist drivers in keeping within their lanes to systems that recognize obstacles using sensors and then handle those obstacles, there are various types of systems in autonomous driving. The sensors used in autonomous driving systems include infrared detection devices, lidar, ultrasonic sensors, and cameras. Among these sensors, cameras are widely used. This paper proposes a method for stable lane detection from images captured by camera sensors in diverse environments. First, the system utilizes a bilateral filter and multiscale retinex (MSR) with experimentally optimized parameters to suppress image noise while increasing contrast. Subsequently, the Canny edge detector is employed to detect the edges of the lane candidates, followed by utilizing the Hough transform to make straight lines from the lane candidate images. Then, using a proposed restriction system, only the two lines that the current vehicle is actively driving within are detected from the candidate lines. Furthermore, the lane position information from the previous frame is combined with the lane information from the current frame to correct the current lane position. The Kalman filter is then used to predict the lane position in the next frame. The proposed lane-detection method was evaluated in various scenarios, including rainy conditions, low-light nighttime environments with minimal street lighting, scenarios with interfering guidelines within the lane area, and scenarios with significant noise caused by water droplets on the camera. Both qualitative and quantitative experimental results demonstrate that the lane-detection method presented in this paper effectively suppresses noise and accurately detects the two active lanes during driving. Full article
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
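
The preprocessing-to-Hough part of the pipeline can be sketched as below, assuming illustrative bilateral-filter, Canny, and Hough parameters and a simple triangular region of interest; the MSR step, restriction system, and Kalman tracking are omitted.

```python
import cv2
import numpy as np

def lane_candidates(frame_bgr):
    """Noise suppression, Canny edges, and Hough line candidates for the active lane lines."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    smooth = cv2.bilateralFilter(gray, d=9, sigmaColor=75, sigmaSpace=75)  # edge-preserving noise suppression
    edges = cv2.Canny(smooth, 50, 150)
    h, w = edges.shape
    roi = np.zeros_like(edges)                          # restrict the search to the road region ahead
    cv2.fillPoly(roi, [np.array([[0, h], [w, h], [w // 2, h // 2]], np.int32)], 255)
    lines = cv2.HoughLinesP(cv2.bitwise_and(edges, roi), 1, np.pi / 180, 50,
                            minLineLength=40, maxLineGap=100)
    return [] if lines is None else lines[:, 0]         # (x1, y1, x2, y2) candidates for a restriction step
```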

16 pages, 7917 KiB  
Article
Quantification of Agricultural Terrace Degradation in the Loess Plateau Using UAV-Based Digital Elevation Model and Imagery
by Xuan Fang, Zhujun Gu and Ying Zhu
Sustainability 2023, 15(14), 10800; https://doi.org/10.3390/su151410800 - 10 Jul 2023
Cited by 4 | Viewed by 1591
Abstract
Agricultural terraces are important artificial landforms on the Loess Plateau of China and have many ecosystem services (e.g., agricultural production, soil and water conservation). Due to the loss of rural labor, a large number of agricultural terraces have been abandoned, and the degradation of terraces, caused by rainstorms and a lack of management, threatens the sustainability of ecological services on terraces. Our previous study found its geomorphological evidence (sinkholes and collapse). However, no quantitative indicators of terrace degradation have been identified from the perspective of microtopography change. A framework for quantifying terrace degradation was established in this study based on unmanned aerial vehicle photogrammetry and digital topographic analysis. The Pujiawa terraces in the Loess Plateau were selected as study areas. Firstly, the terrace ridges were extracted by a Canny edge detector based on high-resolution digital elevation model (DEM) data. The adaptive method was used to calculate the low and high thresholds automatically. This method ensures the low complexity and high edge continuity and accuracy of the Canny edge detector, which is superior to manual setting and the maximum inter-class variance (Otsu) method. Secondly, the DEMs of the terrace slope before degradation were rebuilt through the terrain analysis method based on the extracted terrace ridges and current DEM data. Finally, the degradation of terraces was quantified by the index series in the line, surface and volume aspects, which are the damage degrees of the terrace ridges, terrace surface and whole terrace. The damage degrees of the terrace ridges were calculated according to the extracted and generalised terrace ridges. The damage degrees of the terrace surface and whole terrace were calculated based on the differences of DEMs before and after degradation. The proposed indices and quantitative methods for evaluating agricultural terrace degradation reflect the erosion status of the terraces in topography. This work provides data and references for loess terrace landscape protection and its sustainable management. Full article
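
For comparison with the adaptive thresholding described above, here is a minimal sketch of the Otsu-based baseline applied to a normalized DEM; the 0.5 low/high ratio and the `terrace_ridge_edges` helper are assumptions, not the paper's settings.

```python
import cv2
import numpy as np

def terrace_ridge_edges(dem):
    """Canny ridge extraction on a DEM with hysteresis thresholds set from Otsu's threshold (the compared baseline)."""
    norm = cv2.normalize(dem.astype(np.float32), None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    high, _ = cv2.threshold(norm, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # Otsu inter-class variance threshold
    edges = cv2.Canny(cv2.GaussianBlur(norm, (5, 5), 0), 0.5 * high, high)       # low threshold as half of high (assumption)
    return edges
```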
