Search Results (12)

Search Parameters:
Keywords = Sobel edge detector

20 pages, 7090 KiB  
Article
An Infrared and Visible Image Alignment Method Based on Gradient Distribution Properties and Scale-Invariant Features in Electric Power Scenes
by Lin Zhu, Yuxing Mao, Chunxu Chen and Lanjia Ning
J. Imaging 2025, 11(1), 23; https://doi.org/10.3390/jimaging11010023 - 13 Jan 2025
Viewed by 1113
Abstract
In intelligent grid inspection systems, automatic registration of infrared and visible light images of power scenes is a crucial research technology. Since there are obvious differences in key attributes between visible and infrared images, direct alignment often fails to achieve the expected results. To overcome the difficulty of aligning infrared and visible light images, an image alignment method is proposed in this paper. First, we use the Sobel operator to extract the edge information of the image pair. Second, the feature points in the edges are recognised by a curvature scale space (CSS) corner detector. Third, the Histogram of Oriented Gradients (HOG) is extracted as the gradient distribution characteristic of the feature points, which is normalised with the Scale-Invariant Feature Transform (SIFT) algorithm to form feature descriptors. Finally, initial matching and accurate matching are achieved by an improved fast approximate nearest-neighbour matching method and adaptive thresholding, respectively. Experiments show that this method can robustly match the feature points of image pairs under rotation, scale, and viewpoint differences, and achieves excellent matching results.
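The first step of the pipeline above, Sobel edge extraction, can be sketched in a few lines. This is a minimal illustration of the operator itself, not the paper's implementation; the correlation helper and the toy step-edge image are our own.

```python
import numpy as np

# 3x3 Sobel kernels: KX responds to horizontal gradients,
# KY (its transpose) to vertical gradients.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = KX.T

def correlate2d(img, kernel):
    """Valid-mode 2D correlation with a 3x3 kernel (no padding)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

def sobel_magnitude(img):
    gx = correlate2d(img, KX)
    gy = correlate2d(img, KY)
    return np.hypot(gx, gy)

# A vertical step edge: left half dark, right half bright.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
mag = sobel_magnitude(img)  # strong response only along the step
```

The gradient magnitude is large only in the two columns straddling the step, which is exactly the edge map a corner detector would then consume.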
(This article belongs to the Topic Computer Vision and Image Processing, 2nd Edition)

19 pages, 4245 KiB  
Article
Lightweight UAV Small Target Detection and Perception Based on Improved YOLOv8-E
by Yongjuan Zhao, Lijin Wang, Guannan Lei, Chaozhe Guo and Qiang Ma
Drones 2024, 8(11), 681; https://doi.org/10.3390/drones8110681 - 19 Nov 2024
Cited by 4 | Viewed by 2627
Abstract
Traditional unmanned aerial vehicle (UAV) detection methods struggle with multi-scale variations during flight, complex backgrounds, and low accuracy, whereas existing deep learning detection methods have high accuracy but high dependence on equipment, making it difficult to detect small UAV targets efficiently. To address the above challenges, this paper proposes an improved lightweight high-precision model, YOLOv8-E (Enhanced YOLOv8), for the fast and accurate detection and identification of small UAVs in complex environments. First, a Sobel filter is introduced to enhance the C2f module to form the C2f-ESCFFM (Edge-Sensitive Cross-Stage Feature Fusion Module) module, which achieves higher computational efficiency and feature representation capacity while preserving detection accuracy as much as possible by fusing the SobelConv branch for edge extraction and the convolution branch to extract spatial information. Second, the neck network is based on the HSFPN (High-level Screening-feature Pyramid Network) architecture, and the CAA (Context Anchor Attention) mechanism is introduced to enhance the semantic parsing of low-level features to form a new CAHS-FPN (Context-Augmented Hierarchical Scale Feature Pyramid Network) network, enabling the fusion of deep and shallow features. This improves the feature representation capability of the model, allowing it to detect targets of different sizes efficiently. Finally, the optimized detail-enhanced convolution (DEConv) technique is introduced into the head network, forming the LSCOD (Lightweight Shared Convolutional Object Detector Head) module, enhancing the generalization ability of the model by integrating a priori information and adopting the strategy of shared convolution. This ensures that the model enhances its localization and classification performance without increasing parameters or computational costs, thus effectively improving the detection performance of small UAV targets. 
The experimental results show that, compared with the baseline model, the YOLOv8-E model achieved an mAP@0.5 (mean average precision at IoU = 0.5) improvement of 6.3%, reaching 98.4%, while the model parameter count was reduced by more than 50%. Overall, YOLOv8-E significantly reduces the demand for computational resources while ensuring high-precision detection.
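The core idea of the C2f-ESCFFM module, a fixed (non-learned) Sobel branch running alongside the ordinary feature path and fused by concatenation, can be sketched as below. All names are illustrative and the paper's actual module certainly differs in detail; this only demonstrates the edge-branch-plus-concat pattern with plain numpy.

```python
import numpy as np

# Fixed Sobel kernel used as a non-learned edge-extraction branch.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

def conv3x3_same(feat, kernel):
    """3x3 correlation with zero padding ('same' output size)."""
    h, w = feat.shape
    padded = np.pad(feat, 1)
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

def escffm_sketch(features):
    """features: (C, H, W). Returns (2C, H, W): the raw channels
    concatenated with their Sobel edge responses."""
    edge = np.stack([conv3x3_same(c, SOBEL_X) for c in features])
    return np.concatenate([features, edge], axis=0)

x = np.random.rand(4, 8, 8)
y = escffm_sketch(x)
```

Because the Sobel weights are fixed, the edge branch adds feature channels without adding trainable parameters, which is consistent with the lightweight goal the abstract describes.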

23 pages, 12210 KiB  
Article
Mixed Reality-Based Concrete Crack Detection and Skeleton Extraction Using Deep Learning and Image Processing
by Davood Shojaei, Peyman Jafary and Zezheng Zhang
Electronics 2024, 13(22), 4426; https://doi.org/10.3390/electronics13224426 - 12 Nov 2024
Cited by 2 | Viewed by 2334
Abstract
Advancements in image processing and deep learning offer considerable opportunities for automated defect assessment in civil structures. However, these systems cannot work interactively with human inspectors. Mixed reality (MR) can be adopted to address this by involving inspectors in various stages of the assessment process. This paper integrates You Only Look Once (YOLO) v5n and YOLO v5m with the Canny algorithm for real-time concrete crack detection and skeleton extraction on a Microsoft HoloLens 2 MR device. YOLO v5n demonstrates superior mAP@0.5 (mean average precision) and speed, while YOLO v5m achieves the highest mAP@0.5:0.95 among the YOLO v5 structures. The Canny algorithm also outperforms the Sobel and Prewitt edge detectors with the highest F1 score. The developed MR-based system could not only be employed for real-time defect assessment but also be utilized for automatic recording of the location and other specifications of cracks for further analysis and future re-inspections.
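The F1 comparison among Canny, Sobel, and Prewitt reduces to pixel-wise precision and recall over binary edge maps. A minimal sketch, using our own toy maps rather than the paper's data:

```python
import numpy as np

def f1_score(pred, truth):
    """Pixel-wise F1 between binary edge maps (1 = edge pixel)."""
    tp = np.sum((pred == 1) & (truth == 1))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Ground-truth vertical edge vs. a detector output with one false
# positive and one missed edge pixel.
truth = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]])
pred  = np.array([[0, 1, 0], [1, 1, 0], [0, 0, 0]])
score = f1_score(pred, truth)
```

Here precision and recall are both 2/3, so F1 is 2/3; ranking detectors by this score is how the Canny-vs-Sobel-vs-Prewitt comparison is typically made.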

21 pages, 17635 KiB  
Article
Evaluation of Image Segmentation Methods for In Situ Quality Assessment in Additive Manufacturing
by Tushar Saini, Panos S. Shiakolas and Christopher McMurrough
Metrology 2024, 4(4), 598-618; https://doi.org/10.3390/metrology4040037 - 1 Nov 2024
Cited by 2 | Viewed by 1892
Abstract
Additive manufacturing (AM), or 3D printing, has revolutionized the fabrication of complex parts, but assessing their quality remains a challenge. Quality assessment, especially for the interior part geometry, relies on post-print inspection techniques unsuitable for real-time in situ analysis. Vision-based approaches could be employed to capture images of any layer during fabrication, and then segmentation methods could be used to identify in-layer features in order to establish dimensional conformity and detect defects for in situ evaluation of the overall part quality. This research evaluated five image segmentation methods (simple thresholding, adaptive thresholding, Sobel edge detector, Canny edge detector, and watershed transform) on the same platform for their effectiveness in isolating and identifying features in 3D-printed layers under different contrast conditions for in situ quality assessment. The performance metrics used are accuracy, precision, recall, and the Jaccard index. The experimental set-up is based on an open-frame fused filament fabrication printer augmented with a vision system. The control system software for printing and imaging (acquisition and processing) was custom developed in Python running on a Raspberry Pi. Most of the segmentation methods reliably segmented the external geometry and high-contrast internal features. The simple thresholding, Canny edge detector, and watershed transform methods did not perform well with low-contrast parts and could not reliably segment internal features when the previous layer was visible. The adaptive thresholding and Sobel edge detector methods segmented high- and low-contrast features. However, the segmentation outputs were heavily affected by textural and image noise. 
The research identified factors affecting the performance and limitations of these segmentation methods, contributing to the broader effort of improving in situ quality assessment in AM, such as automatic dimensional analysis of internal and external features and the overall geometry.
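Of the four performance metrics named above, the Jaccard index is the least standard for newcomers; it is the intersection-over-union of the predicted and reference masks. A minimal sketch on toy masks (not the study's data):

```python
import numpy as np

def jaccard(pred, truth):
    """Jaccard index (IoU) between two binary segmentation masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union

a = np.array([[1, 1, 0], [0, 1, 0]])  # segmentation output
b = np.array([[1, 0, 0], [0, 1, 1]])  # reference mask
iou = jaccard(a, b)  # 2 shared pixels out of 4 in the union
```

Unlike accuracy, the Jaccard index ignores the (usually dominant) true-negative background pixels, which is why it is favoured for layer-feature segmentation.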

24 pages, 7970 KiB  
Article
Research on Luggage Package Extraction of X-ray Images Based on Edge Sensitive Multi-Channel Background Difference Algorithm
by Xueping Song, Shuyu Zhang, Jianming Yang and Jicun Zhang
Appl. Sci. 2023, 13(21), 11981; https://doi.org/10.3390/app132111981 - 2 Nov 2023
Cited by 2 | Viewed by 1511
Abstract
Many security detectors cannot output individual luggage package images and are not compatible with deep learning algorithms. In this paper, a luggage package extraction method for X-ray images based on ES-MBD (Edge Sensitive Multi-channel Background Difference Algorithm) is proposed, aimed at the problem that background difference binarization is insensitive to texture features and edge detection binarization is insensitive to smooth areas. In this method, X-ray luggage package images from complex original video images are used as the key target, the RGB three-channel background difference is calculated from the original X-ray image, edge detection of the grayscale map is performed using a Sobel operator optimized by local gradient enhancement, and morphological dilation is applied to the combined results to obtain the complete package target. The Suzuki algorithm is used to detect the outline of the binarized package image, match the package frame area, and determine the key target. The ES-MBD method solves the problem of information loss in traditional binarization methods, retaining the information of insensitive regions while reducing noise. In experimental comparisons, the accuracy of the ES-MBD binarization method reaches 97.3% and the recall rate reaches 96.5%, and the ES-MBD method has clear advantages in key target extraction from X-ray images.
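The multi-channel background difference at the heart of ES-MBD can be sketched as follows: a per-channel absolute difference against a background frame, with a pixel marked foreground if any channel exceeds a threshold. The threshold and toy frames are our own illustrations, not values from the paper.

```python
import numpy as np

def mbd_mask(frame, background, thresh=30):
    """Per-channel absolute background difference on an RGB image.
    A pixel is foreground if ANY channel differs by more than thresh."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return (diff > thresh).any(axis=2).astype(np.uint8)

bg = np.full((4, 4, 3), 200, dtype=np.uint8)   # uniform background
frame = bg.copy()
frame[1:3, 1:3] = (90, 200, 200)               # package darkens only R
mask = mbd_mask(frame, bg)
```

Because the channels are differenced independently, an object visible in only one channel (common with dual-energy X-ray colouring) still survives into the mask, which single-channel grayscale differencing would miss.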

16 pages, 9702 KiB  
Article
Method and Installation for Efficient Automatic Defect Inspection of Manufactured Paper Bowls
by Shaoyong Yu, Yang-Han Lee, Cheng-Wen Chen, Peng Gao, Zhigang Xu, Shunyi Chen and Cheng-Fu Yang
Photonics 2023, 10(6), 686; https://doi.org/10.3390/photonics10060686 - 14 Jun 2023
Cited by 2 | Viewed by 1744
Abstract
Various techniques were combined to optimize an optical inspection system designed to automatically inspect defects in manufactured paper bowls. A self-assembled system was utilized to capture images of defects on the bowls. The system employed an image sensor with a multi-pixel array that combined a complementary metal-oxide semiconductor (CMOS) and a photodetector. A combined ring light served as the light source, while an infrared (IR) LED matrix panel provided constant IR light to highlight the outer edges of the objects being inspected. The techniques employed in this study to enhance defect inspection on the produced paper bowls included Gaussian filtering, Sobel operators, binarization, and connected components. Captured images were processed using these techniques. Once the non-contact machine vision inspection method was complete, defects on the produced paper bowls were inspected using the system developed in this study. Three inspection methods were used: internal inspection, external inspection, and bottom inspection. All three methods were able to inspect surface features of the produced paper bowls, including dirt, burrs, holes, and uneven thickness. The results of our study showed that the average time required for machine vision inspection of each paper bowl was significantly less than that required for manual inspection. Therefore, the investigated machine vision system is an efficient method for inspecting defects in fabricated paper bowls.
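The last stage of the pipeline named above, connected components, groups the binarized defect pixels into distinct blobs so each defect can be counted and measured. A minimal 4-connected labelling via flood fill, as a generic sketch rather than the system's actual code:

```python
import numpy as np

def connected_components(binary):
    """4-connected component labelling via iterative flood fill.
    Returns (label image, number of components)."""
    labels = np.zeros_like(binary, dtype=int)
    h, w = binary.shape
    current = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and labels[i, j] == 0:
                current += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and binary[y, x] and labels[y, x] == 0:
                        labels[y, x] = current
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return labels, current

# Three separate blobs in a toy binarized defect map.
img = np.array([[1, 1, 0, 0],
                [0, 0, 0, 1],
                [0, 1, 0, 1]])
labels, n = connected_components(img)
```

Each blob's pixel count (or bounding box) can then be thresholded to distinguish, say, a pinhole from image noise.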
(This article belongs to the Special Issue Advanced Photonics Sensors, Sources, Systems and Applications)

13 pages, 5450 KiB  
Article
Deep Edge Detection Methods for the Automatic Calculation of the Breast Contour
by Nuno Freitas, Daniel Silva, Carlos Mavioso, Maria J. Cardoso and Jaime S. Cardoso
Bioengineering 2023, 10(4), 401; https://doi.org/10.3390/bioengineering10040401 - 24 Mar 2023
Cited by 3 | Viewed by 1920
Abstract
Breast cancer conservative treatment (BCCT) is a form of treatment commonly used for patients with early breast cancer. This procedure consists of removing the cancer and a small margin of surrounding tissue, while leaving the healthy tissue intact. In recent years, this procedure has become increasingly common due to identical survival rates and better cosmetic outcomes than other alternatives. Although significant research has been conducted on BCCT, there is no gold standard for evaluating the aesthetic results of the treatment. Recent works have proposed the automatic classification of cosmetic results based on breast features extracted from digital photographs. The computation of most of these features requires the representation of the breast contour, which becomes key to the aesthetic evaluation of BCCT. State-of-the-art methods use conventional image processing tools that automatically detect breast contours based on the shortest path applied to the Sobel filter result in a 2D digital photograph of the patient. However, because the Sobel filter is a general edge detector, it treats edges indistinguishably, i.e., it detects many edges that are not relevant to breast contour detection and misses weak breast contours. In this paper, we propose an improvement to this method that replaces the Sobel filter with a novel neural network solution to improve breast contour detection based on the shortest path. The proposed solution learns effective representations for the edges between the breasts and the torso wall. We obtain state-of-the-art results on a dataset that was used for developing previous models. Furthermore, we tested these models on a new dataset that contains more variable photographs and show that the new approach generalizes better, as the previously developed deep models do not perform as well when faced with a different test dataset.
The main contribution of this paper is to further improve the capabilities of models that automatically perform objective classification of BCCT aesthetic results, by improving upon the current standard technique for detecting breast contours in digital photographs. To that end, the models introduced are simple to train and test on new datasets, which makes this approach easily reproducible.
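The "shortest path applied to the Sobel filter result" baseline can be sketched as a Dijkstra search over a cost grid, where the per-pixel cost is typically some inverse function of edge strength so the path hugs strong edges. The grid, endpoints, and costs below are illustrative, not the paper's setup:

```python
import heapq
import numpy as np

def cheapest_path_cost(cost):
    """Dijkstra over a 4-connected cost grid, from the top-left cell
    to the bottom-right cell; returns the total cost of the cheapest
    path, counting both endpoints."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    dist[0, 0] = cost[0, 0]
    pq = [(float(cost[0, 0]), 0, 0)]
    while pq:
        d, y, x = heapq.heappop(pq)
        if (y, x) == (h - 1, w - 1):
            return d
        if d > dist[y, x]:
            continue  # stale queue entry
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + cost[ny, nx]
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    heapq.heappush(pq, (nd, ny, nx))
    return float(dist[-1, -1])

# Low cost = strong edge; the path follows the cheap diagonal band.
cost = np.array([[1, 9, 9],
                 [1, 1, 9],
                 [9, 1, 1]])
total = cheapest_path_cost(cost)
```

The paper's contribution is then to replace the Sobel-derived cost map with one produced by a learned edge model, while keeping this shortest-path machinery unchanged.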
(This article belongs to the Section Biosignal Processing)

15 pages, 27758 KiB  
Article
Fractional-Order Colour Image Processing
by Manuel Henriques, Duarte Valério, Paulo Gordo and Rui Melicio
Mathematics 2021, 9(5), 457; https://doi.org/10.3390/math9050457 - 24 Feb 2021
Cited by 20 | Viewed by 3405
Abstract
Many image processing algorithms make use of derivatives. In such cases, fractional derivatives allow an extra degree of freedom, which can be used to obtain better results in applications such as edge detection. The published literature concentrates on grey-scale images; in this paper, six fractional edge detectors for colour images are implemented, and their performance is illustrated. The algorithms are: Canny, Sobel, Roberts, Laplacian of Gaussian, CRONE, and fractional derivative.
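A common way to discretise a fractional derivative of order alpha is the Grünwald–Letnikov expansion, whose weights generalise the finite-difference stencil; this sketch shows only the weight computation, not any of the paper's six detectors.

```python
import numpy as np

def gl_coeffs(alpha, n):
    """First n Grünwald–Letnikov weights (-1)^k * C(alpha, k).
    Convolving a signal with these weights approximates its
    fractional derivative of order alpha."""
    c = [1.0]
    for k in range(1, n):
        # Recurrence: c_k = c_{k-1} * (k - 1 - alpha) / k
        c.append(c[-1] * (k - 1 - alpha) / k)
    return np.array(c)

w = gl_coeffs(1.0, 5)   # alpha = 1 recovers the first difference
```

For alpha = 1 the weights collapse to [1, -1, 0, ...], i.e. the ordinary discrete derivative, while non-integer alpha yields a long-tailed stencil, which is the extra degree of freedom the abstract refers to.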
(This article belongs to the Special Issue Fractional Calculus and Nonlinear Systems)

14 pages, 6386 KiB  
Article
Image Edge Detector with Gabor Type Filters Using a Spiking Neural Network of Biologically Inspired Neurons
by Krishnamurthy V. Vemuru
Algorithms 2020, 13(7), 165; https://doi.org/10.3390/a13070165 - 9 Jul 2020
Cited by 14 | Viewed by 5188
Abstract
We report the design of a Spiking Neural Network (SNN) edge detector with biologically inspired neurons that has a conceptual similarity with both Hodgkin–Huxley (HH) model neurons and Leaky Integrate-and-Fire (LIF) neurons. The membrane potential, which determines the occurrence or absence of spike events at each time step, is computed using the analytical solution to a simplified version of the HH neuron model. We find that the SNN-based edge detector detects more edge pixels in images than a Sobel edge detector. We designed a pipeline for image classification with a low-exposure frame simulation layer, SNN edge detection layers as pre-processing layers, and a Convolutional Neural Network (CNN) as the classification module. We tested this pipeline on the Digits dataset, which is available in MATLAB. We find that the SNN-based edge detection layer increases image classification accuracy at lower exposure times, that is, for 1 < t < T/4, where t is the number of milliseconds in a simulated exposure frame and T is the total exposure time, with reference to a Sobel or Canny edge detection layer in the pipeline. These results pave the way for developing novel cognitive neuromorphic computing architectures for millisecond-timescale detection and object classification applications using event or spike cameras.
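The simpler of the two neuron models mentioned, the LIF neuron, captures the spike-generation mechanism in a few lines: the membrane potential leaks toward zero, integrates input current, and emits a spike (then resets) on crossing a threshold. The parameters below are illustrative, not the paper's:

```python
def lif_spikes(inputs, tau=10.0, v_th=1.0, dt=1.0):
    """Leaky integrate-and-fire neuron: v decays with time constant
    tau, integrates the input current, and spikes + resets when v
    crosses the threshold v_th."""
    v = 0.0
    spikes = []
    for i in inputs:
        v += dt * (-v / tau + i)   # Euler step of dv/dt = -v/tau + I
        if v >= v_th:
            spikes.append(1)
            v = 0.0                # reset after a spike
        else:
            spikes.append(0)
    return spikes

# A constant sub-threshold current produces a regular spike train.
train = lif_spikes([0.4] * 10)
```

With this input the neuron charges for two steps and fires on the third, so the spike rate encodes the input intensity, the property an SNN edge layer exploits to signal strong gradients earlier under short exposures.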
(This article belongs to the Special Issue Bio-Inspired Algorithms for Image Processing)

16 pages, 4947 KiB  
Article
Benchmarking Image Processing Algorithms for Unmanned Aerial System-Assisted Crack Detection in Concrete Structures
by Sattar Dorafshan, Robert J. Thomas and Marc Maguire
Infrastructures 2019, 4(2), 19; https://doi.org/10.3390/infrastructures4020019 - 30 Apr 2019
Cited by 55 | Viewed by 8498
Abstract
This paper summarizes the results of traditional image processing algorithms for the detection of defects in concrete using images taken by Unmanned Aerial Systems (UASs). Such algorithms are useful for improving the accuracy of crack detection during autonomous inspection of bridges and other structures, and they had yet to be compared and evaluated on a dataset of concrete images taken by UAS. The authors created a generic image processing algorithm for crack detection, which included the major steps of filter design, edge detection, image enhancement, and segmentation, designed to uniformly compare different edge detectors. Edge detection was carried out by six filters in the spatial (Roberts, Prewitt, Sobel, and Laplacian of Gaussian) and frequency (Butterworth and Gaussian) domains. These algorithms were applied to fifty images each of defective and sound concrete. The performance of the six filters was compared in terms of accuracy, precision, minimum detectable crack width, computational time, and noise-to-signal ratio. In general, frequency domain techniques were slower than spatial domain methods because of the computational intensity of the Fourier and inverse Fourier transformations used to move between the spatial and frequency domains. Frequency domain methods also produced noisier images than spatial domain methods. Crack detection in the spatial domain using the Laplacian of Gaussian filter proved to be the fastest, most accurate, and most precise method, and it resulted in the finest detectable crack width. The Laplacian of Gaussian filter in the spatial domain is recommended for future applications of real-time crack detection using UAS.
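The recommended Laplacian of Gaussian filter is just a sampled LoG function normalised to zero mean (so flat regions give no response). A minimal construction, with kernel size and sigma chosen for illustration:

```python
import numpy as np

def log_kernel(size=7, sigma=1.0):
    """Sample the Laplacian-of-Gaussian on a size x size grid and
    shift it to zero mean so flat regions produce no response."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    s2 = sigma ** 2
    k = ((x**2 + y**2 - 2 * s2) / s2**2) * np.exp(-(x**2 + y**2) / (2 * s2))
    return k - k.mean()

k = log_kernel()  # negative centre, positive surround
```

Convolving with this single kernel smooths and differentiates in one pass, which is one reason it was the fastest of the six filters compared.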
(This article belongs to the Special Issue Intelligent Infrastructures)

16 pages, 3844 KiB  
Article
An Image Recognition-Based Approach to Actin Cytoskeleton Quantification
by Yi Liu, Keyvan Mollaeian and Juan Ren
Electronics 2018, 7(12), 443; https://doi.org/10.3390/electronics7120443 - 17 Dec 2018
Cited by 15 | Viewed by 6534
Abstract
Quantification of the actin cytoskeleton is of prime importance to unveil the cellular force sensing and transduction mechanism. Although fluorescence imaging provides a convenient tool for observing the morphology of the actin cytoskeleton, due to the lack of approaches to accurate actin cytoskeleton quantification, the dynamics of mechanotransduction are still poorly understood. Currently, existing image-based actin cytoskeleton analysis tools are either incapable of quantifying both the orientation and the quantity of the actin cytoskeleton simultaneously, or the quantified results are subject to analysis artifacts. In this study, we propose an image recognition-based actin cytoskeleton quantification (IRAQ) approach, which quantifies both the actin cytoskeleton orientation and quantity by using edge, line, and brightness detection algorithms. The actin cytoskeleton is quantified through three parameters: the partial actin-cytoskeletal deviation (PAD), the total actin-cytoskeletal deviation (TAD), and the average actin-cytoskeletal intensity (AAI). First, Canny and Sobel edge detectors are applied to skeletonize the actin cytoskeleton images; then PAD and TAD are quantified using the line directions detected by the Hough transform, and AAI is calculated through the summational brightness over the detected cell area. To verify the quantification accuracy, the proposed IRAQ was applied to six artificially generated actin cytoskeleton meshwork models. The average error for both the quantified PAD and TAD was less than 1.22°. Then, IRAQ was implemented to quantify the actin cytoskeleton of NIH/3T3 cells treated with an F-actin inhibitor (latrunculin B). The quantification results suggest that the local and total actin-cytoskeletal organization became more disordered with increasing latrunculin B dosage, and the quantity of the actin cytoskeleton showed a monotonically decreasing relation with latrunculin B dosage.
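The Hough-transform step that recovers fibre directions can be sketched as a voting procedure: for each candidate angle, project the edge points onto the line normal and count how many share the same offset. This is a toy direction estimator on a synthetic point set, not the IRAQ implementation:

```python
import numpy as np

def dominant_line_angle(points, n_theta=180):
    """Minimal Hough-style vote over line-normal angles in [0, pi).
    Returns (normal angle of the best line, its vote count)."""
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    best_theta, best_votes = 0.0, -1
    for t in thetas:
        # rho = x cos(theta) + y sin(theta); collinear points agree.
        rho = np.round(points[:, 0] * np.cos(t) + points[:, 1] * np.sin(t), 4)
        _, inv = np.unique(rho, return_inverse=True)
        votes = np.bincount(inv).max()
        if votes > best_votes:
            best_theta, best_votes = t, votes
    return best_theta, best_votes

# Ten points on the horizontal line y = 2.
pts = np.array([[x, 2] for x in range(10)], dtype=float)
theta, votes = dominant_line_angle(pts)
```

For a horizontal fibre the winning normal angle is pi/2 with all ten points voting together; the spread of winning angles across a cell is what PAD and TAD summarise.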
(This article belongs to the Section Bioelectronics)

12 pages, 735 KiB  
Article
Direction Estimation Model for Gaze Controlled Systems
by Anjana Sharma and Pawanesh Abrol
J. Eye Mov. Res. 2016, 9(6), 1-12; https://doi.org/10.16910/jemr.9.6.5 - 30 Sep 2016
Cited by 4 | Viewed by 79
Abstract
Gaze detection requires estimation of the position of, and the relation between, the user's pupil and the glint. In this research paper, a Gaze Direction Estimation (GDE) model, a feature-based shape method, is proposed for the comparative analysis of two standard edge detectors, Canny and Sobel, for estimating the position of the glint coordinates and, subsequently, the gaze direction, based on different human eye image datasets. The results indicate a fairly good percentage of cases in which the correct glint coordinates and correct gaze direction quadrants were estimated by the Canny edge detector, which performs better than the Sobel operator in most cases. These results can further be used to improve the accuracy and performance of different eye-gaze-based control systems.
