Special Issue "Convolutional Neural Networks and Vision Applications"

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 31 March 2021.

Special Issue Editor

Prof. Dr. D. J. Lee
Guest Editor
450 Engineering Building, Department of Electrical and Computer Engineering, Brigham Young University, Provo, UT 84602-4099, USA
Interests: object recognition, hardware-friendly computer vision algorithms, real-time robotic vision applications

Special Issue Information

Dear Colleagues,

Processing speed is critical for visual inspection automation and mobile visual computing applications. Many powerful and sophisticated computer vision algorithms generate accurate results but require high computational power or resources and are not entirely suitable for real-time vision applications. On the other hand, there are vision algorithms and convolutional neural networks that perform at camera frame rates with moderately reduced accuracy, which is arguably more applicable to real-time vision applications. This Special Issue invites research related to the design, optimization, and implementation of machine learning-based vision algorithms or convolutional neural networks that are suitable for real-time vision applications.

General topics covered in this special issue include, but are not limited to:

  • Optimization of software-based vision algorithms
  • CNN architecture optimizations for real-time performance
  • CNN acceleration through approximate computing
  • CNN applications that require real-time performance
  • Tradeoff analysis between speed and accuracy in CNNs
  • GPU-based implementations for real-time CNN performance
  • FPGA-based implementations for real-time CNN performance
  • Embedded vision systems for applications that require real-time performance
  • Machine vision applications that require real-time performance

Prof. Dr. D. J. Lee
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1500 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (5 papers)


Research

Open Access Article
Defect Detection in Printed Circuit Boards Using You-Only-Look-Once Convolutional Neural Networks
Electronics 2020, 9(9), 1547; https://doi.org/10.3390/electronics9091547 - 22 Sep 2020
Abstract
In this study, a deep learning algorithm based on the you-only-look-once (YOLO) approach is proposed for the quality inspection of printed circuit boards (PCBs). The high accuracy and efficiency of deep learning algorithms have resulted in their increased adoption in many fields. Similarly, accurate detection of defects in PCBs using deep learning algorithms, such as convolutional neural networks (CNNs), has garnered considerable attention. In the proposed method, highly skilled quality inspection engineers first use an interface to record and label defective PCBs. The data are then used to train a YOLO/CNN model to detect defects in PCBs. In this study, 11,000 images and a network of 24 convolutional layers and 2 fully connected layers were used. The proposed model achieved a defect detection accuracy of 98.79% on PCBs with a batch size of 32.
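
As an illustration of the kind of network the abstract describes, below is a minimal, heavily abridged sketch of a YOLO-v1-style detector for PCB inspection, assuming PyTorch. The paper's network has 24 convolutional layers and 2 fully connected layers; only a shortened convolutional stack is shown here, and the grid size, boxes per cell, number of defect classes, and layer widths are illustrative assumptions rather than values taken from the paper.

# Hedged sketch: a YOLO-v1-style detector skeleton for PCB defect detection.
# The paper reports 24 convolutional layers and 2 fully connected layers;
# only an abridged stack is shown, and the grid size S, boxes per cell B,
# defect class count C, and layer widths are illustrative assumptions.
import torch
import torch.nn as nn

class TinyYoloPCB(nn.Module):
    def __init__(self, S=7, B=2, C=6):            # 6 defect classes is an assumption
        super().__init__()
        self.S, self.B, self.C = S, B, C
        self.features = nn.Sequential(             # abridged stand-in for 24 conv layers
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.AdaptiveAvgPool2d((S, S)),
        )
        self.head = nn.Sequential(                  # the paper's 2 fully connected layers
            nn.Flatten(),
            nn.Linear(64 * S * S, 512), nn.LeakyReLU(0.1),
            nn.Linear(512, S * S * (B * 5 + C)),
        )

    def forward(self, x):
        # One (B*5 + C)-vector per grid cell: box coordinates, confidence, class scores.
        return self.head(self.features(x)).view(-1, self.S, self.S, self.B * 5 + self.C)

model = TinyYoloPCB()
pred = model(torch.randn(32, 3, 448, 448))           # batch size 32, as reported in the paper
print(pred.shape)                                    # torch.Size([32, 7, 7, 16])
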
(This article belongs to the Special Issue Convolutional Neural Networks and Vision Applications)

Open Access Article
Detection and Localization of Overlapped Fruits Application in an Apple Harvesting Robot
Electronics 2020, 9(6), 1023; https://doi.org/10.3390/electronics9061023 - 21 Jun 2020
Abstract
For yield measurement of an apple orchard or the mechanical harvesting of apples, the target apple fruit must be identified accurately. However, in a natural scene, affected by the apple's growth posture and the camera position, apple images take many forms: overlapped apples, mutual shadows, occlusion by leaves or stems, and so on. Accurately locating overlapped apples is a challenge: they increase positioning time and reduce recognition efficiency, which in turn affects the harvesting efficiency of apple-harvesting robots and the accuracy of orchard yield measurement. In response to this problem, an overlapped-circle positioning method based on local maxima is proposed. First, the apple image is transformed into the Lab color space and segmented by the K-means algorithm. Second, morphological operations such as erosion and dilation are applied to extract the outline of the apples, and image points are divided into central points, edge points, or outer points. Third, a fast algorithm calculates every internal point's minimum distance from the edge, and the centers of the apples are obtained by finding the maxima among these distances. Last, each radius is acquired as the minimum distance between the center and the edge, completing the positioning. Experimental results showed that this method can locate overlapped apples accurately and quickly when the apple contour is complete, demonstrating its practicability.
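
The positioning pipeline outlined in the abstract can be sketched roughly as follows, assuming OpenCV and NumPy. The input file name, K-means cluster count, morphological kernel size, and peak-detection threshold are illustrative assumptions; this is not the authors' exact implementation.

# Hedged sketch of the overlapped-apple positioning pipeline described above.
# File name, cluster count, kernel sizes, and thresholds are illustrative.
import cv2
import numpy as np

img = cv2.imread("apples.png")                                   # hypothetical input image
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)

# 1. K-means segmentation in Lab space (K=2, apple vs. background, is an assumption).
pixels = lab.reshape(-1, 3).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(pixels, 2, None, criteria, 3, cv2.KMEANS_RANDOM_CENTERS)
apple_cluster = np.argmax(centers[:, 1])                          # cluster with higher a* (redder)
mask = (labels.reshape(lab.shape[:2]) == apple_cluster).astype(np.uint8) * 255

# 2. Morphological cleanup (erosion then dilation) to extract the apple outline.
kernel = np.ones((5, 5), np.uint8)
mask = cv2.dilate(cv2.erode(mask, kernel), kernel)

# 3. Distance transform: every internal point's minimum distance to the edge.
dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)

# 4. Centers = local maxima of the distance map; radius = that minimum edge distance.
peaks = (dist == cv2.dilate(dist, np.ones((15, 15), np.uint8))) & (dist > 10)
for y, x in zip(*np.nonzero(peaks)):
    center, radius = (int(x), int(y)), float(dist[y, x])
    cv2.circle(img, center, int(radius), (0, 255, 0), 2)

cv2.imwrite("apples_located.png", img)
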
(This article belongs to the Special Issue Convolutional Neural Networks and Vision Applications)

Open Access Feature Paper Article
SR-SYBA: A Scale and Rotation Invariant Synthetic Basis Feature Descriptor with Low Memory Usage
Electronics 2020, 9(5), 810; https://doi.org/10.3390/electronics9050810 - 15 May 2020
Abstract
Feature description plays an important role in image matching and is widely used in a variety of computer vision applications. As an efficient synthetic basis feature descriptor, SYnthetic BAsis (SYBA) requires low computational complexity and provides accurate matching results. However, the number of matched feature points generated by SYBA suffers under large image scaling and rotation variations. In this paper, we improve SYBA's scale and rotation invariance by adding an efficient pre-processing operation. The proposed algorithm, SR-SYBA, represents the scale of a feature region by the location of the maximum gradient response along the radial direction in the log-polar coordinate system. Based on this scale representation, it normalizes all feature regions to the same reference scale to provide scale invariance. The orientation of a feature region is represented as the orientation of the vector from the center of the feature region to its intensity centroid. Based on this orientation representation, all feature regions are rotated to the same reference orientation to provide rotation invariance. The original SYBA descriptor is then applied to the scale- and orientation-normalized feature regions for description and matching. Experimental results show that SR-SYBA greatly improves SYBA for image matching applications with scaling and rotation variations. SR-SYBA obtains comparable or better matching rates than mainstream algorithms while maintaining its advantages of much lower storage and simpler computation. SR-SYBA is also applied to a vision-based measurement application to demonstrate its performance for image matching.
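
A rough sketch of the pre-processing idea, assuming OpenCV and NumPy: orientation is estimated from the intensity centroid of the feature region and the region is rotated to a reference orientation, while a scale cue is taken from the strongest radial gradient response in log-polar coordinates. Patch size, resampling choices, and file names are illustrative assumptions, not the paper's exact procedure.

# Hedged sketch of orientation and scale normalization for a feature region.
import cv2
import numpy as np

def normalize_orientation(patch):
    """Rotate a square grayscale patch so its intensity-centroid vector points to the reference direction."""
    m = cv2.moments(patch.astype(np.float32))
    cx, cy = patch.shape[1] / 2.0, patch.shape[0] / 2.0
    dx, dy = m["m10"] / m["m00"] - cx, m["m01"] / m["m00"] - cy   # center -> intensity centroid
    angle = np.degrees(np.arctan2(dy, dx))
    R = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    return cv2.warpAffine(patch, R, patch.shape[::-1])

def radial_scale_estimate(patch):
    """Locate the maximum radial gradient response in log-polar coordinates (scale cue)."""
    lp = cv2.warpPolar(patch.astype(np.float32), (64, 64),
                       (patch.shape[1] / 2.0, patch.shape[0] / 2.0),
                       patch.shape[0] / 2.0, cv2.WARP_POLAR_LOG)
    grad = cv2.Sobel(lp, cv2.CV_32F, 1, 0, ksize=3)               # gradient along the radial axis
    return int(np.argmax(np.abs(grad).mean(axis=0)))              # radius index of strongest response

patch = cv2.imread("feature_patch.png", cv2.IMREAD_GRAYSCALE)     # hypothetical feature region
upright = normalize_orientation(patch)
print("radial scale index:", radial_scale_estimate(patch))
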
(This article belongs to the Special Issue Convolutional Neural Networks and Vision Applications)

Open Access Article
Smart Camera for Quality Inspection and Grading of Food Products
Electronics 2020, 9(3), 505; https://doi.org/10.3390/electronics9030505 - 19 Mar 2020
Abstract
Due to the increasing consumption of food products and demand for food quality and safety, most food processing facilities in the United States use machines to automate processes such as cleaning, inspection and grading, packing, storing, and shipping. Machine vision technology has been a proven solution for inspection and grading of food products since the late 1980s. The remaining challenges, especially for small to midsize facilities, include system and operating costs, the demand for highly skilled workers for complicated configuration and operation and, in some cases, unsatisfactory results. This paper focuses on the development of an embedded solution with learning capability to alleviate these challenges. Three simple application cases are included to demonstrate the operation of this unique solution. Two datasets of more challenging cases were created to analyze and demonstrate the performance of our visual inspection algorithm. One dataset includes infrared images of Medjool dates at four levels of skin delamination for surface quality grading. The other consists of grayscale images of oysters of varying shape for shape quality evaluation. Our algorithm achieved a grading accuracy of 95.0% on the date dataset and 98.6% on the oyster dataset, both easily surpassing manual grading, which constantly faces the challenges of human fatigue and other distractions. Details of the design and functions of our smart camera and our simple visual inspection algorithm are discussed in this paper.
(This article belongs to the Special Issue Convolutional Neural Networks and Vision Applications)

Open Access Feature Paper Article
Optimization and Implementation of Synthetic Basis Feature Descriptor on FPGA
Electronics 2020, 9(3), 391; https://doi.org/10.3390/electronics9030391 - 27 Feb 2020
Abstract
Feature detection, description, and matching are crucial steps in many computer vision algorithms. These steps rely on feature descriptors to match image features across sets of images. Previous work has shown that our SYnthetic BAsis (SYBA) feature descriptor can offer superior performance to other binary descriptors. This paper focuses on various optimizations of SYBA and the hardware implementation of the newer, optimized version. The hardware implementation on a field-programmable gate array (FPGA) is a high-throughput, low-latency solution, which is critical for applications such as high-speed object detection and tracking, stereo vision, visual odometry, structure from motion, and optical flow. We compared our solution to other hardware designs of binary descriptors and demonstrated that our hardware implementation of SYBA offers superior image feature matching performance while using fewer resources than most binary feature descriptor implementations.
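
For context, binary descriptors such as SYBA are compared by Hamming distance during feature matching. The sketch below shows only that generic matching stage in NumPy; it is not the paper's FPGA pipeline, and the descriptor length and ratio-test threshold are illustrative assumptions.

# Hedged sketch: brute-force Hamming matching of packed binary descriptors.
import numpy as np

def hamming_match(desc_a, desc_b, ratio=0.8):
    """desc_a (N, L) and desc_b (M, L): packed uint8 binary descriptors."""
    xor = desc_a[:, None, :] ^ desc_b[None, :, :]            # XOR every pair of descriptors
    dist = np.unpackbits(xor, axis=2).sum(axis=2)             # popcount -> (N, M) Hamming distances
    matches = []
    for i, row in enumerate(dist):
        j1, j2 = np.argsort(row)[:2]                          # best and second-best candidates
        if row[j1] < ratio * row[j2]:                         # Lowe-style ratio test
            matches.append((i, j1, int(row[j1])))
    return matches

rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(100, 32), dtype=np.uint8)       # 100 descriptors, 256 bits each
b = rng.integers(0, 256, size=(120, 32), dtype=np.uint8)
print(len(hamming_match(a, b)), "tentative matches")
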
(This article belongs to the Special Issue Convolutional Neural Networks and Vision Applications)