Machine and Computer Vision Methods for Natural Images in Electronics and Interdisciplinary Applications

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (31 March 2023) | Viewed by 14525

Special Issue Editor


Guest Editor
Faculty of Electrical Engineering, West Pomeranian University of Technology, 70-313 Szczecin, Poland
Interests: applied computer science, particularly image processing and analysis; computer vision and machine vision in automation and robotics; image quality assessment; video and signal processing applications in intelligent transportation systems

Special Issue Information

Dear Colleagues, 

Machine vision and computer vision are very rapidly developing areas of research that integrate interdisciplinary knowledge, making it possible to establish new scientific teams oriented toward new applications in many fields of science and technology. From the continuously growing range of Industry 4.0 solutions, through video surveillance and the analysis of images from drones, to novel applications in agriculture or even aquaculture, video analysis has become more and more popular. Due to the growing availability of affordable cameras, “smart” wearable electronics, and IoT solutions, often integrated with cameras and visual sensors, methods of natural image processing are becoming even more important. At the same time, some general-purpose image analysis methods may perform worse on natural images, e.g., in mobile robotics, than on artificial images, leading to the so-called “reality gap”.

Since images acquired by cameras may contain various distortions that are not always the same as those in synthetic images, their presence and severity should also be considered in terms of image quality, as they may influence the results of further analysis. In some embedded systems, as well as in many industrial applications, the “explainability” of algorithms also plays a crucial role, which excludes the potential application of some deep learning solutions.

The aim of this Special Issue on “Machine and Computer Vision Methods for Natural Images in Electronics and Interdisciplinary Applications” is to bring together the research communities interested in computer and machine vision from various departments and universities that focus on electronics, automation, and robotics, as well as computer science. 

Topics of interest for this Special Issue include but are not limited to:

  • Novel applications of computer vision in autonomous vehicles, video surveillance, and intelligent transportation systems;
  • Quality assessment of natural images;
  • Feature extraction and image registration based on novel handcrafted features;
  • Machine vision for video simultaneous localization and mapping (VSLAM) solutions;
  • Image-based navigation of unmanned aerial vehicles (UAVs) and other mobile robots;
  • Binarization and segmentation algorithms of natural images;
  • Fast image analysis methods for embedded solutions, e.g., using the Monte Carlo method;
  • Natural image analysis for Industry 4.0;
  • Exploration of data acquired using various sensors for non-destructive evaluation and diagnostics purposes (e.g., thermovision) using image analysis methods;
  • Applications of natural image analysis in industry and agriculture.

Dr. Krzysztof Okarma
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • natural images
  • image analysis
  • machine vision
  • video analysis
  • industrial cameras
  • image quality
  • visual inspection and diagnostics
  • industrial and robotic vision systems

Published Papers (5 papers)


Research

16 pages, 1769 KiB  
Article
Analysis of Image Preprocessing and Binarization Methods for OCR-Based Detection and Classification of Electronic Integrated Circuit Labeling
by Kamil Maliński and Krzysztof Okarma
Electronics 2023, 12(11), 2449; https://doi.org/10.3390/electronics12112449 - 29 May 2023
Cited by 3 | Viewed by 2559
Abstract
Automatic recognition and classification of electronic integrated circuits based on optical character recognition combined with the analysis of the shape of their housings are essential to machine vision methods supporting the production of electronic parts, especially small-volume ones in the through-hole technology, characteristic of printed circuit boards. Since such methods utilize binary images, applying appropriate image preprocessing and thresholding methods significantly influences the obtained results, particularly in uncontrolled illumination conditions. Therefore, the examination of various adaptive image binarization algorithms for this purpose is conducted in this paper, together with the experimental verification of the proposed method based on the pixel voting approach.
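The pixel-voting idea mentioned in the abstract can be sketched roughly as follows: several binarization rules each produce a binary map, and the final result takes the per-pixel majority vote. The three thresholding rules used here (global mean, fixed threshold, mid-range) are simple stand-ins chosen for illustration, not the specific adaptive algorithms examined in the paper.

```python
def binarize_global_mean(img):
    """Threshold at the global mean intensity."""
    flat = [p for row in img for p in row]
    t = sum(flat) / len(flat)
    return [[1 if p > t else 0 for p in row] for row in img]

def binarize_fixed(img, t=128):
    """Simple fixed threshold."""
    return [[1 if p > t else 0 for p in row] for row in img]

def binarize_midrange(img):
    """Threshold halfway between the minimum and maximum intensity."""
    flat = [p for row in img for p in row]
    t = (min(flat) + max(flat)) / 2
    return [[1 if p > t else 0 for p in row] for row in img]

def pixel_voting(binary_maps):
    """Per-pixel majority vote over several binary maps."""
    k = len(binary_maps)
    h, w = len(binary_maps[0]), len(binary_maps[0][0])
    return [[1 if sum(m[y][x] for m in binary_maps) * 2 > k else 0
             for x in range(w)] for y in range(h)]

# Toy 3x3 grayscale "image"; all three rules happen to agree here,
# but voting matters when they disagree pixel by pixel.
img = [[10, 200, 30], [250, 40, 220], [20, 210, 240]]
maps = [f(img) for f in (binarize_global_mean, binarize_fixed, binarize_midrange)]
result = pixel_voting(maps)
```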

22 pages, 1797 KiB  
Article
No-Reference Image Quality Assessment Using the Statistics of Global and Local Image Features
by Domonkos Varga
Electronics 2023, 12(7), 1615; https://doi.org/10.3390/electronics12071615 - 29 Mar 2023
Cited by 4 | Viewed by 2416
Abstract
Methods of image quality assessment are widely used for ranking computer vision algorithms or controlling the perceptual quality of video and streaming applications. The ever-increasing number of digital images has encouraged research in this field at an accelerated pace in recent decades. After the appearance of convolutional neural networks, many researchers have paid attention to different deep architectures to devise no-reference image quality assessment algorithms. However, many systems still rely on handcrafted features to ensure interpretability and restrict the consumption of resources. In this study, our efforts are focused on creating a quality-aware feature vector containing information about both global and local image features. Research results in visual physiology indicate that the human visual system first quickly and automatically creates a global perception before gradually focusing on certain local areas to judge the quality of an image. Accordingly, a broad spectrum of statistics extracted from global and local image features is utilized to represent the quality-aware aspects of a digital image from various points of view. The experimental results demonstrate that our method’s predicted quality ratings correlate strongly with the subjective quality ratings. In particular, the introduced algorithm was compared with 16 other well-known advanced methods and outperformed them by a large margin on 9 accepted benchmark datasets in the literature: CLIVE, KonIQ-10k, SPAQ, BIQ2021, TID2008, TID2013, MDID, KADID-10k, and GFIQA-20k, which are considered de facto standards in image quality assessment.
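The combination of global and local statistics described above can be sketched as follows. This is a minimal illustration only: mean and standard deviation stand in for the paper’s much broader spectrum of statistics, and the patch size is an arbitrary choice.

```python
import math

def _stats(values):
    """Mean and standard deviation of a list of intensities."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return [mean, math.sqrt(var)]

def quality_features(img, patch=2):
    """Concatenate global statistics with per-patch (local) statistics."""
    flat = [p for row in img for p in row]
    feats = _stats(flat)                      # global part
    h, w = len(img), len(img[0])
    for y in range(0, h, patch):              # local part: one stat pair per patch
        for x in range(0, w, patch):
            block = [img[yy][xx]
                     for yy in range(y, min(y + patch, h))
                     for xx in range(x, min(x + patch, w))]
            feats.extend(_stats(block))
    return feats
```

In a no-reference setting, such a vector would then be mapped onto subjective quality scores by a learned regressor.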

16 pages, 2674 KiB  
Article
Full-Reference Image Quality Assessment Based on Grünwald–Letnikov Derivative, Image Gradients, and Visual Saliency
by Domonkos Varga
Electronics 2022, 11(4), 559; https://doi.org/10.3390/electronics11040559 - 12 Feb 2022
Cited by 11 | Viewed by 3462
Abstract
The purpose of image quality assessment is to estimate digital images’ perceptual quality coherently with human judgement. Over the years, many structural features have been utilized or proposed to quantify the degradation of an image in the presence of various noise types. The image gradient is an obvious and very popular tool in the literature to quantify these changes in images; however, the gradient characterizes images only locally. On the other hand, results from previous studies indicate that the global contents of a scene are analyzed by the human visual system before the local features. Relying on these features of the human visual system, we propose a full-reference image quality assessment metric that characterizes the global changes of an image by the Grünwald–Letnikov derivatives and the local changes by image gradients. Moreover, visual saliency is also utilized for weighting the changes in the images, emphasizing those areas of the image which are salient to the human visual system. To prove the efficiency of the proposed method, extensive experiments were carried out on publicly available benchmark image quality assessment databases.
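The Grünwald–Letnikov derivative mentioned above can be computed via a standard coefficient recurrence: for order α, the weights w_k = (-1)^k (α choose k) satisfy w_0 = 1 and w_k = w_{k-1}(k - 1 - α)/k, and α = 1 recovers the ordinary backward difference. The 1-D sketch below (unit step, h = 1) is illustrative only; the paper applies the derivative to 2-D image data and its exact formulation may differ.

```python
def gl_weights(alpha, n):
    """First n Grünwald–Letnikov weights (-1)^k * binom(alpha, k)."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

def gl_derivative(signal, alpha):
    """Grünwald–Letnikov fractional difference of a 1-D signal (h = 1)."""
    w = gl_weights(alpha, len(signal))
    return [sum(w[k] * signal[i - k] for k in range(i + 1))
            for i in range(len(signal))]
```

For non-integer α the weights decay slowly, so each output sample depends on the whole signal history, which is what gives the operator its global character compared with the local gradient.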

20 pages, 553 KiB  
Article
No-Reference Video Quality Assessment Based on Benford’s Law and Perceptual Features
by Domonkos Varga
Electronics 2021, 10(22), 2768; https://doi.org/10.3390/electronics10222768 - 12 Nov 2021
Cited by 3 | Viewed by 2396
Abstract
No-reference video quality assessment (NR-VQA) has piqued the scientific community’s interest throughout the last few decades, owing to its importance in human-centered interfaces. The goal of NR-VQA is to predict the perceptual quality of digital videos without any information about their distortion-free counterparts. Over the past few decades, NR-VQA has become a very popular research topic due to the spread of multimedia content and video databases. For successful video quality evaluation, creating an effective video representation from the original video is a crucial step. In this paper, we propose a powerful feature vector for NR-VQA inspired by Benford’s law. Specifically, it is demonstrated that first-digit distributions extracted from different transform domains of the video volume data are quality-aware features and can be effectively mapped onto perceptual quality scores. Extensive experiments were carried out on two large, authentically distorted VQA benchmark databases.

17 pages, 730 KiB  
Article
Analysis of Benford’s Law for No-Reference Quality Assessment of Natural, Screen-Content, and Synthetic Images
by Domonkos Varga
Electronics 2021, 10(19), 2378; https://doi.org/10.3390/electronics10192378 - 29 Sep 2021
Cited by 8 | Viewed by 2495
Abstract
With the tremendous growth and usage of digital images, no-reference image quality assessment is becoming increasingly important. This paper presents an in-depth analysis of Benford’s law inspired first-digit distribution feature vectors for no-reference quality assessment of natural, screen-content, and synthetic images from various viewpoints. Benford’s law makes a prediction for the probability distribution of first digits in natural datasets. It has been applied, among others, for detecting fraudulent income tax returns, detecting scientific fraud, election forensics, and image forensics. In particular, our analysis is based on first-digit distributions in multiple domains (wavelet coefficients, DCT coefficients, singular values, etc.) as feature vectors, and the extracted features are mapped onto image quality scores. Extensive experiments have been carried out on seven large image quality benchmark databases. It has been demonstrated that first-digit distributions are quality-aware features, and it is possible to reach or outperform the state of the art with them.
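The Benford-style feature extraction described in the last two abstracts can be sketched as follows: compute the empirical first-digit histogram of a set of coefficients and compare it with the distribution predicted by Benford’s law, P(d) = log10(1 + 1/d). The papers extract these histograms from wavelet/DCT coefficients and singular values; the input here is an arbitrary list of nonzero values for illustration.

```python
import math
from collections import Counter

def first_digit(x):
    """Leading decimal digit of a nonzero number."""
    x = abs(x)
    return int(x / 10 ** math.floor(math.log10(x)))

def benford_expected():
    """Benford's law prediction P(d) = log10(1 + 1/d) for d = 1..9."""
    return [math.log10(1 + 1 / d) for d in range(1, 10)]

def first_digit_histogram(values):
    """Empirical first-digit distribution: a 9-element feature vector."""
    counts = Counter(first_digit(v) for v in values if v != 0)
    total = sum(counts.values())
    return [counts.get(d, 0) / total for d in range(1, 10)]
```

A quality-aware feature vector can then be formed from such histograms computed in several transform domains, with the deviation from `benford_expected()` carrying the distortion information.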
