Recent Trends in Applications of Artificial Intelligence for Image and Video Analysis

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: closed (31 July 2022) | Viewed by 12209

Special Issue Editors


Guest Editor
Department of Computer Science, Blekinge Institute of Technology, SE-371 41 Karlskrona, Sweden
Interests: image processing; computer vision; evolutionary algorithms; artificial intelligence; handwritten document analysis; deep learning

Guest Editor
1. School of Computer Science and Applied Mathematics, University of the Witwatersrand, Johannesburg 2000, South Africa
2. School of Information Science and Technology, Southwest Jiaotong University, Chengdu, China
Interests: signal/image/video processing; visual computing; machine learning; cognitive computing; remote sensing data modelling and processing

Guest Editor
CNRS LIMOS, University of Clermont Auvergne, Clermont-Ferrand, France
Interests: machine learning; data science; shape analysis; regression on manifolds; optimization

Guest Editor
Department of Mechatronics, KTO Karatay University, Konya, Turkey
Interests: image and video processing; machine learning; deep learning; medical image analyses; remote sensing; evolutionary algorithms

Special Issue Information

Dear Colleagues,

We are inviting submissions to the Special Issue “Recent Trends in Applications of Artificial Intelligence for Image and Video Analysis.”

The amount of image and video data being created and recorded has grown rapidly, and extracting relevant insights about significant events from these data is an important topic of interest in the research community. To reduce human effort and the burden on human intelligence, computational intelligence systems play a crucial role in solving complex real-world problems. Such systems employ one or more Artificial Intelligence techniques, such as neural networks, support vector machines, deep learning, and evolutionary algorithms; it may also be necessary to employ a hybrid system that combines several techniques to solve a problem. This Special Issue provides an opportunity for researchers to address broad challenges in both the theoretical and application aspects of Artificial Intelligence in image and video processing.

The objective of this Special Issue is to explore the theory and application of Artificial Intelligence models for image and video analysis. We invite researchers to contribute original research and review articles that will motivate continuing efforts to apply Artificial Intelligence frameworks to image and video processing problems. The topics of this Special Issue on “Recent Trends in Applications of Artificial Intelligence for Image and Video Analysis” explicitly include (but are not limited to) the following:

Big Multimedia Dataset and Its Applications;

Image/Video Analysis;

Artificial Intelligence;

Object Detection and Tracking;

Pattern Recognition;

Anomaly Detection in Images and Videos;

Image Segmentation;

Expert Systems;

Deep Learning;

Transfer Learning;

Explainable Artificial Intelligence;

Evolutionary Computation;

Hybrid Artificial Intelligence models.

Dr. Hüseyin Kusetogullari
Prof. Dr. Turgay Celik
Prof. Dr. Chafik Samir
Dr. Amir Yavariabdi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (5 papers)


Research

10 pages, 4698 KiB  
Article
Static Video Compression’s Influence on Neural Network Performance
by Vishnu Sai Sankeerth Gowrisetty and Anil Fernando
Electronics 2023, 12(1), 8; https://doi.org/10.3390/electronics12010008 - 20 Dec 2022
Cited by 1 | Viewed by 1284
Abstract
Action recognition in smart security relies heavily on deep learning and artificial intelligence to predict human actions. Drawing appropriate conclusions from these hypotheses requires a large amount of information. The data in question are often a video feed, and there is a direct relationship between increased data volume and more precise decision-making. We seek to determine how far a static video can be compressed before the neural network loses its capacity to predict the action in the video. To find this, videos are compressed by lowering the bitrate using FFMPEG. In parallel, a convolutional neural network model is trained to recognise action in the videos and is tested on the compressed videos until the neural network fails to predict the action observed in them. The results reveal that bitrate compression has no linear relationship with neural network performance.
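
As a rough illustration of the experimental setup described in this abstract, the sketch below re-encodes a clip at progressively lower bitrates with FFmpeg and checks whether a trained classifier still predicts the correct action. The `compress` helper, the `model.predict` interface, and the bitrate ladder are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch (not the authors' code): compress a clip to a target
# bitrate with FFmpeg, then score it with a pre-trained action classifier.
import subprocess

def compress(src: str, dst: str, bitrate_kbps: int) -> None:
    """Re-encode `src` at a fixed video bitrate using FFmpeg (assumed to be on PATH)."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", src,
         "-b:v", f"{bitrate_kbps}k",       # target video bitrate
         "-maxrate", f"{bitrate_kbps}k",   # cap bitrate peaks at the same rate
         "-bufsize", f"{2 * bitrate_kbps}k",
         dst],
        check=True,
    )

def accuracy_vs_bitrate(src, model, true_label, bitrates=(2000, 1000, 500, 250, 100)):
    """Return (bitrate, correct?) pairs until the classifier first mispredicts."""
    results = []
    for b in bitrates:
        out = f"compressed_{b}k.mp4"
        compress(src, out, b)
        pred = model.predict(out)  # `model` is a stand-in for any trained action classifier
        results.append((b, pred == true_label))
        if pred != true_label:
            break                  # compression has destroyed the action cue
    return results
```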

16 pages, 10080 KiB  
Article
Meta Classification Model of Surface Appearance for Small Dataset Using Parallel Processing
by Roie Kazoom, Raz Birman and Ofer Hadar
Electronics 2022, 11(21), 3426; https://doi.org/10.3390/electronics11213426 - 22 Oct 2022
Viewed by 1204
Abstract
Machine learning algorithms have become a very essential tool in the fields of math and engineering, as well as for industrial purposes (fabric, medicine, sport, etc.). This research leverages classical machine learning algorithms for innovative accurate and efficient fabric protrusion detection. We present an approach for improving model training with a small dataset. We use a few classic statistics machine learning algorithms (decision trees, logistic regression, etc.) and a fully connected neural network (NN) model. We also present an approach to optimize a model accuracy rate and execution time for finding the best accuracy using parallel processing with Dask (Python). Full article
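
The abstract mentions evaluating several classical models in parallel with Dask. A minimal sketch of that pattern is given below; the candidate models, synthetic data, and scoring metric are assumptions for illustration only.

```python
# Illustrative sketch: fit several classical classifiers as independent Dask
# tasks and keep the most accurate one. Not the authors' pipeline.
from dask import delayed, compute
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

@delayed
def fit_and_score(model):
    """Fit one candidate model and return (name, held-out accuracy)."""
    model.fit(X_tr, y_tr)
    return type(model).__name__, model.score(X_te, y_te)

candidates = [
    DecisionTreeClassifier(max_depth=5),
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(n_estimators=100),
]

# Dask schedules the fits as independent tasks that can run in parallel.
scores = compute(*[fit_and_score(m) for m in candidates])
best = max(scores, key=lambda s: s[1])
print("best model:", best)
```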

16 pages, 11996 KiB  
Article
Deep Learning Spatial-Spectral Classification of Remote Sensing Images by Applying Morphology-Based Differential Extinction Profile (DEP)
by Nafiseh Kakhani, Mehdi Mokhtarzade and Mohammad Javad Valadan Zoej
Electronics 2021, 10(23), 2893; https://doi.org/10.3390/electronics10232893 - 23 Nov 2021
Cited by 2 | Viewed by 1719
Abstract
As remote sensing technology has improved in recent years, the spatial resolution of satellite images has become finer. This enables us to precisely analyze small, complex objects in a scene through remote sensing images. Thus, the need to develop new, efficient algorithms such as spatial-spectral classification methods is growing. One of the most successful approaches is based on the extinction profile (EP), which can extract contextual information from remote sensing data. Moreover, deep learning classifiers have drawn attention in the remote sensing community in the past few years. Recent progress has shown the effectiveness of deep learning at solving different problems, particularly segmentation tasks. This paper proposes a novel approach based on a new concept, the differential extinction profile (DEP). DEP makes it possible to build an input feature vector with both spectral and spatial information. The input vector is then fed into a proposed straightforward deep-learning-based classifier to produce a thematic map. The approach is evaluated on two urban datasets from the Pleiades and WorldView-2 satellites. To demonstrate the capabilities of the suggested approach, we compare the final results to those of other classification strategies with different input vectors and various common classifiers, such as support vector machines (SVM) and random forests (RF). The proposed approach yields significant improvements in three criteria: overall accuracy, Kappa coefficient, and total disagreement.
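
To make the general spatial-spectral idea concrete, the sketch below stacks per-pixel spectral values with simple morphology-derived spatial features and classifies the result with a random forest baseline of the kind used for comparison in the paper. The grey-opening feature and window size are assumptions; this is not the paper's DEP computation.

```python
# Minimal spatial-spectral sketch (not the DEP implementation): concatenate
# spectral bands with a morphology-flavoured spatial channel per band, then
# classify each pixel with a baseline classifier such as random forests.
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def spatial_spectral_features(image: np.ndarray) -> np.ndarray:
    """image: (H, W, B) multispectral cube -> (H*W, 2B) per-pixel feature matrix."""
    h, w, b = image.shape
    spectral = image.reshape(-1, b)
    spatial = np.stack(
        [ndimage.grey_opening(image[..., i], size=(5, 5)).ravel() for i in range(b)],
        axis=1,
    )
    return np.hstack([spectral, spatial])

# Assumed usage with a ground-truth label map `labels` of shape (H, W), 0 = unlabeled:
# feats = spatial_spectral_features(img)
# mask = labels.ravel() > 0
# clf = RandomForestClassifier(n_estimators=200).fit(feats[mask], labels.ravel()[mask])
# thematic_map = clf.predict(feats).reshape(labels.shape)
```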

15 pages, 4688 KiB  
Article
The Enlightening Role of Explainable Artificial Intelligence in Chronic Wound Classification
by Salih Sarp, Murat Kuzlu, Emmanuel Wilson, Umit Cali and Ozgur Guler
Electronics 2021, 10(12), 1406; https://doi.org/10.3390/electronics10121406 - 11 Jun 2021
Cited by 31 | Viewed by 3458
Abstract
Artificial Intelligence (AI) has been one of the fastest-emerging research and industrial application fields, especially in the healthcare domain, but over the past decades it has operated as a black-box model with limited understanding of its inner workings. AI algorithms are, in large part, built on weights calculated through large matrix multiplications, and these computationally intensive processes are typically hard to interpret and debug. Explainable Artificial Intelligence (XAI) aims to address black-box and hard-to-debug approaches through the use of various techniques and tools. In this study, XAI techniques are applied to chronic wound classification. The proposed model classifies chronic wounds through the use of transfer learning and fully connected layers. Classified chronic wound images serve as input to the XAI model for explanation. Interpretable results can offer clinicians new perspectives during the diagnostic phase. The proposed method successfully provides chronic wound classification and its associated explanation, extracting additional knowledge that can also be interpreted by non-data-science experts, such as medical scientists and physicians. This hybrid approach is shown to aid in the interpretation and understanding of AI decision-making processes.
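
As a generic illustration of the transfer-learning classifier described above, the Keras sketch below freezes a pretrained backbone and adds fully connected layers on top; an XAI attribution method would then be applied to a model like this. The choice of MobileNetV2, the layer sizes, and the number of wound classes are assumptions, not the paper's exact architecture.

```python
# Generic transfer-learning sketch (assumed architecture, not the authors' model).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # assumed number of chronic wound categories

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet"
)
base.trainable = False  # freeze the pretrained feature extractor

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),   # fully connected head
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```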

19 pages, 68058 KiB  
Article
FastUAV-NET: A Multi-UAV Detection Algorithm for Embedded Platforms
by Amir Yavariabdi, Huseyin Kusetogullari, Turgay Celik and Hasan Cicek
Electronics 2021, 10(6), 724; https://doi.org/10.3390/electronics10060724 - 19 Mar 2021
Cited by 25 | Viewed by 3168
Abstract
In this paper, a real-time deep learning-based framework for detecting and tracking Unmanned Aerial Vehicles (UAVs) in video streams captured by a fixed-wing UAV is proposed. The proposed framework consists of two steps, namely intra-frame multi-UAV detection and inter-frame multi-UAV tracking. In the detection step, a new multi-scale UAV detection Convolutional Neural Network (CNN) architecture, based on a shallow version of You Only Look Once version 3 (YOLOv3-tiny) widened by Inception blocks, is designed to extract local and global features from input video streams. The widened multi-UAV detection network architecture, termed FastUAV-NET, aims to improve UAV detection accuracy while preserving the computing time of one-step deep detection algorithms in the context of UAV-to-UAV tracking. The FastUAV-NET architecture uses five Inception units and adopts a feature pyramid network to detect UAVs. To obtain a high frame rate, the proposed method is applied to every nth frame, and the detected UAVs are then tracked in the intermediate frames using the scalable Kernel Correlation Filter algorithm. The results on the generated UAV-UAV dataset show that the proposed framework obtains an average precision of 0.7916 at 29 FPS on a Jetson TX2. The results imply that widening the CNN is much more effective than increasing its depth and leads to a good trade-off between accurate detection and real-time performance. The FastUAV-NET model will be publicly available to the research community to further advance multi-UAV detection algorithms.
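
The detect-every-nth-frame / track-in-between pattern described above can be sketched as follows, using OpenCV's stock KCF tracker (requires the opencv-contrib-python build) as a stand-in for the scalable KCF used in the paper. The `detect_uavs` callback and the value of N are assumptions.

```python
# Sketch of detection on every N-th frame with correlation-filter tracking in
# between; not the FastUAV-NET code itself.
import cv2

N = 5  # run the (expensive) CNN detector only on every N-th frame

def run(video_path, detect_uavs):
    """detect_uavs(frame) -> list of (x, y, w, h) boxes from any trained detector."""
    cap = cv2.VideoCapture(video_path)
    trackers = []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % N == 0:
            # Re-detect and re-initialise one tracker per detected UAV.
            trackers = []
            for box in detect_uavs(frame):
                t = cv2.TrackerKCF_create()
                t.init(frame, box)
                trackers.append(t)
        else:
            # Intermediate frames: propagate the boxes with the correlation filters.
            for t in trackers:
                ok_t, box = t.update(frame)
        idx += 1
    cap.release()
```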
