
Machine and Deep Learning in Sensing and Imaging: Emerging Trends, Challenges and Opportunities

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (30 December 2022) | Viewed by 63329
Please contact the Guest Editor or the Section Managing Editor at ([email protected]) for any queries.

Special Issue Editors

UMR 1253, iBrain, Université de Tours, INSERM, 37000 Tours, France
Interests: signal and image processing; machine/deep learning; medical imaging
Electrical Engineering Department, University of 20 August 1955, BP 26, El Hadaiek Road, Skikda 21000, Algeria
Interests: image processing; pattern recognition; deep learning; biometrics

Special Issue Information

Dear Colleagues,

Artificial intelligence is everywhere in sensing and imaging, from acquisition to data processing, and offers hope for meeting a growing demand for automatic data analysis and interpretation.

The fields concerned are varied, ranging from medical diagnosis to facial recognition, and include smart sensors and information and communication systems connected via the Internet.

A wide range of machine learning methods have been applied over the past 20 years to analyze and interpret data of various kinds from sensors embedded in different tools and devices, or from portable sensor devices. Big data, as well as recent methods based on deep learning, have allowed the emergence of high accuracy algorithms to meet an ever-growing need for processing and interpretation.

This Special Issue encourages authors, from academia and industry, to submit new research results regarding methods and applications in this field. The Special Issue topics include, but are not limited to:

  • Medical imaging
  • Biomedical engineering
  • Brain imaging and therapy
  • Data fusion techniques
  • Information and communication systems
  • Smart sensors and IoT
  • Applications

Prof. Dr. Abdeldjalil Ouahabi
Dr. Amir Benzaoui
Dr. Sébastien Jacques
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Machine/deep learning
  • Signal and image processing
  • Medical imaging
  • Biometrics
  • Communication systems and networks
  • Smart sensors and IoT
  • Compression
  • Semantic indexing
  • Steganography

Published Papers (17 papers)


Research

Jump to: Review

15 pages, 31080 KiB  
Article
Exploration of Semantic Label Decomposition and Dataset Size in Semantic Indoor Scenes Synthesis via Optimized Residual Generative Adversarial Networks
by Hatem Ibrahem, Ahmed Salem and Hyun-Soo Kang
Sensors 2022, 22(21), 8306; https://doi.org/10.3390/s22218306 - 29 Oct 2022
Cited by 1 | Viewed by 1742
Abstract
In this paper, we revisit the paired image-to-image translation using the conditional generative adversarial network, the so-called “Pix2Pix”, and propose efficient optimization techniques for the architecture and the training method to maximize the architecture’s performance to boost the realism of the generated images. We propose a generative adversarial network-based technique to create new artificial indoor scenes using a user-defined semantic segmentation map as an input to define the location, shape, and category of each object in the scene, exactly similar to Pix2Pix. We train different residual connections-based architectures of the generator and discriminator on the NYU depth-v2 dataset and a selected indoor subset from the ADE20K dataset, showing that the proposed models have fewer parameters, less computational complexity, and can generate better quality images than the state of the art methods following the same technique to generate realistic indoor images. We also prove that using extra specific labels and more training samples increases the quality of the generated images; however, the proposed residual connections-based models can learn better from small datasets (i.e., NYU depth-v2) and can improve the realism of the generated images in training on bigger datasets (i.e., ADE20K indoor subset) in comparison to Pix2Pix. The proposed method achieves an LPIPS value of 0.505 and an FID value of 81.067, generating better quality images than that produced by Pix2Pix and other recent paired Image-to-image translation methods and outperforming them in terms of LPIPS and FID. Full article
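The residual connections central to this paper's generator and discriminator designs can be illustrated with a minimal sketch (NumPy, with hypothetical toy weights; not the authors' actual architecture):

```python
import numpy as np

def residual_block(x, w1, w2):
    """Minimal residual block: the layers learn a correction that is
    added back onto the input via a skip connection, which is what
    lets deeper generators train stably."""
    h = np.maximum(0.0, x @ w1)  # ReLU on the first linear map
    return x + h @ w2            # skip connection adds the input back

# With zero weights the block reduces to the identity (skip path only).
x = np.array([[1.0, 2.0, 3.0]])
w = np.zeros((3, 3))
out = residual_block(x, w, w)
```

Because the block only has to learn a residual correction, it can start near the identity, which is one reason residual models can learn well from small datasets such as NYU depth-v2.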

17 pages, 2852 KiB  
Article
Evaluation of Effective Class-Balancing Techniques for CNN-Based Assessment of Aphanomyces Root Rot Resistance in Pea (Pisum sativum L.)
by L. G. Divyanth, Afef Marzougui, Maria Jose González-Bernal, Rebecca J. McGee, Diego Rubiales and Sindhuja Sankaran
Sensors 2022, 22(19), 7237; https://doi.org/10.3390/s22197237 - 24 Sep 2022
Cited by 11 | Viewed by 1799
Abstract
Aphanomyces root rot (ARR) is a devastating disease that affects the production of pea. The plants are prone to infection at any growth stage, and there are no chemical or cultural controls. Thus, the development of resistant pea cultivars is important. Phenomics technologies to support the selection of resistant cultivars through phenotyping can be valuable. One such approach is to couple imaging technologies with deep learning algorithms that are considered efficient for the assessment of disease resistance across a large number of plant genotypes. In this study, the resistance to ARR was evaluated through a CNN-based assessment of pea root images. The proposed model, DeepARRNet, was designed to classify the pea root images into three classes based on ARR severity scores, namely, resistant, intermediate, and susceptible classes. The dataset consisted of 1581 pea root images with a skewed distribution. Hence, three effective data-balancing techniques were identified to solve the prevalent problem of unbalanced datasets. Random oversampling with image transformations, generative adversarial network (GAN)-based image synthesis, and loss function with class-weighted ratio were implemented during the training process. The result indicated that the classification F1-score was 0.92 ± 0.03 when GAN-synthesized images were added, 0.91 ± 0.04 for random resampling, and 0.88 ± 0.05 when class-weighted loss function was implemented, which was higher than when an unbalanced dataset without these techniques were used (0.83 ± 0.03). The systematic approaches evaluated in this study can be applied to other image-based phenotyping datasets, which can aid the development of deep-learning models with improved performance. Full article
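Of the three balancing strategies compared, the class-weighted loss is the simplest to sketch. A minimal NumPy version follows; the inverse-frequency weighting is an illustrative assumption, not necessarily the paper's exact class-weight ratio:

```python
import numpy as np

def class_weighted_ce(probs, labels, class_counts):
    """Cross-entropy in which each sample is weighted by the inverse
    frequency of its class, so errors on rare classes cost more."""
    counts = np.asarray(class_counts, dtype=float)
    weights = counts.sum() / (len(counts) * counts)  # inverse-frequency
    w = weights[labels]                              # per-sample weight
    logp = np.log(probs[np.arange(len(labels)), labels])
    return float(-(w * logp).mean())

# The same predictions cost more when the misclassified class is rare.
probs = np.array([[0.9, 0.1], [0.5, 0.5]])
labels = np.array([0, 1])
loss_rare  = class_weighted_ce(probs, labels, [100, 10])  # class 1 rare
loss_major = class_weighted_ce(probs, labels, [10, 100])  # class 1 common
```

The same reweighting effect can instead be achieved in data space, as the paper does with random oversampling or GAN-synthesized images for the minority classes.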

20 pages, 6611 KiB  
Article
Generative Adversarial Networks and Data Clustering for Likable Drone Design
by Lee J. Yamin and Jessica R. Cauchard
Sensors 2022, 22(17), 6433; https://doi.org/10.3390/s22176433 - 26 Aug 2022
Cited by 3 | Viewed by 2037
Abstract
Novel applications for human-drone interaction demand new design approaches, such as social drones that need to be perceived as likable by users. However, given the complexity of the likability perception process, gathering such design information from the interaction context is intricate. This work leverages deep learning-based techniques to generate novel likable drone images. We collected a drone image database (N=360) applicable for design research and assessed the drone’s likability ratings in a user study (N=379). We employed two clustering methodologies: 1. likability-based, which resulted in non-likable, neutral, and likable drone clusters; and 2. feature-based (VGG, PCA), which resulted in drone clusters characterized by visual similarity; both clustered using the K-means algorithm. A characterization process identified three drone features: colorfulness, animal-like representation, and emotional expressions through facial features, which affect drone likability, going beyond prior research. We used the likable drone cluster (N=122) for generating new images using StyleGAN2-ADA and addressed the dataset size limitation using specific configurations and transfer learning. Our results were mitigated due to the dataset size; thus, we illustrate the feasibility of our approach by generating new images using the original database. Our findings demonstrate the effectiveness of Generative Adversarial Networks (GANs) exploitation for drone design, and to the best of our knowledge, this work is the first to suggest GANs for such application. Full article

19 pages, 9832 KiB  
Article
SMaTE: A Segment-Level Feature Mixing and Temporal Encoding Framework for Facial Expression Recognition
by Nayeon Kim, Sukhee Cho and Byungjun Bae
Sensors 2022, 22(15), 5753; https://doi.org/10.3390/s22155753 - 01 Aug 2022
Cited by 1 | Viewed by 1922
Abstract
Despite advanced machine learning methods, the implementation of emotion recognition systems based on real-world video content remains challenging. Videos may contain data such as images, audio, and text. However, the application of multimodal models using two or more types of data to real-world video media (CCTV, illegally filmed content, etc.) lacking sound or subtitles is difficult. Although facial expressions in image sequences can be utilized in emotion recognition, the diverse identities of individuals in real-world content limits computational models of relationships between facial expressions. This study proposed a transformation model which employed a video vision transformer to focus on facial expression sequences in videos. It effectively understood and extracted facial expression information from the identities of individuals, instead of fusing multimodal models. The design entailed capture of higher-quality facial expression information through mixed-token embedding facial expression sequences augmented via various methods into a single data representation, and comprised two modules: spatial and temporal encoders. Further, temporal position embedding, focusing on relationships between video frames, was proposed and subsequently applied to the temporal encoder module. The performance of the proposed algorithm was compared with that of conventional methods on two emotion recognition datasets of video content, with results demonstrating its superiority. Full article
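The temporal position embedding proposed for the temporal encoder can be sketched with the standard transformer-style sinusoidal encoding over the frame index (a generic illustration; the paper's exact scheme may differ):

```python
import numpy as np

def temporal_position_embedding(num_frames, dim):
    """Sinusoidal position embedding over the frame index, giving the
    temporal encoder a notion of frame order: even channels carry
    sin terms, odd channels the matching cos terms."""
    pos = np.arange(num_frames)[:, None]              # (T, 1)
    i = np.arange(dim // 2)[None, :]                  # (1, D/2)
    angles = pos / (10000.0 ** (2 * i / dim))
    emb = np.zeros((num_frames, dim))
    emb[:, 0::2] = np.sin(angles)
    emb[:, 1::2] = np.cos(angles)
    return emb

E = temporal_position_embedding(16, 8)  # 16 frames, 8-dim embedding
```

Each frame's embedding is added to its token before the temporal encoder, so attention can weight relationships between frames by their relative positions.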

26 pages, 24319 KiB  
Article
Designing an AI-Based Virtual Try-On Web Application
by Davide Marelli, Simone Bianco and Gianluigi Ciocca
Sensors 2022, 22(10), 3832; https://doi.org/10.3390/s22103832 - 18 May 2022
Cited by 6 | Viewed by 8673
Abstract
In the last few years, Augmented Reality, Virtual Reality, and Artificial Intelligence (AI) have been increasingly employed in different application domains. Among them, the retail market presents the opportunity to allow people to check the appearance of accessories, makeup, hairstyle, hair color, and clothes on themselves, exploiting virtual try-on applications. In this paper, we propose an eyewear virtual try-on experience based on a framework that leverages advanced deep learning-based computer vision techniques. The virtual try-on is performed on a 3D face reconstructed from a single input image. In designing our system, we started by studying the underlying architecture, components, and their interactions. Then, we assessed and compared existing face reconstruction approaches. To this end, we performed an extensive analysis and experiments for evaluating their design, complexity, geometry reconstruction errors, and reconstructed texture quality. The experiments allowed us to select the most suitable approach for our proposed try-on framework. Our system considers actual glasses and face sizes to provide a realistic fit estimation using a markerless approach. The user interacts with the system by using a web application optimized for desktop and mobile devices. Finally, we performed a usability study that showed an above-average score of our eyewear virtual try-on application. Full article

32 pages, 1561 KiB  
Article
Deep Unfolding of Iteratively Reweighted ADMM for Wireless RF Sensing
by Udaya S. K. P. Miriya Thanthrige, Peter Jung and Aydin Sezgin
Sensors 2022, 22(8), 3065; https://doi.org/10.3390/s22083065 - 15 Apr 2022
Cited by 2 | Viewed by 2041
Abstract
We address the detection of material defects, which are inside a layered material structure using compressive sensing-based multiple-input and multiple-output (MIMO) wireless radar. Here, strong clutter due to the reflection of the layered structure’s surface often makes the detection of the defects challenging. Thus, sophisticated signal separation methods are required for improved defect detection. In many scenarios, the number of defects that we are interested in is limited, and the signaling response of the layered structure can be modeled as a low-rank structure. Therefore, we propose joint rank and sparsity minimization for defect detection. In particular, we propose a non-convex approach based on the iteratively reweighted nuclear and 1-norm (a double-reweighted approach) to obtain a higher accuracy compared to the conventional nuclear norm and 1-norm minimization. To this end, an iterative algorithm is designed to estimate the low-rank and sparse contributions. Further, we propose deep learning-based parameter tuning of the algorithm (i.e., algorithm unfolding) to improve the accuracy and the speed of convergence of the algorithm. Our numerical results show that the proposed approach outperforms the conventional approaches in terms of mean squared errors of the recovered low-rank and sparse components and the speed of convergence. Full article
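The reweighting idea behind the sparse (l1) part of the objective can be sketched as iteratively reweighted soft-thresholding. This toy scalar version is my own simplification under stated assumptions, not the authors' MIMO radar algorithm; it shows how small entries are suppressed while large ones are left nearly unbiased:

```python
import numpy as np

def reweighted_soft_threshold(x, lam, eps=1e-3, iters=5):
    """Each pass recomputes weights 1/(|s| + eps): small entries get
    large weights and are driven to zero, large entries are barely
    shrunk, so the penalty behaves closer to l0 than a plain l1 norm."""
    x = np.asarray(x, dtype=float)
    s = x.copy()
    for _ in range(iters):
        w = 1.0 / (np.abs(s) + eps)                      # reweighting step
        s = np.sign(x) * np.maximum(np.abs(x) - lam * w, 0.0)
    return s

s = reweighted_soft_threshold([3.0, 0.05], lam=0.1)
```

The same reweighting applied to singular values gives the iteratively reweighted nuclear norm for the low-rank clutter component; algorithm unfolding then learns parameters such as `lam` per iteration instead of hand-tuning them.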

32 pages, 1461 KiB  
Article
Assessing Representation Learning and Clustering Algorithms for Computer-Assisted Image Annotation—Simulating and Benchmarking MorphoCluster
by Simon-Martin Schröder and Rainer Kiko
Sensors 2022, 22(7), 2775; https://doi.org/10.3390/s22072775 - 04 Apr 2022
Viewed by 1999
Abstract
Image annotation is a time-consuming and costly task. Previously, we published MorphoCluster as a novel image annotation tool to address problems of conventional, classifier-based image annotation approaches: their limited efficiency, training set bias and lack of novelty detection. MorphoCluster uses clustering and similarity search to enable efficient, computer-assisted image annotation. In this work, we provide a deeper analysis of this approach. We simulate the actions of a MorphoCluster user to avoid extensive manual annotation runs. This simulation is used to test supervised, unsupervised and transfer representation learning approaches. Furthermore, shrunken k-means and partially labeled k-means, two new clustering algorithms that are tailored specifically for the MorphoCluster approach, are compared to the previously used HDBSCAN*. We find that labeled training data improve the image representations, that unsupervised learning beats transfer learning and that all three clustering algorithms are viable options, depending on whether completeness, efficiency or runtime is the priority. The simulation results support our earlier finding that MorphoCluster is very efficient and precise. Within the simulation, more than five objects per simulated click are being annotated with 95% precision. Full article

17 pages, 6311 KiB  
Article
Deep Learning with Discriminative Margin Loss for Cross-Domain Consumer-to-Shop Clothes Retrieval
by Pendar Alirezazadeh, Fadi Dornaika and Abdelmalik Moujahid
Sensors 2022, 22(7), 2660; https://doi.org/10.3390/s22072660 - 30 Mar 2022
Cited by 6 | Viewed by 2062
Abstract
Consumer-to-shop clothes retrieval refers to the problem of matching photos taken by customers with their counterparts in the shop. Due to some problems, such as a large number of clothing categories, different appearances of clothing items due to different camera angles and shooting conditions, different background environments, and different body postures, the retrieval accuracy of traditional consumer-to-shop models is always low. With advances in convolutional neural networks (CNNs), the accuracy of garment retrieval has been significantly improved. Most approaches addressing this problem use single CNNs in conjunction with a softmax loss function to extract discriminative features. In the fashion domain, negative pairs can have small or large visual differences that make it difficult to minimize intraclass variance and maximize interclass variance with softmax. Margin-based softmax losses such as Additive Margin-Softmax (aka CosFace) improve the discriminative power of the original softmax loss, but since they consider the same margin for the positive and negative pairs, they are not suitable for cross-domain fashion search. In this work, we introduce the cross-domain discriminative margin loss (DML) to deal with the large variability of negative pairs in fashion. DML learns two different margins for positive and negative pairs such that the negative margin is larger than the positive margin, which provides stronger intraclass reduction for negative pairs. The experiments conducted on publicly available fashion datasets DARN and two benchmarks of the DeepFashion dataset—(1) Consumer-to-Shop Clothes Retrieval and (2) InShop Clothes Retrieval—confirm that the proposed loss function not only outperforms the existing loss functions but also achieves the best performance. Full article
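The core of DML, a larger margin for negative pairs than for positive pairs, can be sketched as an asymmetric hinge on pair similarity. The margin values below are illustrative, not the paper's tuned values:

```python
def asymmetric_margin_loss(sim, positive, m_pos=0.2, m_neg=0.5):
    """Positive pairs are penalized when similarity falls below
    1 - m_pos; negative pairs are penalized while similarity stays
    above 1 - m_neg. Because m_neg > m_pos, negatives are pushed
    further apart than positives are pulled together."""
    if positive:
        return max(0.0, (1.0 - m_pos) - sim)
    return max(0.0, sim - (1.0 - m_neg))

pos_loss = asymmetric_margin_loss(0.9, positive=True)   # inside margin
neg_loss = asymmetric_margin_loss(0.6, positive=False)  # still penalized
```

A symmetric margin (as in CosFace) would treat both cases with the same threshold; the asymmetry is what accommodates the large visual variability of negative pairs in fashion search.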

14 pages, 3841 KiB  
Article
Template-Driven Knowledge Distillation for Compact and Accurate Periocular Biometrics Deep-Learning Models
by Fadi Boutros, Naser Damer, Kiran Raja, Florian Kirchbuchner and Arjan Kuijper
Sensors 2022, 22(5), 1921; https://doi.org/10.3390/s22051921 - 01 Mar 2022
Cited by 8 | Viewed by 2052
Abstract
This work addresses the challenge of building an accurate and generalizable periocular recognition model with a small number of learnable parameters. Deeper (larger) models are typically more capable of learning complex information. For this reason, knowledge distillation (kd) was previously proposed to carry this knowledge from a large model (teacher) into a small model (student). Conventional KD optimizes the student output to be similar to the teacher output (commonly classification output). In biometrics, comparison (verification) and storage operations are conducted on biometric templates, extracted from pre-classification layers. In this work, we propose a novel template-driven KD approach that optimizes the distillation process so that the student model learns to produce templates similar to those produced by the teacher model. We demonstrate our approach on intra- and cross-device periocular verification. Our results demonstrate the superiority of our proposed approach over a network trained without KD and networks trained with conventional (vanilla) KD. For example, the targeted small model achieved an equal error rate (EER) value of 22.2% on cross-device verification without KD. The same model achieved an EER of 21.9% with the conventional KD, and only 14.7% EER when using our proposed template-driven KD. Full article
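The template-driven objective can be sketched as an error between l2-normalized embeddings rather than classification outputs. This is a simplification under stated assumptions; the paper's exact distillation loss may differ:

```python
import numpy as np

def template_kd_loss(student_t, teacher_t):
    """Distill at the template level: compare the l2-normalized
    embeddings actually used for verification and storage, rather
    than the pre-classification logits of vanilla KD."""
    s = student_t / np.linalg.norm(student_t)
    t = teacher_t / np.linalg.norm(teacher_t)
    return float(np.mean((s - t) ** 2))

# Templates pointing in the same direction incur (near) zero loss.
loss_same = template_kd_loss(np.array([1.0, 2.0]), np.array([2.0, 4.0]))
loss_diff = template_kd_loss(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```

Matching templates rather than class outputs is what aligns the student with the operation biometric systems actually perform at deployment time, i.e., comparing stored templates.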

16 pages, 2579 KiB  
Article
Recognition of Leaf Disease Using Hybrid Convolutional Neural Network by Applying Feature Reduction
by Prabhjot Kaur, Shilpi Harnal, Rajeev Tiwari, Shuchi Upadhyay, Surbhi Bhatia, Arwa Mashat and Aliaa M. Alabdali
Sensors 2022, 22(2), 575; https://doi.org/10.3390/s22020575 - 12 Jan 2022
Cited by 64 | Viewed by 6164
Abstract
Agriculture is crucial to the economic prosperity and development of India. Plant diseases can have a devastating influence on food safety and can cause considerable losses in agricultural production. Disease identification on the plant is essential for long-term agricultural sustainability. Manually monitoring plant diseases is difficult due to time limitations and the diversity of diseases. In the realm of agricultural inputs, automatic characterization of plant diseases is therefore widely required, and deep learning, based on its performance among image-processing methods, is better suited for solving this task. This work investigates plant diseases in grapevines. Leaf blight, Black rot, stable, and Black measles are the four types of diseases found in grape plants. Several earlier research proposals using machine learning algorithms were created to detect one or two diseases in grape plant leaves; none offers complete detection of all four diseases. The photos are taken from the PlantVillage dataset in order to use transfer learning to retrain the EfficientNet B7 deep architecture. Following the transfer learning, the collected features are down-sampled using a Logistic Regression technique. Finally, the most discriminant traits are identified with the highest constant accuracy of 98.7% using state-of-the-art classifiers after 92 epochs. Based on the simulation findings, an appropriate classifier for this application is also suggested. The proposed technique's effectiveness is confirmed by a fair comparison to existing procedures. Full article

14 pages, 2794 KiB  
Article
Apple Leaf Disease Identification with a Small and Imbalanced Dataset Based on Lightweight Convolutional Networks
by Lili Li, Shujuan Zhang and Bin Wang
Sensors 2022, 22(1), 173; https://doi.org/10.3390/s22010173 - 28 Dec 2021
Cited by 24 | Viewed by 3353
Abstract
The intelligent identification and classification of plant diseases is an important research objective in agriculture. In this study, in order to realize the rapid and accurate identification of apple leaf disease, a new lightweight convolutional neural network RegNet was proposed. A series of comparative experiments had been conducted based on 2141 images of 5 apple leaf diseases (rust, scab, ring rot, panonychus ulmi, and healthy leaves) in the field environment. To assess the effectiveness of the RegNet model, a series of comparison experiments were conducted with state-of-the-art convolutional neural networks (CNN) such as ShuffleNet, EfficientNet-B0, MobileNetV3, and Vision Transformer. The results show that RegNet-Adam with a learning rate of 0.0001 obtained an average accuracy of 99.8% on the validation set and an overall accuracy of 99.23% on the test set, outperforming all other pre-trained models. In other words, the proposed method based on transfer learning established in this research can realize the rapid and accurate identification of apple leaf disease. Full article

13 pages, 1382 KiB  
Article
Action Generative Networks Planning for Deformable Object with Raw Observations
by Ziqi Sheng, Kebing Jin, Zhihao Ma and Hankz-Hankui Zhuo
Sensors 2021, 21(13), 4552; https://doi.org/10.3390/s21134552 - 02 Jul 2021
Viewed by 1948
Abstract
Synthesizing plans for a deformable object to transit from initial observations to goal observations, both of which are represented by high-dimensional data (namely “raw” data), is challenging due to the difficulty of learning abstract state representations of raw data and transition models of continuous states and continuous actions. Even though there have been some approaches making remarkable progress regarding the planning problem, they often neglect actions between observations and are unable to generate action sequences from initial observations to goal observations. In this paper, we propose a novel algorithm framework, namely AGN. We first learn a state-abstractor model to abstract states from raw observations, a state-generator model to generate raw observations from states, a heuristic model to predict actions to be executed in current states, and a transition model to transform current states to next states after executing specific actions. After that, we directly generate plans for a deformable object by performing the four models. We evaluate our approach in continuous domains and show that our approach is effective with comparison to state-of-the-art algorithms. Full article

13 pages, 5582 KiB  
Article
Neural Network for Metal Detection Based on Magnetic Impedance Sensor
by Sungjae Ha, Dongwoo Lee, Hoijun Kim, Soonchul Kwon, EungJo Kim, Junho Yang and Seunghyun Lee
Sensors 2021, 21(13), 4456; https://doi.org/10.3390/s21134456 - 29 Jun 2021
Cited by 2 | Viewed by 3035
Abstract
The efficiency of the metal detection method using deep learning with data obtained from multiple magnetic impedance (MI) sensors was investigated. The MI sensor is a passive sensor that detects metal objects through changes in the magnetic field. However, when detecting a metal object, the change in the magnetic field caused by the metal is small and unstable with noise. Consequently, there is a limit to the detectable distance. To effectively detect and analyze within this distance, a method using deep learning was applied. The detection performances of a convolutional neural network (CNN) and a recurrent neural network (RNN) were compared on the data extracted from a self-impedance sensor. The RNN model showed better performance than the CNN model; in the shallow stage, however, the CNN model was superior. The performance of a deep-learning-based (DLB) metal detection network using multiple MI sensors was then compared and analyzed. The network was built using long short-term memory and CNN layers, and its performance was compared according to the number of layers and the size of the metal sheet. The results are expected to contribute to sensor-based DLB detection technology. Full article

19 pages, 1843 KiB  
Article
Data Clustering Using Moth-Flame Optimization Algorithm
by Tribhuvan Singh, Nitin Saxena, Manju Khurana, Dilbag Singh, Mohamed Abdalla and Hammam Alshazly
Sensors 2021, 21(12), 4086; https://doi.org/10.3390/s21124086 - 14 Jun 2021
Cited by 29 | Viewed by 2982
Abstract
The k-means algorithm is a clustering method that has gained wide acceptance. However, its performance depends strongly on the initial cluster centers, and, due to weak exploration capability, it is easily stuck at local optima. Recently, a new metaheuristic called the Moth Flame Optimizer (MFO) was proposed to handle complex problems. MFO simulates the moths' navigation mechanism, known as transverse orientation, used to navigate in nature. In various research works, the performance of MFO has been found quite satisfactory. This paper suggests a novel heuristic approach based on the MFO to solve data clustering problems. To validate the competitiveness of the proposed approach, various experiments have been conducted using the Shape and UCI benchmark datasets. The proposed approach is compared with five state-of-the-art algorithms over twelve datasets. The mean performance of the proposed algorithm is superior on ten datasets and comparable on the remaining two. The analysis of experimental results confirms the efficacy of the suggested approach. Full article
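The sensitivity to initial centers that motivates the MFO approach is easy to see in plain Lloyd's k-means. This is a generic sketch of the baseline algorithm, not the authors' MFO-based variant:

```python
import numpy as np

def kmeans(X, centers, iters=20):
    """Lloyd's k-means: alternate nearest-center assignment and center
    update. The final clustering depends on `centers`, the initial
    guess -- the weakness that metaheuristics such as MFO try to
    remove by searching the space of initializations globally."""
    C = np.array(centers, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
        labels = d.argmin(axis=1)              # assignment step
        for j in range(len(C)):
            pts = X[labels == j]
            if len(pts):
                C[j] = pts.mean(axis=0)        # update step
    return labels, C

# Two well-separated blobs with centers seeded near each blob.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels, C = kmeans(X, centers=[[0.0, 0.0], [5.0, 5.0]])
```

Seeding both centers inside the same blob would converge to a much worse partition, which is exactly the local-optimum behavior a population-based optimizer is meant to escape.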

20 pages, 6068 KiB  
Article
Denoising and Motion Artifact Removal Using Deformable Kernel Prediction Neural Network for Color-Intensified CMOS
by Zhenghao Han, Li Li, Weiqi Jin, Xia Wang, Gangcheng Jiao, Xuan Liu and Hailin Wang
Sensors 2021, 21(11), 3891; https://doi.org/10.3390/s21113891 - 04 Jun 2021
Cited by 2 | Viewed by 2563
Abstract
Image intensifiers are used internationally as advanced military night-vision devices, and they offer better imaging performance in low-light-level conditions than CMOS/CCD sensors. The intensified CMOS (ICMOS) was developed to meet the demand for digital image intensifiers. To make the ICMOS capable of color imaging in low-light-level conditions, a color imaging ICMOS based on a liquid-crystal tunable filter was developed. Because of its time-division color imaging scheme, motion artifacts may be introduced when a moving target is in the scene. To solve this problem, a deformable kernel prediction neural network (DKPNN) is proposed for joint denoising and motion artifact removal, together with a data generation method that produces images with color-channel motion artifacts to train the DKPNN. The results show that, compared with other denoising methods, the proposed DKPNN performs better on both generated and real noisy data, making it well suited to color ICMOS denoising and motion artifact removal. This work offers a new exploration of low-light-level color imaging schemes. Full article
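To make the kernel-prediction idea concrete, the sketch below shows the reconstruction step such networks share: the network predicts one normalized filter kernel per pixel, and denoising applies that kernel locally. This is a simplification with fixed 3x3 kernels standing in for the learned deformable ones; all names and sizes are illustrative.

```python
import numpy as np

def apply_predicted_kernels(img, kernels):
    """img: (H, W); kernels: (H, W, 3, 3), each normalized to sum to 1."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 3, j:j + 3]
            out[i, j] = np.sum(patch * kernels[i, j])  # per-pixel filtering
    return out

rng = np.random.default_rng(0)
noisy = rng.uniform(0, 1, (8, 8))
k = np.full((8, 8, 3, 3), 1 / 9)  # stand-in for network-predicted kernels
denoised = apply_predicted_kernels(noisy, k)
```

Predicting a distinct kernel per pixel lets the network smooth flat regions aggressively while preserving edges, and the "deformable" variant additionally lets the sampling locations shift, which is what helps with color-channel motion artifacts.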

30 pages, 6342 KiB  
Article
Adversarial Gaussian Denoiser for Multiple-Level Image Denoising
by Aamir Khan, Weidong Jin, Amir Haider, MuhibUr Rahman and Desheng Wang
Sensors 2021, 21(9), 2998; https://doi.org/10.3390/s21092998 - 24 Apr 2021
Cited by 13 | Viewed by 3482
Abstract
Image denoising is a challenging task that is essential in numerous computer vision and image processing problems. This study proposes and applies a generative adversarial network (GAN)-based image denoising training architecture to multiple-level Gaussian image denoising tasks. Convolutional neural network-based denoising approaches suffer from a blurriness issue: the denoised images lose texture detail. To resolve this issue, we first performed a theoretical study of its cause. Subsequently, we proposed an adversarial Gaussian denoiser network, which uses a GAN-based adversarial learning process for image denoising tasks. This framework resolves the blurriness problem by encouraging the denoiser network to find the distribution of sharp noise-free images instead of blurry ones. Experimental results demonstrate that the proposed framework effectively resolves the blurriness problem and achieves better denoising performance than state-of-the-art denoising methods. Full article
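The mechanism can be sketched through the loss structure common to GAN-based denoisers (a hedged illustration, not the paper's exact formulation): the denoiser G minimizes a pixel reconstruction term, which alone causes blur, plus an adversarial term that rewards fooling a discriminator D trained on sharp clean images. The function name and weighting are assumptions for illustration.

```python
import numpy as np

def denoiser_loss(denoised, clean, d_score_on_denoised, lam=0.01):
    """L = ||G(noisy) - clean||^2 + lam * (-log D(G(noisy))).

    The first term pulls toward the pixel-wise mean (blurry); the second
    pushes the output toward the distribution of sharp images.
    """
    recon = np.mean((denoised - clean) ** 2)
    adv = -np.log(d_score_on_denoised + 1e-12)  # small when D is fooled
    return recon + lam * adv
```

Because averaging over plausible clean images minimizes the reconstruction term, a pixel-loss-only denoiser produces blur; the adversarial term penalizes exactly those averaged, implausible outputs, which is the intuition the abstract refers to.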

Review

34 pages, 3833 KiB  
Review
A Survey of Deep Convolutional Neural Networks Applied for Prediction of Plant Leaf Diseases
by Vijaypal Singh Dhaka, Sangeeta Vaibhav Meena, Geeta Rani, Deepak Sinwar, Kavita, Muhammad Fazal Ijaz and Marcin Woźniak
Sensors 2021, 21(14), 4749; https://doi.org/10.3390/s21144749 - 12 Jul 2021
Cited by 189 | Viewed by 11256
Abstract
In the modern era, deep learning techniques have emerged as powerful tools for image recognition, and Convolutional Neural Networks (CNNs) in particular have attained impressive results in this area. Applications such as identifying objects, faces, bones, handwritten digits, and traffic signs demonstrate the importance of CNNs in the real world. Their effectiveness in image recognition motivates researchers to extend their application to agriculture: recognition of plant species, yield management, weed detection, soil and water management, fruit counting, disease and pest detection, evaluating the nutrient status of plants, and much more. The volume of research on applying deep learning models in agriculture makes it difficult to select a suitable model for a given dataset and experimental environment. In this manuscript, the authors survey the existing literature on applying deep CNNs to predict plant diseases from leaf images. The manuscript presents a comparison of the pre-processing techniques, CNN models, frameworks, and optimization techniques applied to detect and classify plant diseases using leaf images as the dataset, together with a survey of the datasets and performance metrics used to evaluate model efficacy. It highlights the advantages and disadvantages of the different techniques and models proposed in the existing literature. This survey will ease the task of researchers working on deep learning techniques for the identification and classification of plant leaf diseases. Full article
