Special Issue "Developing New Methods of Computational Intelligence and Data Mining in Smart Sensors Environment"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (21 April 2021) | Viewed by 6673

Special Issue Editor

Dr. Rafal Scherer
Guest Editor
Institute of Computational Intelligence, Częstochowa University of Technology, Częstochowa, Poland
Interests: machine learning; neural networks; computer vision; artificial intelligence

Special Issue Information

Dear Colleagues,

Machine learning and computational intelligence methods, especially deep learning, can be used to create smart sensors that perform testing, classification, or prediction. The whole menagerie of sensors, including inductive proximity sensors, photoelectric retro-reflective sensors, ultrasonic sensors, and others, can benefit areas ranging from Industry 4.0 and automotive systems to smart offices, homes, and hospitals. The synergistic hyperconnectivity brought by the emergence of the IoT will further increase the applicability of such intelligent sensors. This Special Issue covers soft computing methods that enable in-sensor, edge, and similar computing for machine vision, data acquisition, or diagnostics. The methods covered include deep learning, fuzzy logic, evolutionary methods, and various data mining techniques.

Dr. Rafal Scherer
Guest Editor

Keywords

  • sensor networks
  • smart/intelligent sensors
  • sensor devices
  • sensor technology and application
  • sensing principles
  • Internet of Things
  • fuzzy logic
  • data mining
  • data fusion and deep learning in sensor systems

Published Papers (5 papers)


Research

Article
Transfer Learning Approach for Classification of Histopathology Whole Slide Images
Sensors 2021, 21(16), 5361; https://doi.org/10.3390/s21165361 - 09 Aug 2021
Cited by 9 | Viewed by 1145
Abstract
The classification of whole slide images (WSIs) provides physicians with an accurate analysis of diseases and also helps them to treat patients effectively. The classification can be linked to further detailed analysis and diagnosis. Deep learning (DL) has made significant advances in the medical industry, including the use of magnetic resonance imaging (MRI) scans, computerized tomography (CT) scans, and electrocardiograms (ECGs) to detect life-threatening diseases, including heart disease, cancer, and brain tumors. However, more advancement in the field of pathology is needed, but the main hurdle causing the slow progress is the shortage of large-labeled datasets of histopathology images to train the models. The Kimia Path24 dataset was particularly created for the classification and retrieval of histopathology images. It contains 23,916 histopathology patches with 24 tissue texture classes. A transfer learning-based framework is proposed and evaluated on two famous DL models, Inception-V3 and VGG-16. To improve the performance of Inception-V3 and VGG-16, we used their pre-trained weights and concatenated these with an image vector, which is used as input for the training of the same architecture. Experiments show that the proposed innovation improves the accuracy of both famous models. The patch-to-scan accuracy of VGG-16 is improved from 0.65 to 0.77, and for the Inception-V3, it is improved from 0.74 to 0.79.
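The key step the abstract describes, concatenating features from a frozen pretrained backbone with the raw image vector before training, can be sketched in a few lines. This is a minimal numpy illustration only: a fixed random projection stands in for the actual pretrained Inception-V3/VGG-16 weights, and all dimensions and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(image_vec, W_frozen):
    """Stand-in for a frozen pretrained backbone: fixed projection + ReLU."""
    return np.maximum(W_frozen @ image_vec, 0.0)

def fused_input(image_vec, W_frozen):
    """Concatenate backbone features with the raw image vector,
    forming the input that the classifier is then trained on."""
    return np.concatenate([extract_features(image_vec, W_frozen), image_vec])

# toy 64-dim "patch" and a frozen 32-unit projection (shapes invented)
img = rng.standard_normal(64)
W = rng.standard_normal((32, 64))
x = fused_input(img, W)
print(x.shape)  # (96,) = 32 backbone features + 64 raw values
```

In the paper the backbone is an actual pretrained network and the fused vector feeds further training; here the point is only the concatenation of frozen features with the raw input.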

Article
AI-Based Multi Sensor Fusion for Smart Decision Making: A Bi-Functional System for Single Sensor Evaluation in a Classification Task
Sensors 2021, 21(13), 4405; https://doi.org/10.3390/s21134405 - 27 Jun 2021
Viewed by 997
Abstract
Sensor fusion has gained a great deal of attention in recent years. It is used as an application tool in many different fields, especially the semiconductor, automotive, and medical industries. However, this field of research, regardless of the field of application, still presents different challenges concerning the choice of the sensors to be combined and the fusion architecture to be developed. To decrease application costs and engineering efforts, it is very important to analyze the sensors’ data beforehand once the application target is defined. This pre-analysis is a basic step to establish a working environment with fewer misclassification cases and high safety. One promising approach to do so is to analyze the system using deep neural networks. The disadvantages of this approach are mainly the large storage capacity required, the substantial training effort, and that these networks are difficult to interpret. In this paper, we focus on developing a smart and interpretable bi-functional artificial intelligence (AI) system, which has to discriminate the combined data regarding predefined classes. Furthermore, the system can evaluate the single source signals used in the classification task. The evaluation here covers each sensor contribution and robustness. More precisely, we train a smart and interpretable prototype-based neural network, which learns automatically to weight the influence of the sensors for the classification decision. Moreover, the prototype-based classifier is equipped with a reject option to measure classification certainty. To validate our approach’s efficiency, we refer to different industrial sensor fusion applications.
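The weighted-prototype decision with a reject option can be sketched as follows. This is a simplified numpy analogue of the prototype-based idea: the paper learns both the prototypes and the per-sensor relevance weights, whereas here they are hand-set, and the margin-based certainty rule is an illustrative assumption, not the paper's exact criterion.

```python
import numpy as np

def classify_with_reject(x, prototypes, labels, sensor_weights, theta=0.2):
    """Relevance-weighted nearest-prototype decision with a reject option.

    x              : fused sample, one value per sensor channel
    prototypes     : (n_prototypes, n_channels) prototype vectors
    labels         : class label of each prototype
    sensor_weights : non-negative relevance per channel
                     (large weight = that sensor matters for the decision)
    theta          : certainty threshold; small margins are rejected
    """
    d = np.sqrt((((prototypes - x) ** 2) * sensor_weights).sum(axis=1))
    order = np.argsort(d)
    best, second = d[order[0]], d[order[1]]
    margin = (second - best) / (second + best + 1e-12)  # in [0, 1]
    if margin < theta:
        return None  # too uncertain: reject instead of guessing
    return labels[order[0]]

protos = np.array([[0.0, 0.0], [10.0, 10.0]])
labs = np.array(["ok", "faulty"])
w = np.array([1.0, 1.0])
print(classify_with_reject(np.array([1.0, 1.0]), protos, labs, w))  # ok
print(classify_with_reject(np.array([5.0, 5.0]), protos, labs, w))  # None
```

A sample equidistant from both prototypes has zero margin and is rejected, which is the certainty-measurement role the abstract assigns to the reject option.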

Article
Energy Conservation for Internet of Things Tracking Applications Using Deep Reinforcement Learning
Sensors 2021, 21(9), 3261; https://doi.org/10.3390/s21093261 - 08 May 2021
Cited by 7 | Viewed by 1547
Abstract
The Internet of Things (IoT)-based target tracking system is required for applications such as smart farms, smart factories, and smart cities, where many sensor devices are jointly connected to collect the moving target positions. Each sensor device continuously runs on battery-operated power, consuming energy while perceiving target information in a particular environment. To reduce sensor device energy consumption in real-time IoT tracking applications, many traditional methods such as clustering, information-driven, and other approaches have previously been utilized to select the best sensor. However, applying machine learning methods, particularly deep reinforcement learning (Deep RL), to address the problem of sensor selection in tracking applications is quite demanding because of the limited sensor node battery lifetime. In this study, we propose a long short-term memory deep Q-network (DQN)-based Deep RL target tracking model to overcome the problem of energy consumption in IoT target applications. The proposed method is utilized to select the energy-efficient best sensor while tracking the target. The best sensor is defined by the minimum distance function (i.e., derived as the state), which leads to lower energy consumption. The simulation results show favorable features in terms of the best sensor selection and energy consumption.
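The reward shaping behind the sensor-selection idea (negative distance to the target, so the nearest sensor is cheapest to use) can be illustrated with a tabular, single-state analogue. The paper trains an LSTM-based DQN over moving targets; this numpy sketch is a bandit-style simplification, and the sensor positions, target position, and hyperparameters are all made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# three sensors on a line; the target sits at position 2.0,
# so sensor index 1 (at 2.5) is nearest and cheapest to use
sensor_pos = np.array([0.0, 2.5, 9.0])
target_pos = 2.0

n_actions = len(sensor_pos)
q = np.zeros(n_actions)          # single-state Q-table
alpha, eps = 0.1, 0.2            # learning rate, exploration rate

for step in range(500):
    # epsilon-greedy action selection
    a = rng.integers(n_actions) if rng.random() < eps else int(q.argmax())
    reward = -abs(sensor_pos[a] - target_pos)  # nearer sensor => less energy
    q[a] += alpha * (reward - q[a])            # bandit-style Q update

print(int(q.argmax()))  # 1 (the nearest sensor wins)
```

The Q-values converge toward each sensor's negative distance, so the greedy choice ends up on the nearest, lowest-energy sensor; the DQN in the paper generalizes this to states that track a moving target.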

Article
Fusion of Deep Convolutional Neural Networks for No-Reference Magnetic Resonance Image Quality Assessment
Sensors 2021, 21(4), 1043; https://doi.org/10.3390/s21041043 - 03 Feb 2021
Cited by 8 | Viewed by 1129
Abstract
The quality of magnetic resonance images may influence the diagnosis and subsequent treatment. Therefore, in this paper, a novel no-reference (NR) magnetic resonance image quality assessment (MRIQA) method is proposed. In the approach, deep convolutional neural network architectures are fused and jointly trained to better capture the characteristics of MR images. Then, to improve the quality prediction performance, the support vector machine regression (SVR) technique is employed on the features generated by fused networks. In the paper, several promising network architectures are introduced, investigated, and experimentally compared with state-of-the-art NR-IQA methods on two representative MRIQA benchmark datasets. One of the datasets is introduced in this work. As the experimental validation reveals, the proposed fusion of networks outperforms related approaches in terms of correlation with subjective opinions of a large number of experienced radiologists.
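The two-stage pipeline the abstract describes, feature-level fusion of two networks followed by a regressor trained on the fused features, can be sketched with synthetic data. Ordinary least squares stands in for the paper's SVR head here, and all shapes and the synthetic scores are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# stand-in features from two fused backbones (hypothetical 8-dim each)
feat_net_a = rng.standard_normal((100, 8))
feat_net_b = rng.standard_normal((100, 8))
fused = np.hstack([feat_net_a, feat_net_b])    # feature-level fusion

# synthetic "subjective quality scores", linear in the fused features
w_true = rng.standard_normal(16)
scores = fused @ w_true

# least-squares regressor as a stand-in for the paper's SVR head
w_hat, *_ = np.linalg.lstsq(fused, scores, rcond=None)
max_err = float(np.abs(fused @ w_hat - scores).max())
print(max_err < 1e-8)  # True: the head recovers the noiseless mapping
```

With real radiologist opinions the mapping is noisy and nonlinear, which is why the paper uses SVR rather than a plain linear fit; the sketch only shows how the fused feature matrix feeds the regression stage.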

Article
GhoMR: Multi-Receptive Lightweight Residual Modules for Hyperspectral Classification
Sensors 2020, 20(23), 6823; https://doi.org/10.3390/s20236823 - 29 Nov 2020
Cited by 2 | Viewed by 1018
Abstract
In recent years, hyperspectral images (HSIs) have attracted considerable attention in computer vision (CV) due to their wide utility in remote sensing. Unlike images with three or fewer channels, HSIs have a large number of spectral bands. Recent works demonstrate the use of modern deep learning based CV techniques like convolutional neural networks (CNNs) for analyzing HSI. CNNs have receptive fields (RFs) fueled by learnable weights, which are trained to extract useful features from images. In this work, a novel multi-receptive CNN module called GhoMR is proposed for HSI classification. GhoMR utilizes blocks containing several RFs, extracting features in a residual fashion. Each RF extracts features which are used by other RFs to extract more complex features in a hierarchical manner. However, the higher the number of RFs, the greater the number of associated weights, and thus the heavier the network. Most complex architectures suffer from this shortcoming. To tackle this, the recently introduced Ghost module is used as the basic building unit. Ghost modules address the feature redundancy in CNNs by extracting only limited features and performing cheap transformations on them, thus reducing the overall parameters in the network. To test the discriminative potential of GhoMR, a simple network called GhoMR-Net is constructed using GhoMR modules, and experiments are performed on three public HSI data sets: Indian Pines, University of Pavia, and Salinas Scene. The classification performance is measured using three metrics: overall accuracy (OA), Kappa coefficient (Kappa), and average accuracy (AA). Comparisons with ten state-of-the-art architectures are shown to demonstrate the effectiveness of the method further. Although lightweight, the proposed GhoMR-Net provides comparable or better performance than other networks. The PyTorch code for this study is made available at the iamarijit/GhoMR GitHub repository.
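The parameter saving that motivates the Ghost module can be checked by counting weights: a primary convolution produces only a fraction of the output maps, and cheap depthwise operations derive the rest. A sketch under the usual Ghost-module formulation (ratio s, d x d cheap filters); the channel counts and kernel sizes below are illustrative, not taken from GhoMR-Net.

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def ghost_params(c_in, c_out, k, s=2, d=3):
    """Ghost module: a primary conv makes c_out/s intrinsic feature
    maps, then cheap d x d depthwise ops derive the remaining
    (s - 1) * c_out/s "ghost" maps from them."""
    intrinsic = c_out // s
    primary = c_in * intrinsic * k * k
    cheap = intrinsic * (s - 1) * d * d   # depthwise: one filter per map
    return primary + cheap

c_in, c_out, k = 64, 128, 3
print(conv_params(c_in, c_out, k))   # 73728
print(ghost_params(c_in, c_out, k))  # 37440
```

With ratio s = 2 the module needs roughly half the weights of a plain convolution of the same output width, which is the redundancy argument the abstract summarizes.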
