Intelligent Sensors and Machine Learning

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (20 February 2023) | Viewed by 30611

Special Issue Editors


Guest Editor
Department of Electrical and Computer Engineering, Marquette University, Milwaukee, WI 53233, USA
Interests: machine learning applied to optimization in multicore processors and datacenters; embedded systems; environment monitoring; IoT security

Guest Editor
Department of Electrical & Computer Engineering, Marquette University, 1551 W. Wisconsin Ave., Milwaukee, WI 53233, USA
Interests: computer vision; robotic vision and vision for autonomous vehicles; wireless sensor/camera networks; vision-based distributed target tracking; object detection and recognition

Special Issue Information

Dear Colleagues,

This Special Issue focuses on the intelligent processing of sensor data via both edge and high-performance computing. With the advent of the Internet of Things, sensor data are generated at a rate of petabytes per day. Given this volume, intelligent processing is needed near the sensor using edge computing. At the same time, advances in high-performance computing allow large datasets to be processed for training machine learning algorithms. In particular, deep learning paradigms enable sophisticated transformation of sensor data into usable information. This Special Issue invites papers that describe the use of machine learning techniques to process sensor data via both edge and high-performance computing.

Prof. Dr. Richard J. Povinelli
Dr. Cristinel Ababei
Dr. Henry Medeiros
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Smart Sensors
  • Machine Learning
  • Artificial Neural Networks
  • Deep Learning
  • Signal Processing
  • Edge Computing

Published Papers (12 papers)


Research

21 pages, 3857 KiB  
Article
Implementation of a Hybrid Intelligence System Enabling the Effectiveness Assessment of Interaction Channels Use in HMI
by Arkadiusz Gardecki, Joanna Rut, Bartlomiej Klin, Michal Podpora and Ryszard Beniak
Sensors 2023, 23(8), 3826; https://doi.org/10.3390/s23083826 - 08 Apr 2023
Cited by 1 | Viewed by 2290
Abstract
The article presents the novel idea of an Interaction Quality Sensor (IQS), introduced as part of the complete Hybrid INTelligence (HINT) architecture for intelligent control systems. The proposed system is designed to use and prioritize multiple information channels (speech, images, videos) in order to optimize the information flow efficiency of interaction in HMI systems. The proposed architecture is implemented and validated in a real-world application of training unskilled workers: new employees with lower competencies and/or a language barrier. With the help of the HINT system, the man–machine communication channels are deliberately chosen based on IQS readouts to enable an untrained, inexperienced, foreign employee candidate to become a good worker, while not requiring the presence of either an interpreter or an expert during training. The proposed implementation is in line with labor market trends, which display significant fluctuations. The HINT system is designed to activate human resources and support organizations/enterprises in the effective assimilation of employees to the tasks performed on the production assembly line. The market need to solve this noticeable problem arises from the large migration of employees within (and between) enterprises. The research results presented in the work show significant benefits of the methods used, while supporting multilingualism and optimizing the preselection of information channels.
(This article belongs to the Special Issue Intelligent Sensors and Machine Learning)

18 pages, 3711 KiB  
Article
Application of Neural Network in Predicting H2S from an Acid Gas Removal Unit (AGRU) with Different Compositions of Solvents
by Mohd Hakimi, Madiah Binti Omar and Rosdiazli Ibrahim
Sensors 2023, 23(2), 1020; https://doi.org/10.3390/s23021020 - 16 Jan 2023
Cited by 5 | Viewed by 2147
Abstract
The gas sweetening process removes hydrogen sulfide (H2S) in an acid gas removal unit (AGRU) to meet the gas sales specification, known as sweet gas. Monitoring the concentration of H2S in sweet gas is crucial to avoid operational and environmental issues. This study shows the capability of artificial neural networks (ANN) to predict the concentration of H2S in sweet gas. The concentrations of N-methyldiethanolamine (MDEA) and piperazine (PZ), temperature, and pressure were used as inputs, and the concentration of H2S in sweet gas as the output, to create the ANN network. Two distinct backpropagation techniques with various transfer functions and numbers of neurons were used to train the ANN models. Multiple linear regression (MLR) was used to compare the outcomes of the ANN models. The models' performance was assessed using the mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R2). The findings demonstrate that the ANN trained by the Levenberg–Marquardt technique, equipped with a logistic sigmoid (logsig) transfer function and three neurons, achieved the highest R2 (0.966) and the lowest MAE (0.066) and RMSE (0.122) values. The findings suggest that an ANN can be a reliable and accurate method for predicting the concentration of H2S in sweet gas.
(This article belongs to the Special Issue Intelligent Sensors and Machine Learning)
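
As a rough illustration of the kind of model the abstract describes, the sketch below fits a small feed-forward regressor with three hidden neurons and a logistic (logsig) activation and reports MAE, RMSE, and R2. It is not the authors' model or data: the feature matrix is synthetic placeholder data standing in for the AGRU inputs (MDEA, PZ, temperature, pressure), and scikit-learn offers no Levenberg–Marquardt trainer, so L-BFGS is used instead.

    # Hedged sketch, not the authors' code: synthetic stand-in data and an L-BFGS
    # trainer in place of Levenberg-Marquardt, which scikit-learn does not provide.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

    rng = np.random.default_rng(0)
    X = rng.uniform(size=(500, 4))          # placeholder columns: MDEA, PZ, temperature, pressure
    y = 0.8 * X[:, 0] + 0.5 * X[:, 2] + 0.05 * rng.standard_normal(500)  # placeholder target (H2S)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    ann = MLPRegressor(hidden_layer_sizes=(3,),   # three neurons, as in the abstract
                       activation="logistic",     # logsig transfer function
                       solver="lbfgs", max_iter=5000, random_state=0)
    ann.fit(X_tr, y_tr)

    pred = ann.predict(X_te)
    print("MAE ", mean_absolute_error(y_te, pred))
    print("RMSE", np.sqrt(mean_squared_error(y_te, pred)))
    print("R2  ", r2_score(y_te, pred))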

20 pages, 1912 KiB  
Article
Comparison of the Usability of Apple M1 Processors for Various Machine Learning Tasks
by David Kasperek, Michal Podpora and Aleksandra Kawala-Sterniuk
Sensors 2022, 22(20), 8005; https://doi.org/10.3390/s22208005 - 20 Oct 2022
Cited by 2 | Viewed by 3436
Abstract
In this paper, the authors have compared all of the currently available Apple MacBook Pro laptops in terms of their usability for basic machine learning research applications (text-based, vision-based, tabular). The paper presents four tests/benchmarks, comparing four Apple MacBook Pro laptop versions: one Intel-based (i5) and three Apple-silicon-based (M1, M1 Pro, and M1 Max). A script in the Swift programming language was prepared, whose goal was to conduct the training and evaluation process for four machine learning (ML) models. It used the Create ML framework, Apple's solution dedicated to ML model creation on macOS devices. The training and evaluation processes were performed three times. While running, the script measured their performance, including the execution times. The results were compared with each other in tables, which made it possible to compare and discuss the performance of the individual devices and the benefits of their specific hardware architectures.
(This article belongs to the Special Issue Intelligent Sensors and Machine Learning)
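
For readers interested in the benchmarking methodology rather than the hardware, the toy sketch below shows the repeated train-and-time pattern the abstract describes. The original benchmark was a Swift script using Create ML; here a placeholder train_and_evaluate() function merely stands in for one training-plus-evaluation pass.

    # Minimal timing-harness sketch (assumption: the real workload is a Create ML
    # training run in Swift; the function below is only a placeholder workload).
    import time, statistics

    def train_and_evaluate():
        sum(i * i for i in range(2_000_000))   # placeholder for model training + evaluation

    runs = []
    for _ in range(3):                         # each process was performed three times
        t0 = time.perf_counter()
        train_and_evaluate()
        runs.append(time.perf_counter() - t0)

    print(f"mean = {statistics.mean(runs):.3f} s, stdev = {statistics.stdev(runs):.3f} s")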

19 pages, 26735 KiB  
Article
A Fast Weighted Fuzzy C-Medoids Clustering for Time Series Data Based on P-Splines
by Jiucheng Xu, Qinchen Hou, Kanglin Qu, Yuanhao Sun and Xiangru Meng
Sensors 2022, 22(16), 6163; https://doi.org/10.3390/s22166163 - 17 Aug 2022
Cited by 1 | Viewed by 1392
Abstract
The rapid growth of digital information has produced massive amounts of time series data with rich features, and most time series data are noisy and contain outlier samples, which leads to a decline in clustering quality. To efficiently discover the hidden statistical information in the data, a fast weighted fuzzy C-medoids clustering algorithm based on P-splines (PS-WFCMdd) is proposed for time series datasets in this study. Specifically, the P-spline method is used to fit the functional data related to the original time series data, and the obtained smoothed data are used as the input of the clustering algorithm to enhance the ability to process the dataset during the clustering process. Then, we define a new weighting method to further avoid the influence of outlier sample points in the weighted fuzzy C-medoids clustering process, to improve the robustness of our algorithm. We propose using the third version of Mueen's algorithm for similarity search (MASS 3) to measure the similarity between time series quickly and accurately, to further improve the clustering efficiency. Our new algorithm is compared with several other time series clustering algorithms, and its performance is evaluated experimentally on different types of time series examples. The experimental results show that our new method can speed up data processing and that its comprehensive performance on each clustering evaluation index is relatively good.
(This article belongs to the Special Issue Intelligent Sensors and Machine Learning)
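
MASS itself is a published, general-purpose similarity-search primitive, so a compact sketch of its core idea may help: an FFT-based sliding dot product converted into a z-normalized Euclidean distance profile. This is the textbook formulation, not the authors' MASS 3 variant or their weighted fuzzy C-medoids code, and the time series below is synthetic.

    # Hedged sketch of the MASS idea (FFT sliding dot product -> z-normalized distance
    # profile); not the MASS 3 implementation used in the paper.
    import numpy as np
    from scipy.signal import fftconvolve

    def mass_distance_profile(query, series):
        m = len(query)
        qt = fftconvolve(series, query[::-1], mode="valid")        # sliding dot products
        csum = np.cumsum(np.insert(series, 0, 0.0))
        csum2 = np.cumsum(np.insert(series ** 2, 0, 0.0))
        mu_t = (csum[m:] - csum[:-m]) / m                          # rolling window means
        sig_t = np.sqrt((csum2[m:] - csum2[:-m]) / m - mu_t ** 2)  # rolling window stds
        mu_q, sig_q = query.mean(), query.std()
        corr = (qt - m * mu_q * mu_t) / (m * sig_q * sig_t)
        return np.sqrt(np.maximum(2 * m * (1.0 - corr), 0.0))      # z-normalized distances

    rng = np.random.default_rng(0)
    ts = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)
    profile = mass_distance_profile(ts[100:150], ts)
    print(profile.argmin())                                        # best match starts at index 100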

18 pages, 2829 KiB  
Article
Sensor Screening Methodology for Virtually Sensing Transmission Input Loads of a Wind Turbine Using Machine Learning Techniques and Drivetrain Simulations
by Baher Azzam, Ralf Schelenz and Georg Jacobs
Sensors 2022, 22(10), 3659; https://doi.org/10.3390/s22103659 - 11 May 2022
Cited by 6 | Viewed by 1736
Abstract
The ongoing trend of building larger wind turbines (WT) to reach greater economies of scale is contributing to the reduction in the cost of wind energy, as well as pushing WT drivetrain input loads into uncharted territory. The resulting intensification of the load situation within the WT gearbox motivates the need to monitor WT transmission input loads. However, due to the high costs of direct measurement solutions, more economical solutions, such as virtual sensing of transmission input loads using stationary sensors mounted on the gearbox housing or other drivetrain locations, are of interest. As the number, type, and location of sensors needed for a virtual sensing solution can vary considerably in cost, in this investigation, we aimed to identify optimal sensor locations for virtually sensing the WT 6-degree-of-freedom (6-DOF) transmission input loads. Random forest (RF) models were designed and applied to a dataset containing simulated operational data of a Vestas V52 WT multibody simulation model subjected to simulated wind fields. The dataset contained the 6-DOF transmission input loads and signals from potential sensor locations covering deformations, misalignments, and rotational speeds at various drivetrain locations. The RF models were used to identify the sensor locations with the highest impact on the accuracy of virtual load sensing following a known statistical test, in order to prioritize and reduce the number of needed input signals. The performance of the models was assessed before and after reducing the number of required input signals. By allowing for a screening of sensors prior to real-world tests, the results demonstrate the high promise of the proposed method for optimizing the cost of future virtual WT transmission load sensors.
(This article belongs to the Special Issue Intelligent Sensors and Machine Learning)
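
As a loose analogue of the screening step, the sketch below trains a random forest on synthetic "sensor" channels and ranks them by permutation importance. The data, channel count, and importance test are placeholders; the paper uses simulated Vestas V52 drivetrain signals and a specific statistical test that this sketch does not reproduce.

    # Hedged sketch: rank candidate sensor channels by permutation importance.
    # Synthetic data; not the authors' simulation dataset or statistical test.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.standard_normal((2000, 12))            # 12 hypothetical sensor channels
    y = 3.0 * X[:, 0] - 2.0 * X[:, 4] + 0.5 * rng.standard_normal(2000)  # one load component

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

    imp = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
    ranking = np.argsort(imp.importances_mean)[::-1]
    print("channels ranked by importance:", ranking)   # channels 0 and 4 should lead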

17 pages, 771 KiB  
Article
A Neural Network-Based Model for Predicting Saybolt Color of Petroleum Products
by Nurliana Farhana Salehuddin, Madiah Binti Omar, Rosdiazli Ibrahim and Kishore Bingi
Sensors 2022, 22(7), 2796; https://doi.org/10.3390/s22072796 - 06 Apr 2022
Cited by 17 | Viewed by 2710
Abstract
Saybolt color is a standard measurement scale used to determine the quality of petroleum products and the appropriate refinement process. However, the current color measurement methods are mostly laboratory-based, thereby consuming much time and being costly. Hence, we designed an automated model based on an artificial neural network to predict Saybolt color. The network has been built with five input variables (density, kinematic viscosity, sulfur content, cetane index, and total acid number) and one output, i.e., Saybolt color. Two backpropagation algorithms with different transfer functions and numbers of neurons were tested. Mean absolute error (MAE), root mean square error (RMSE), and the coefficient of determination (R2) were used to assess the performance of the developed model. Additionally, the results of the ANN model are compared with multiple linear regression (MLR). The results demonstrate that the ANN with the Levenberg–Marquardt algorithm, tangent sigmoid transfer function, and three neurons achieved the highest performance (R2 = 0.995, MAE = 1.000, and RMSE = 1.658) in predicting the Saybolt color. The ANN model appeared to be superior to MLR (R2 = 0.830). Hence, this shows the potential of the ANN model as an effective method with which to predict Saybolt color in real time.
(This article belongs to the Special Issue Intelligent Sensors and Machine Learning)

20 pages, 47182 KiB  
Article
Word Embedding Distribution Propagation Graph Network for Few-Shot Learning
by Chaoran Zhu, Ling Wang and Cheng Han
Sensors 2022, 22(7), 2648; https://doi.org/10.3390/s22072648 - 30 Mar 2022
Viewed by 1800
Abstract
Few-shot learning (FSL) is of great significance to the field of machine learning. The ability to learn and generalize using a small number of samples is an obvious distinction between artificial intelligence and humans. In the FSL domain, most graph neural networks (GNNs) focus on transferring labeled sample information to an unlabeled query sample, ignoring the important role of semantic information during the classification process. Our proposed method embeds semantic information of classes into a GNN, creating a word embedding distribution propagation graph network (WPGN) for FSL. We merge the attention mechanism with our backbone network, use the Mahalanobis distance to calculate the similarity of classes, select the Funnel ReLU (FReLU) function as the activation function of the Transform layer, and update the point graph and word embedding distribution graph. In extensive experiments on FSL benchmarks, compared with the baseline model, the accuracy of the WPGN on the 5-way 1/2/5-shot tasks increased by 9.03%, 4.56%, and 4.15%, respectively.
(This article belongs to the Special Issue Intelligent Sensors and Machine Learning)
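
Since the abstract singles out the Mahalanobis distance for comparing classes, a tiny sketch of that distance on toy embeddings is given below. The embeddings and covariance are random placeholders, not WPGN features, and the graph-propagation part of the method is not shown.

    # Hedged sketch of a Mahalanobis distance between class prototypes on toy embeddings.
    import numpy as np

    def mahalanobis(u, v, cov):
        vi = np.linalg.pinv(cov)            # pseudo-inverse for numerical safety
        d = u - v
        return float(np.sqrt(d @ vi @ d))

    rng = np.random.default_rng(0)
    support = rng.standard_normal((25, 8))  # toy 5-way 5-shot support embeddings (8-dim)
    cov = np.cov(support, rowvar=False)
    proto_a = support[:5].mean(axis=0)      # prototype of class A
    proto_b = support[5:10].mean(axis=0)    # prototype of class B
    print(mahalanobis(proto_a, proto_b, cov))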

15 pages, 18772 KiB  
Article
Few-Shot Object Detection Using Multimodal Sensor Systems of Unmanned Surface Vehicles
by Bowei Hong, Yuandong Zhou, Huacheng Qin, Zhiqiang Wei, Hao Liu and Yongquan Yang
Sensors 2022, 22(4), 1511; https://doi.org/10.3390/s22041511 - 15 Feb 2022
Cited by 2 | Viewed by 2231
Abstract
The object detection algorithm is a key component for the autonomous operation of unmanned surface vehicles (USVs). However, owing to complex marine conditions, it is difficult to obtain large-scale, fully labeled surface object datasets. Shipborne sensors are often susceptible to external interference and have unsatisfactory performance, compromising the results of traditional object detection tasks. In this paper, a few-shot surface object detection method is proposed based on multimodal sensor systems for USVs. The multimodal sensors were used for three-dimensional object detection, and the ability of USVs to detect moving objects was enhanced, realizing metric learning-based few-shot object detection for USVs. Compared with conventional methods, the proposed method enhanced the classification results of few-shot tasks. The proposed approach achieves relatively better performance on three sampled sets of well-known datasets, i.e., 2%, 10%, and 5% on average precision (AP) and 28%, 24%, and 24% on average orientation similarity (AOS). Therefore, this study can potentially be used for various applications where the amount of labeled data is not enough to acquire a promising result.
(This article belongs to the Special Issue Intelligent Sensors and Machine Learning)

17 pages, 2049 KiB  
Article
Deep Learning versus Spectral Techniques for Frequency Estimation of Single Tones: Reduced Complexity for Software-Defined Radio and IoT Sensor Communications
by Hind R. Almayyali and Zahir M. Hussain
Sensors 2021, 21(8), 2729; https://doi.org/10.3390/s21082729 - 13 Apr 2021
Cited by 8 | Viewed by 2669
Abstract
Despite the increasing role of machine learning in various fields, very few works have considered artificial intelligence for frequency estimation (FE). This work presents a comprehensive analysis of a deep-learning (DL) approach for frequency estimation of single tones. A DL network with two layers having a few nodes can estimate frequency more accurately than well-known classical techniques can. While filling the gap in the existing literature, the study is comprehensive, analyzing errors under different signal-to-noise ratios (SNRs), numbers of nodes, and numbers of input samples under missing SNR information. DL-based FE is not significantly affected by SNR bias or the number of nodes. A DL-based approach can work properly using a minimal number of input nodes N at which classical methods fail. DL could use as few as two layers with two or three nodes each, with a complexity of O(N), compared with discrete Fourier transform (DFT)-based FE with O(N log2 N) complexity. Furthermore, a smaller N is required for DL. Therefore, DL can significantly reduce FE complexity, memory cost, and power consumption, which is attractive for resource-limited systems such as some Internet of Things (IoT) sensor applications. Reduced complexity also opens the door for hardware-efficient implementation using short-word-length (SWL) or time-efficient software-defined radio (SDR) communications.
(This article belongs to the Special Issue Intelligent Sensors and Machine Learning)
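
For context on the classical baseline mentioned in the abstract, the sketch below implements the usual DFT-based single-tone estimator: take N noisy samples and pick the peak of the magnitude spectrum, at O(N log N) cost and with resolution limited to fs/N. The deep-learning estimator studied in the paper is a separate trained network and is not reproduced here.

    # Sketch of DFT-based single-tone frequency estimation (the classical baseline).
    import numpy as np

    def dft_frequency_estimate(x, fs):
        spectrum = np.abs(np.fft.rfft(x))               # O(N log N)
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        return freqs[np.argmax(spectrum)]               # peak bin, resolution fs / len(x)

    fs, n, f_true, snr_db = 1000.0, 64, 123.0, 10.0
    t = np.arange(n) / fs
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(n) * 10 ** (-snr_db / 20) / np.sqrt(2)   # unit-amplitude tone
    print(dft_frequency_estimate(np.sin(2 * np.pi * f_true * t) + noise, fs))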

16 pages, 1574 KiB  
Article
Kernel Probabilistic K-Means Clustering
by Bowen Liu, Ting Zhang, Yujian Li, Zhaoying Liu and Zhilin Zhang
Sensors 2021, 21(5), 1892; https://doi.org/10.3390/s21051892 - 08 Mar 2021
Cited by 15 | Viewed by 2785
Abstract
Kernel fuzzy c-means (KFCM) is a significantly improved version of fuzzy c-means (FCM) for processing linearly inseparable datasets. However, for fuzzification parameter m=1, the KFCM problem cannot be solved by Lagrangian optimization. To solve this problem, an equivalent model, called kernel probabilistic k-means (KPKM), is proposed here. The novel model relates KFCM to kernel k-means (KKM) in a unified mathematical framework. Moreover, the proposed KPKM can be addressed by the active gradient projection (AGP) method, which is a nonlinear programming technique with constraints of linear equalities and linear inequalities. To accelerate the AGP method, a fast AGP (FAGP) algorithm was designed. The proposed FAGP uses a maximum-step strategy to estimate the step length, and uses an iterative method to update the projection matrix. Experiments demonstrated the effectiveness of the proposed method through a performance comparison of KPKM with KFCM, KKM, FCM, and k-means. Experiments showed that the proposed KPKM is able to find nonlinearly separable structures in synthetic datasets. Ten real UCI datasets were used in this study, and KPKM had better clustering performance on at least six datasets. The proposed fast AGP requires less running time than the original AGP, and it reduced running time by 76–95% on real datasets.
(This article belongs to the Special Issue Intelligent Sensors and Machine Learning)
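
The abstract positions KPKM relative to kernel k-means (KKM), so a minimal plain kernel k-means sketch is included below to make the kernel-space distance concrete. It is not the proposed KPKM model, nor the AGP/FAGP solver, and the data are two synthetic blobs.

    # Hedged sketch of plain kernel k-means (KKM), for context only; not KPKM or AGP/FAGP.
    import numpy as np

    def rbf_kernel(X, gamma=1.0):
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)

    def kernel_kmeans(K, k, n_iter=50, seed=0):
        labels = np.random.default_rng(seed).integers(0, k, size=K.shape[0])
        for _ in range(n_iter):
            dist = np.empty((K.shape[0], k))
            for c in range(k):
                mask = labels == c
                nc = max(mask.sum(), 1)
                # ||phi(x_i) - mean_c||^2 = K_ii - 2/|C| sum_j K_ij + 1/|C|^2 sum_jl K_jl
                dist[:, c] = (np.diag(K)
                              - 2.0 * K[:, mask].sum(axis=1) / nc
                              + K[np.ix_(mask, mask)].sum() / nc ** 2)
            new_labels = dist.argmin(axis=1)
            if np.array_equal(new_labels, labels):
                break
            labels = new_labels
        return labels

    rng = np.random.default_rng(0)
    X = np.vstack([rng.standard_normal((50, 2)), rng.standard_normal((50, 2)) + 4.0])
    print(np.bincount(kernel_kmeans(rbf_kernel(X, gamma=0.5), k=2)))   # ~[50, 50]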

15 pages, 3406 KiB  
Article
Non-Communication Decentralized Multi-Robot Collision Avoidance in Grid Map Workspace with Double Deep Q-Network
by Lin Chen, Yongting Zhao, Huanjun Zhao and Bin Zheng
Sensors 2021, 21(3), 841; https://doi.org/10.3390/s21030841 - 27 Jan 2021
Cited by 10 | Viewed by 2569
Abstract
This paper presents a novel decentralized multi-robot collision avoidance method based on deep reinforcement learning, which is not only suitable for large-scale multi-robot systems in grid-map workspaces but also directly processes Lidar signals instead of relying on communication between the robots. Taking into account the particularities of the workspace, we handcrafted a reward function that considers both collision avoidance among the robots and keeping changes of direction during driving as small as possible. Using a Double Deep Q-Network (DDQN), the policy was trained in a simulated grid-map workspace. Through designed experiments, we demonstrated that the learned policy can effectively guide a robot from its initial position to its goal position in the grid-map workspace while avoiding collisions with others during driving.
(This article belongs to the Special Issue Intelligent Sensors and Machine Learning)
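
To make the learning rule behind DDQN concrete, the sketch below computes the Double DQN bootstrap target, in which the online network selects the next action and the target network evaluates it. The Q-values, rewards, and done flags are toy arrays; the paper's Lidar preprocessing, reward shaping, and network architecture are not reproduced.

    # Hedged sketch of the Double DQN target:
    # y = r + gamma * (1 - done) * Q_target(s', argmax_a Q_online(s', a))
    import numpy as np

    def ddqn_targets(q_online_next, q_target_next, rewards, dones, gamma=0.99):
        best_actions = q_online_next.argmax(axis=1)                          # selection: online net
        next_values = q_target_next[np.arange(len(rewards)), best_actions]   # evaluation: target net
        return rewards + gamma * (1.0 - dones) * next_values

    rng = np.random.default_rng(0)
    q_online_next = rng.standard_normal((4, 5))     # batch of 4 states, 5 discrete actions
    q_target_next = rng.standard_normal((4, 5))
    rewards = np.array([0.0, -1.0, 0.0, 1.0])       # e.g., step, collision, step, goal (toy values)
    dones = np.array([0.0, 1.0, 0.0, 1.0])
    print(ddqn_targets(q_online_next, q_target_next, rewards, dones))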

10 pages, 857 KiB  
Communication
Iterative Min Cut Clustering Based on Graph Cuts
by Bowen Liu, Zhaoying Liu, Yujian Li, Ting Zhang and Zhilin Zhang
Sensors 2021, 21(2), 474; https://doi.org/10.3390/s21020474 - 11 Jan 2021
Cited by 3 | Viewed by 2920
Abstract
Clustering nonlinearly separable datasets is always an important problem in unsupervised machine learning. Graph cut models provide good clustering results for nonlinearly separable datasets, but solving graph cut models is an NP-hard problem. A novel graph-based clustering algorithm is proposed for nonlinearly separable datasets. The proposed method solves the min cut model by iteratively computing only one simple formula. Experimental results on synthetic and benchmark datasets indicate the potential of the proposed method, which is able to cluster nonlinearly separable datasets with less running time.
(This article belongs to the Special Issue Intelligent Sensors and Machine Learning)
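
The abstract does not spell out the paper's single iterative formula, so no attempt is made to reproduce it here. For orientation only, the sketch below shows a standard spectral relaxation of the two-way ratio cut (splitting on the sign of the Fiedler vector of the graph Laplacian), a common baseline for graph-cut clustering, applied to synthetic blobs.

    # Hedged baseline sketch: spectral relaxation of a two-way graph cut (Fiedler vector).
    # This is NOT the paper's iterative min cut formula.
    import numpy as np

    def spectral_two_way_cut(X, gamma=1.0):
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-gamma * sq)                     # RBF similarity graph
        np.fill_diagonal(W, 0.0)
        L = np.diag(W.sum(axis=1)) - W              # unnormalized graph Laplacian
        _, vecs = np.linalg.eigh(L)
        fiedler = vecs[:, 1]                        # eigenvector of the 2nd-smallest eigenvalue
        return (fiedler > 0).astype(int)            # sign split approximates the cut

    rng = np.random.default_rng(0)
    X = np.vstack([rng.standard_normal((40, 2)), rng.standard_normal((40, 2)) + 5.0])
    print(np.bincount(spectral_two_way_cut(X, gamma=0.5)))   # ~[40, 40]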
