Special Issue "Intelligent Sensor Signal in Machine Learning"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 31 August 2019.

Special Issue Editors

Guest Editor
Prof. Dr. ByoungChul Ko

Dept. of Computer Engineering, Keimyung University, Shindang-Dong, Dalseo-Gu, Daegu 704-701, Korea
Phone: +82-10-3559-4564
Interests: advanced driver assistant system; human detection and tracking (thermal); analysis of remote sensing images; human action recognition; fire and smoke detection; medical image processing
Guest Editor
Dr. Deokwoo Lee

Dept. of Computer Engineering, Keimyung University, Shindang-Dong, Dalseo-Gu, Daegu 704-701, Korea
Interests: image processing; computer vision; pattern recognition

Special Issue Information

Dear Colleagues,

With the advancement of sensor technology, research has been actively carried out to fuse sensor signals and to extract useful information for various recognition problems based on machine learning. Signals are now routinely obtained from a wide range of sensors, such as wearable sensors, mobile sensors, cameras, heart rate monitors, EEG head-caps and headbands, ECG sensors, breathing monitors, EMG sensors, and temperature sensors. However, because a raw sensor signal carries no meaning by itself, it must be combined with machine learning algorithms that process the signal and support the resulting decisions. Machine learning, including deep learning, is therefore well suited to these challenging tasks.

The purpose of this Special Issue is to present current developments in intelligent sensor applications and innovative sensor fusion techniques combined with machine learning, including computer vision, pattern recognition, expert systems, and deep learning. You are invited to submit contributions of original research, advances, developments, and experiments pertaining to machine learning combined with sensors. This Special Issue therefore welcomes newly developed methods and ideas that combine data obtained from various sensors in the following fields (but not limited to these):

  • Sensor fusion techniques based on machine learning
  • Sensors and big data analysis with machine learning
  • Autonomous vehicle technologies combining sensors and machine learning
  • Wireless sensor networks and communication based on machine learning
  • Deep network structure/learning algorithm for intelligent sensing
  • Autonomous robotics with intelligent sensors and machine learning 
  • Multi-modal/task learning for decision-making and control
  • Decision algorithms for autonomous driving
  • Machine learning and artificial intelligence for traffic/quality of experience management in IoT
  • Fuzzy fusion of sensors, data, and information
  • Machine learning for IoT and sensor research challenges
  • Advanced driver assistant systems (ADAS) based on machine learning
  • State-of-the-practice reports, research overviews, experience reports, industrial experiments, and case studies in intelligent sensors or the IoT

Prof. Dr. ByoungChul Ko
Dr. Deokwoo Lee
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (8 papers)


Research

Open Access Article
Logistic Regression for Machine Learning in Process Tomography
Sensors 2019, 19(15), 3400; https://doi.org/10.3390/s19153400
Received: 28 June 2019 / Revised: 26 July 2019 / Accepted: 1 August 2019 / Published: 2 August 2019
Abstract
The main goal of the research presented in this paper was to develop a refined machine learning algorithm for industrial tomography applications. The article presents algorithms based on logistic regression in relation to image reconstruction using electrical impedance tomography (EIT) and ultrasound transmission tomography (UST). The test object was a tank filled with water in which reconstructed objects were placed. For both EIT and UST, a novel approach was used in which each pixel of the output image was reconstructed by a separately trained prediction system. Therefore, it was necessary to use many predictive systems, the number of which corresponds to the number of pixels of the output image. Thanks to this approach, the under-determined problem was changed to an over-determined one. To reduce the number of predictors in logistic regression by removing irrelevant and mutually correlated entries, the elastic net method was used. The developed algorithm, which reconstructs images pixel by pixel, is insensitive to the shape, number, and position of the reconstructed objects. In order to assess the quality of mappings obtained with the new algorithm, appropriate metrics were used: compatibility ratio (CR) and relative error (RE). The obtained results enabled an assessment of the usefulness of logistic regression in the reconstruction of EIT and UST images.
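
To make the per-pixel formulation concrete, here is a minimal Python sketch of the idea under stated assumptions: one independently trained logistic-regression model per output pixel, with scikit-learn's elastic-net penalty standing in for the paper's predictor-reduction step. The measurement count, image size, and synthetic data are illustrative, not the authors' setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

n_meas, img_side, n_train = 96, 8, 400   # assumed sizes, for illustration only

rng = np.random.default_rng(0)
X = rng.normal(size=(n_train, n_meas))                       # boundary measurements
Y = rng.integers(0, 2, size=(n_train, img_side * img_side))  # binary pixel labels

# One predictor per pixel; the 'saga' solver supports the elastic-net penalty,
# which drops irrelevant and mutually correlated measurement entries.
models = []
for p in range(img_side * img_side):
    clf = LogisticRegression(penalty="elasticnet", solver="saga",
                             l1_ratio=0.5, max_iter=300)
    models.append(clf.fit(X, Y[:, p]))

def reconstruct(frame):
    """Predict every pixel independently and reassemble the image."""
    pixels = [m.predict(frame.reshape(1, -1))[0] for m in models]
    return np.array(pixels).reshape(img_side, img_side)

image = reconstruct(rng.normal(size=n_meas))
```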

Open Access Article
A Combined Offline and Online Algorithm for Real-Time and Long-Term Classification of Sheep Behaviour: Novel Approach for Precision Livestock Farming
Sensors 2019, 19(14), 3201; https://doi.org/10.3390/s19143201
Received: 30 June 2019 / Revised: 16 July 2019 / Accepted: 18 July 2019 / Published: 20 July 2019
Abstract
Real-time and long-term behavioural monitoring systems in precision livestock farming have huge potential to improve welfare and productivity for the better health of farm animals. However, some of the biggest challenges for long-term monitoring systems relate to “concept drift”, which occurs when systems are presented with challenging new or changing conditions, and/or in scenarios where training data are not accurately reflective of live sensed data. This study presents a combined offline and online learning algorithm which deals with concept drift and which the authors deem a useful mechanism for long-term in-the-field monitoring systems. The proposed algorithm classifies three relevant sheep behaviours using information from an embedded edge device that includes tri-axial accelerometer and tri-axial gyroscope sensors. The proposed approach is reported for the first time in precision livestock behaviour monitoring and demonstrates improvement in classifying relevant behaviour in sheep, in real time, under dynamically changing conditions.
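
As a rough illustration of pairing an offline-trained model with online updates, the sketch below uses scikit-learn's `SGDClassifier` and `partial_fit`; the feature layout, class set, and update trigger are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

classes = np.array([0, 1, 2])            # e.g., three sheep behaviours (assumed)

# Offline phase: fit on labelled windows of accelerometer/gyroscope features.
rng = np.random.default_rng(1)
X_off = rng.normal(size=(1000, 6))       # e.g., per-window stats of 3-axis acc + gyro
y_off = rng.integers(0, 3, size=1000)
model = SGDClassifier(loss="log_loss")
model.partial_fit(X_off, y_off, classes=classes)

# Online phase: incremental updates as newly labelled windows arrive in the
# field, letting the decision boundary track concept drift over time.
def online_update(model, x_new, y_new):
    model.partial_fit(x_new.reshape(1, -1), np.array([y_new]))
    return model

model = online_update(model, rng.normal(size=6), 2)
```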

Open Access Article
The Novel Sensor Network Structure for Classification Processing Based on the Machine Learning Method of the ACGAN
Sensors 2019, 19(14), 3145; https://doi.org/10.3390/s19143145
Received: 16 June 2019 / Revised: 12 July 2019 / Accepted: 16 July 2019 / Published: 17 July 2019
Abstract
To address the problem of unstable training and poor accuracy in image classification algorithms based on generative adversarial networks (GAN), a novel sensor network structure for classification processing using auxiliary classifier generative adversarial networks (ACGAN) is proposed in this paper. Firstly, the real/fake discrimination of sensor samples is removed from the output layer of the discriminative network, and only the posterior probability estimate of the sample label is output. Secondly, by regarding the real sensor samples as supervised data and the generated sensor samples as labeled fake data, we reconstruct the loss functions of the generator and discriminator using the real/fake attributes of sensor samples and the cross-entropy loss of the label. Thirdly, a pooling and caching method is introduced into the discriminator to enable more effective extraction of the classification features. Finally, feature matching is added to the discriminative network to ensure the diversity of the generated sensor samples. Experimental results show that the proposed algorithm (CP-ACGAN) achieves better classification accuracy on the MNIST, CIFAR10, and CIFAR100 datasets than other solutions. Moreover, when compared with the ACGAN and CNN classification algorithms, which have the same deep network structure as CP-ACGAN, the proposed method continues to achieve better classification accuracy and stability than other main existing sensor solutions.
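
One common way to realise "class posteriors only, with generated samples as labeled fake data" is a (K+1)-way discriminator in which class K marks fakes, so a single cross-entropy covers both the label and the real/fake attribute. The PyTorch sketch below follows that reading; the network size and the (K+1)-class interpretation are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

K = 10                                  # number of real classes (e.g., MNIST digits)
disc = nn.Sequential(                   # toy discriminator: K+1 logits, no real/fake head
    nn.Flatten(),
    nn.Linear(28 * 28, 256), nn.ReLU(),
    nn.Linear(256, K + 1),
)
ce = nn.CrossEntropyLoss()

def disc_loss(real_x, real_y, fake_x):
    """Single cross-entropy over real labels plus a designated 'fake' class."""
    fake_y = torch.full((fake_x.size(0),), K, dtype=torch.long)
    logits = disc(torch.cat([real_x, fake_x]))
    return ce(logits, torch.cat([real_y, fake_y]))

loss = disc_loss(torch.randn(8, 1, 28, 28),      # real batch (stand-in)
                 torch.randint(0, K, (8,)),      # real labels
                 torch.randn(8, 1, 28, 28))      # generated batch (stand-in)
```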

Open Access Article
A Cascade Ensemble Learning Model for Human Activity Recognition with Smartphones
Sensors 2019, 19(10), 2307; https://doi.org/10.3390/s19102307
Received: 21 April 2019 / Revised: 9 May 2019 / Accepted: 16 May 2019 / Published: 19 May 2019
Abstract
Human activity recognition (HAR) has gained much attention in recent years due to its high demand in different domains. In this paper, a novel HAR system based on a cascade ensemble learning (CELearning) model is proposed. Each layer of the proposed model is comprised of eXtreme Gradient Boosting (XGBoost), Random Forest, Extremely Randomized Trees (ExtraTrees), and Softmax Regression, and the model grows deeper layer by layer. The initial input vectors, sampled from smartphone accelerometer and gyroscope sensors, are trained separately by the four classifiers in the first layer, yielding probability vectors that represent the classes to which each sample may belong. The initial input data and the probability vectors are then concatenated and fed to the next layer’s classifiers, and the final prediction is obtained from the classifiers of the last layer. This system achieved satisfactory classification accuracy on two public smartphone accelerometer and gyroscope HAR datasets. The experimental results show that the proposed approach achieves better classification accuracy for HAR than existing state-of-the-art methods, and the training process of the model is simple and efficient.
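
The layer-by-layer construction can be sketched directly: each layer's four classifiers emit class-probability vectors that are concatenated with the original features and handed to the next layer. In this assumed sketch, scikit-learn's `GradientBoostingClassifier` stands in for XGBoost to keep dependencies light; the paper's exact configuration may differ.

```python
import numpy as np
from sklearn.ensemble import (ExtraTreesClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.linear_model import LogisticRegression

def cascade_layer(X, y, X_next):
    """Fit one layer's four classifiers; return augmented features for the next."""
    clfs = [GradientBoostingClassifier(),          # XGBoost stand-in
            RandomForestClassifier(),
            ExtraTreesClassifier(),
            LogisticRegression(max_iter=500)]      # softmax regression
    probas = [clf.fit(X, y).predict_proba(X_next) for clf in clfs]
    return np.hstack([X_next] + probas)            # [raw features | probabilities]

rng = np.random.default_rng(2)
X, y = rng.normal(size=(300, 12)), rng.integers(0, 6, size=300)
X_layer2 = cascade_layer(X, y, X)   # in practice, use held-out folds, not X itself
```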

Open Access Article
Contextual Action Cues from Camera Sensor for Multi-Stream Action Recognition
Sensors 2019, 19(6), 1382; https://doi.org/10.3390/s19061382
Received: 25 January 2019 / Revised: 15 March 2019 / Accepted: 15 March 2019 / Published: 20 March 2019
Abstract
In action recognition research, the two primary types of information are appearance and motion, learned from RGB images through visual sensors. However, depending on the action characteristics, contextual information, such as the existence of specific objects or globally shared information in the image, becomes vital to defining the action. For example, the existence of a ball is vital information distinguishing “kicking” from “running”. Furthermore, some actions share typical global abstract poses, which can be used as a key to classify actions. Based on these observations, we propose a multi-stream network model which incorporates spatial, temporal, and contextual cues in the image for action recognition. We evaluated the proposed method using C3D or inflated 3D ConvNet (I3D) as a backbone network on two different action recognition datasets. As a result, we observed an overall improvement in accuracy, demonstrating the effectiveness of the proposed method.
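
A simple way to combine the three streams is late fusion of per-stream softmax scores, as in the hedged sketch below; the paper's actual fusion scheme and weights are not given here, so equal averaging is purely an assumption.

```python
import torch

def fuse_streams(spatial_logits, temporal_logits, context_logits,
                 weights=(1.0, 1.0, 1.0)):
    """Weighted average of per-stream softmax scores -> fused class prediction."""
    streams = [spatial_logits, temporal_logits, context_logits]
    probs = [w * torch.softmax(s, dim=1) for w, s in zip(weights, streams)]
    return torch.stack(probs).sum(dim=0).argmax(dim=1)

# Example: a batch of 4 clips scored over 101 action classes by each stream.
pred = fuse_streams(torch.randn(4, 101), torch.randn(4, 101), torch.randn(4, 101))
```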

Open Access Article
Estimation of Pedestrian Pose Orientation Using Soft Target Training Based on Teacher–Student Framework
Sensors 2019, 19(5), 1147; https://doi.org/10.3390/s19051147
Received: 30 January 2019 / Revised: 26 February 2019 / Accepted: 2 March 2019 / Published: 6 March 2019
Abstract
Semi-supervised learning is known to achieve better generalisation than a model learned solely from labelled data. Therefore, we propose a new method for estimating pedestrian pose orientation using a soft-target method, which is a type of semi-supervised learning. Because convolutional neural network (CNN) based pose orientation estimation requires large numbers of parameters and operations, we apply the teacher–student algorithm to generate a compact student model whose accuracy approaches that of the teacher model, by combining a deep network with a random forest. After the teacher model is generated using hard-target data, the softened outputs (soft-target data) of the teacher model are used to train the student model. Moreover, because the orientation of a pedestrian exhibits specific shape patterns, a wavelet transform is applied to the input image as a pre-processing step, owing to its good spatial frequency localisation property and its ability to preserve both the spatial and gradient information of an image. As benchmark datasets reflecting real driving situations captured by a single camera, we used the TUD and KITTI datasets. We applied the proposed algorithm to various driving images in these datasets, and the results indicate that its classification performance with regard to pose orientation is better than that of other state-of-the-art methods based on a CNN. In addition, the computational speed of the proposed student model is faster than that of other deep CNNs, owing to its shorter model structure and smaller number of parameters.
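
The soft-target step can be illustrated with a standard distillation loss: the student matches the teacher's temperature-softened outputs alongside the hard labels. The temperature, mixing weight, and stand-in linear models below are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

T, alpha = 4.0, 0.7                     # softening temperature, soft/hard mix

teacher = nn.Linear(64, 8)              # stand-in for the large teacher model
student = nn.Linear(64, 8)              # compact student being trained

def distill_loss(x, hard_y):
    with torch.no_grad():               # teacher provides fixed soft targets
        soft_targets = F.softmax(teacher(x) / T, dim=1)
    s_logits = student(x)
    soft = F.kl_div(F.log_softmax(s_logits / T, dim=1), soft_targets,
                    reduction="batchmean") * (T * T)   # standard T^2 scaling
    hard = F.cross_entropy(s_logits, hard_y)
    return alpha * soft + (1 - alpha) * hard

loss = distill_loss(torch.randn(16, 64), torch.randint(0, 8, (16,)))
```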

Open Access Article
A Deep Convolutional Neural Network Inspired by Auditory Perception for Underwater Acoustic Target Recognition
Sensors 2019, 19(5), 1104; https://doi.org/10.3390/s19051104
Received: 31 January 2019 / Revised: 22 February 2019 / Accepted: 27 February 2019 / Published: 4 March 2019
Abstract
Underwater acoustic target recognition (UATR) using ship-radiated noise faces significant challenges due to the complex marine environment. In this paper, inspired by the neural mechanisms of auditory perception, a new end-to-end deep neural network, the auditory perception inspired Deep Convolutional Neural Network (ADCNN), is proposed for UATR. In the ADCNN model, inspired by the frequency-component perception neural mechanism, a bank of multi-scale deep convolution filters is designed to decompose the raw time-domain signal into signals with different frequency components. Inspired by the plasticity neural mechanism, the parameters of the deep convolution filters are initialized randomly and then learned and optimized for UATR. Max-pooling layers and fully connected layers then extract features from each decomposed signal. Finally, in fusion layers, the features from each decomposed signal are merged and deep feature representations are extracted to classify underwater acoustic targets. The ADCNN model simulates the deep acoustic information processing structure of the auditory system. Experimental results show that the proposed model can decompose, model, and classify ship-radiated noise signals efficiently. It achieves a classification accuracy of 81.96%, the highest among the comparison experiments. The experimental results show that this auditory perception inspired deep learning method has encouraging potential to improve the classification performance of UATR.
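
A bank of parallel 1-D convolutions with different kernel sizes is one way to realise the multi-scale decomposition of a raw waveform described above. The PyTorch sketch below is an assumed configuration (kernel sizes, channel counts, pooling), not the ADCNN's published architecture.

```python
import torch
import torch.nn as nn

class MultiScaleFrontEnd(nn.Module):
    def __init__(self, kernel_sizes=(16, 64, 256), channels=8):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv1d(1, channels, k, stride=k // 2, padding=k // 2),
                nn.ReLU(),
                nn.AdaptiveMaxPool1d(32),   # fixed-length summary per branch
            )
            for k in kernel_sizes
        )

    def forward(self, waveform):            # waveform: (batch, 1, samples)
        feats = [b(waveform) for b in self.branches]
        return torch.cat([f.flatten(1) for f in feats], dim=1)

features = MultiScaleFrontEnd()(torch.randn(4, 1, 16000))  # 1 s at 16 kHz
```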

Open Access Article
Vision Sensor Based Fuzzy System for Intelligent Vehicles
Sensors 2019, 19(4), 855; https://doi.org/10.3390/s19040855
Received: 14 January 2019 / Revised: 14 February 2019 / Accepted: 16 February 2019 / Published: 19 February 2019
Abstract
The automotive industry and many researchers have become interested in the development of pedestrian protection systems in recent years. In particular, vision-based methods for predicting pedestrian intentions are now being actively studied to improve the performance of pedestrian protection systems. In this paper, we propose a vision-based system that detects pedestrians using an on-dash camera in the car and then analyzes their movements to determine the probability of collision. Information about pedestrians, including position, distance, and movement direction and magnitude, is extracted using computer vision technologies, and, using this information, a fuzzy rule-based system judges the pedestrian’s risk level. To verify the function of the proposed system, we built several test datasets, collected ourselves in high-density regions where vehicles and pedestrians mix closely. The true positive rate of the experimental results was about 86%, which shows the validity of the proposed system.
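
To give a flavour of a fuzzy rule-based risk judgement, the sketch below grades collision risk from two assumed inputs, pedestrian distance and approach speed, with simple ramp memberships and max-min rules; the paper's actual variables, membership functions, and rule base are richer.

```python
import numpy as np

def ramp_down(x, lo, hi):
    """Membership 1 below lo, falling linearly to 0 at hi."""
    return float(np.clip((hi - x) / (hi - lo), 0.0, 1.0))

def ramp_up(x, lo, hi):
    """Membership 0 below lo, rising linearly to 1 at hi."""
    return float(np.clip((x - lo) / (hi - lo), 0.0, 1.0))

def risk_level(distance_m, approach_speed):
    near = ramp_down(distance_m, 3.0, 12.0)    # pedestrian close to vehicle path
    fast = ramp_up(approach_speed, 0.3, 1.5)   # moving quickly toward the path
    high = min(near, fast)                     # rule: near AND fast -> high risk
    medium = max(near, fast)                   # rule: near OR fast -> medium risk
    return {"high": high, "medium": medium, "low": 1.0 - medium}

print(risk_level(distance_m=6.0, approach_speed=1.8))
```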
