
Special Issue "Networked Sensing for Autonomous Cyber-Physical Systems: Theory and Applications"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: closed (1 May 2020).

Special Issue Editors

Dr. Liang Hu
Guest Editor
School of Computer Science and Electronic Engineering, University of Essex, Colchester CO4 3SQ, UK
Interests: Bayesian estimation; nonlinear filtering; motion planning; autonomous vehicles; cyber-physical systems
Prof. Dr. Hui Yu
Guest Editor
School of Creative Technologies, University of Portsmouth, Eldon Building, Portsmouth PO1 2DJ, UK
Interests: artificial intelligence; sensing; machine perception; computational intelligence
Prof. Seán McLoone
Guest Editor
School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, Ashby Building, Belfast BT9 5AH, UK
Interests: manufacturing informatics; soft sensing; predictive maintenance; process monitoring; blind sensor characterisation; computational intelligence techniques
Prof. Liming Chen
Guest Editor
School of Computer Science and Informatics, De Montfort University, The Gateway, Leicester LE1 9BH, UK
Interests: artificial intelligence; semantic and knowledge technologies; data/knowledge engineering and management; pervasive computing; intelligent environments; ambient assisted living; smart homes

Special Issue Information

Dear Colleagues,

Recent advances in sensors, wireless communication technologies and AI techniques have led to the new engineering paradigm of autonomous cyber–physical systems (auto-CPS), with diverse real-world applications such as autonomous driving, assisted living, intelligent manufacturing and smart grids. In auto-CPS, sensor networks provide all sensing information about the physical world to the cyber counterpart, which is key to achieving autonomous decision-making. However, the huge volume of data, the high degree of uncertainty in sensor data caused by networked environments, and the heterogeneity of sensors pose significant challenges for future auto-CPS, and necessitate the development of intelligent sensing techniques. To cope with these challenges, new sensing techniques have recently been proposed by researchers from different research areas, including artificial intelligence, machine learning, signal processing and data mining, and some of these methods have been successfully tested or prototyped in different kinds of auto-CPS.

This Special Issue invites contributions that address: (i) networked sensing technologies and issues; and (ii) techniques of relevance to tackle the challenges above. In particular, submitted papers should clearly show novel contributions and innovative applications covering, but not limited to, any of the following topics on networked sensing in auto-CPS:

  • Big data in sensor networks
  • Data mining and/or analysis in social media/networks
  • Data security and privacy in autonomous CPS
  • Distributed signal processing in networked environments
  • Data fusion with multiple/heterogeneous sensors
  • Data-driven condition monitoring
  • Machine learning for networked sensing in autonomous systems
  • Situation awareness for autonomous systems
  • Sensor technology integration into cyber–physical systems
  • Soft sensing applications for cyber–physical systems

While this Special Issue is an open call, it also invites selected papers from the 5th IEEE Smart World Congress 2019 (SWC2019).

Dr. Liang Hu
Prof. Hui Yu
Prof. Seán McLoone
Prof. Liming Chen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (10 papers)

Open Access Article
Real-Time Facial Affective Computing on Mobile Devices
Sensors 2020, 20(3), 870; https://doi.org/10.3390/s20030870 - 06 Feb 2020
Abstract
Convolutional Neural Networks (CNNs) have become one of the state-of-the-art methods for various computer vision and pattern recognition tasks including facial affective computing. Although impressive results have been obtained in facial affective computing using CNNs, the computational complexity of CNNs has also increased significantly. This means high performance hardware is typically indispensable. Most existing CNNs are thus not generalizable enough for mobile devices, where the storage, memory and computational power are limited. In this paper, we focus on the design and implementation of CNNs on mobile devices for real-time facial affective computing tasks. We propose a light-weight CNN architecture which well balances the performance and computational complexity. The experimental results show that the proposed architecture achieves high performance while retaining the low computational complexity compared with state-of-the-art methods. We demonstrate the feasibility of a CNN architecture in terms of speed, memory and storage consumption for mobile devices by implementing a real-time facial affective computing application on an actual mobile device. Full article
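The paper's specific architecture is not reproduced here, but the usual route to such parameter savings in lightweight CNNs is the depthwise-separable convolution (popularised by MobileNet). A back-of-the-envelope comparison, using hypothetical layer sizes rather than the authors' actual configuration:

```python
def conv_params(c_in, c_out, k):
    # standard convolution: one k x k kernel per (input, output) channel pair
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # one k x k depthwise kernel per input channel,
    # followed by a 1x1 pointwise convolution that mixes channels
    return c_in * k * k + c_in * c_out

# hypothetical layer: 64 input channels, 128 output channels, 3x3 kernels
standard = conv_params(64, 128, 3)                   # 73728 weights
separable = depthwise_separable_params(64, 128, 3)   # 8768 weights
```

Here the separable layer needs roughly 8x fewer weights, which is the kind of trade-off that makes real-time inference on a mobile device feasible.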

Open Access Article
Employing Shadows for Multi-Person Tracking Based on a Single RGB-D Camera
Sensors 2020, 20(4), 1056; https://doi.org/10.3390/s20041056 - 15 Feb 2020
Abstract
Although there are many algorithms to track people that are walking, existing methods mostly fail to cope with occluded bodies in the setting of multi-person tracking with one camera. In this paper, we propose a method to use people’s shadows as a clue to track them instead of treating shadows as mere noise. We introduce a novel method to track multiple people by fusing shadow data from the RGB image with skeleton data, both of which are captured by a single RGB Depth (RGB-D) camera. Skeletal tracking provides the positions of people that can be captured directly, while their shadows are used to track them when they are no longer visible. Our experiments confirm that this method can efficiently handle full occlusions. It thus has substantial value in resolving the occlusion problem in multi-person tracking, even with other kinds of cameras. Full article

Open Access Article
Estimating Urban Road GPS Environment Friendliness with Bus Trajectories: A City-Scale Approach
Sensors 2020, 20(6), 1580; https://doi.org/10.3390/s20061580 - 12 Mar 2020
Abstract
GPS is the most prevalent positioning system in practice. In urban areas, however, GPS satellite signals can be blocked by buildings, so GPS positioning is inaccurate due to multi-path errors. Estimating the negative impact of urban environments on GPS accuracy, termed GPS environment friendliness (GEF) in this paper, helps to predict GPS errors on different road segments. It enhances user experiences of location-based services and helps to determine where to deploy auxiliary positioning devices. In this paper, we propose a method of processing and analysing massive historical bus GPS trajectory data to estimate urban road GEF, integrated with the contextual information of roads. First, our approach takes full advantage of the fact that bus routes are fixed to improve the performance of map matching. In order to estimate the GEF of all roads fairly and reasonably, the method estimates the GPS positioning error of each bus on the roads that are not covered by its route, taking POI information, road tag information, and building layout information into account. Finally, we utilize a weighted estimation strategy to calculate the GEF of each road based on the GPS positioning performance of all buses. Based on one month of GPS trajectory data from 4835 buses within the second ring road of Chengdu, China, we estimate the GEF of 8831 different road segments and verify the rationality of the results with satellite maps, street views, and field tests.
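The weighted estimation strategy is not spelled out in the abstract; a minimal sketch of the general idea (the function name and the choice of pass counts as weights are illustrative assumptions, not the authors' exact scheme):

```python
def road_gef(bus_errors, bus_weights):
    # weighted mean of per-bus GPS error estimates for one road segment;
    # a bus observed more often on the segment gets a larger weight
    total = sum(bus_weights)
    return sum(e * w for e, w in zip(bus_errors, bus_weights)) / total

# three buses report mean positioning errors (metres), weighted by pass counts
gef = road_gef([5.0, 10.0, 20.0], [3, 1, 1])  # -> 9.0
```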

Open Access Article
A High-Resolution Open Source Platform for Building Envelope Thermal Performance Assessment Using a Wireless Sensor Network
Sensors 2020, 20(6), 1755; https://doi.org/10.3390/s20061755 - 21 Mar 2020
Abstract
This paper presents an in-situ wireless sensor network (WSN) for building envelope thermal transmission analysis. The WSN is able to track heat flows in various weather conditions in real-time. The developed system focuses on long-term in-situ building material variation analysis, which cannot be readily achieved using current approaches, especially when the number of measurement hotspots is large. This paper describes the implementation of the proposed system using the heat flow method enabled through an adaptable and low-cost wireless network, validated via a laboratory experiment. Full article
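The heat flow method referenced above conventionally estimates the envelope's thermal transmittance (U-value) as the ratio of accumulated heat flux to accumulated indoor–outdoor temperature difference, as in the ISO 9869 "average method"; a sketch under that assumption, with hypothetical readings:

```python
def u_value(heat_flux, t_in, t_out):
    # average method: U = sum(q_i) / sum(T_in_i - T_out_i)  [W/(m^2*K)]
    return sum(heat_flux) / sum(ti - to for ti, to in zip(t_in, t_out))

# two hypothetical readings: 10 W/m^2 flux across a 10 K temperature difference
u = u_value([10.0, 10.0], [20.0, 20.0], [10.0, 10.0])  # -> 1.0 W/(m^2*K)
```

In a long-term in-situ deployment such as this WSN, the sums run over days or weeks of readings so that thermal mass effects average out.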

Open Access Article
BioMove: Biometric User Identification from Human Kinesiological Movements for Virtual Reality Systems
Sensors 2020, 20(10), 2944; https://doi.org/10.3390/s20102944 - 22 May 2020
Abstract
Virtual reality (VR) has advanced rapidly and is used for many entertainment and business purposes. The need for secure, transparent and non-intrusive identification mechanisms is important to facilitate users’ safe participation and secure experience. People are kinesiologically unique, having individual behavioral and movement characteristics, which can be leveraged and used in security sensitive VR applications to compensate for users’ inability to detect potential observational attackers in the physical world. Additionally, such method of identification using a user’s kinesiological data is valuable in common scenarios where multiple users simultaneously participate in a VR environment. In this paper, we present a user study (n = 15) where our participants performed a series of controlled tasks that require physical movements (such as grabbing, rotating and dropping) that could be decomposed into unique kinesiological patterns while we monitored and captured their hand, head and eye gaze data within the VR environment. We present an analysis of the data and show that these data can be used as a biometric discriminant of high confidence using machine learning classification methods such as kNN or SVM, thereby adding a layer of security in terms of identification or dynamically adapting the VR environment to the users’ preferences. We also performed a whitebox penetration testing with 12 attackers, some of whom were physically similar to the participants. We could obtain an average identification confidence value of 0.98 from the actual participants’ test data after the initial study and also a trained model classification accuracy of 98.6%. Penetration testing indicated all attackers resulted in confidence values of less than 50% (<50%), although physically similar attackers had higher confidence values. These findings can help the design and development of secure VR systems. Full article
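As a sketch of how kNN can yield both an identity and a confidence value of the kind reported above (the toy feature vectors, user names, and vote-share confidence are illustrative assumptions, not the authors' pipeline):

```python
from collections import Counter

def knn_identify(train, query, k=3):
    # train: list of (feature_vector, user_id); query: a feature vector.
    # Predicts the user whose enrolled samples lie closest to the query,
    # with the neighbours' vote share as a crude confidence value.
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda s: dist(s[0], query))[:k]
    votes = Counter(uid for _, uid in nearest)
    uid, count = votes.most_common(1)[0]
    return uid, count / k

# toy enrolment data: 2-D movement features for two users
train = [((0.0, 0.0), "alice"), ((0.0, 1.0), "alice"),
         ((5.0, 5.0), "bob"), ((5.0, 6.0), "bob")]
user, conf = knn_identify(train, (0.0, 0.5))  # -> ("alice", ~0.67)
```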
Open Access Article
Learning Mobile Manipulation through Deep Reinforcement Learning
Sensors 2020, 20(3), 939; https://doi.org/10.3390/s20030939 - 10 Feb 2020
Abstract
Mobile manipulation has a broad range of applications in robotics. However, it is usually more challenging than fixed-base manipulation due to the complex coordination of a mobile base and a manipulator. Although recent works have demonstrated that deep reinforcement learning is a powerful technique for fixed-base manipulation tasks, most of them are not applicable to mobile manipulation. This paper investigates how to leverage deep reinforcement learning to tackle whole-body mobile manipulation tasks in unstructured environments using only on-board sensors. A novel mobile manipulation system which integrates the state-of-the-art deep reinforcement learning algorithms with visual perception is proposed. It has an efficient framework decoupling visual perception from the deep reinforcement learning control, which enables its generalization from simulation training to real-world testing. Extensive simulation and experiment results show that the proposed mobile manipulation system is able to grasp different types of objects autonomously in various simulation and real-world scenarios, verifying the effectiveness of the proposed mobile manipulation system. Full article

Open Access Article
Designing a Streaming Algorithm for Outlier Detection in Data Mining—An Incremental Approach
Sensors 2020, 20(5), 1261; https://doi.org/10.3390/s20051261 - 26 Feb 2020
Abstract
Designing an algorithm for detecting outliers over streaming data has become an important task in many common applications, arising in areas such as fraud detection, network analysis, environment monitoring and so forth. Because real-time data may arrive in the form of streams rather than batches, properties such as concept drift, temporal context, transiency, and uncertainty need to be considered. In addition, data processing needs to be incremental, scalable, and able to work with limited memory. These facts create big challenges for existing outlier detection algorithms in terms of their accuracy when they are implemented in an incremental fashion, especially in the streaming environment. To address these problems, we first propose C_KDE_WR, which uses a sliding window and kernel function to process the streaming data online; implemented in a CUDA framework on a Graphics Processing Unit (GPU), it demonstrates high throughput on real-time streaming data. We also present another algorithm, C_LOF, based on a very popular and effective outlier detection algorithm called Local Outlier Factor (LOF), which unfortunately works only on batched data. Using a novel incremental approach that compensates for the high complexity of LOF, we show how to implement it in a streaming context and obtain results in a timely manner. Like C_KDE_WR, C_LOF also employs a sliding window and statistical summaries to help make decisions based on the data in the current window, and it addresses the same streaming-data challenges as C_KDE_WR. In addition, we report a comparative evaluation of the accuracy of C_KDE_WR against the state-of-the-art SOD_GPU using Precision, Recall and F-score metrics, and a t-test is performed to demonstrate the significance of the improvement.
We further report the testing results of C_LOF under different parameter settings and draw ROC and PR curves, with their area under the curve (AUC) and average precision (AP) values calculated respectively. Experimental results show that C_LOF can overcome the masquerading problem, which often exists in outlier detection on streaming data. We provide complexity analysis and report experimental results on the accuracy of both the C_KDE_WR and C_LOF algorithms in order to evaluate their effectiveness as well as their efficiency.
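The abstract does not give C_KDE_WR's details; the core idea of sliding-window kernel density estimation for outlier flagging can be sketched as follows (the window size, bandwidth and threshold are illustrative defaults; the actual algorithm runs on a GPU and additionally handles concept drift):

```python
import math
from collections import deque

def kde_outlier_flags(stream, window=50, bandwidth=1.0, threshold=0.01):
    # Gaussian kernel density estimate over a sliding window;
    # a point that falls in a low-density region is flagged as an outlier
    win = deque(maxlen=window)
    flags = []
    for x in stream:
        if win:
            density = sum(math.exp(-0.5 * ((x - xi) / bandwidth) ** 2)
                          for xi in win) / (len(win) * bandwidth * math.sqrt(2 * math.pi))
            flags.append(density < threshold)
        else:
            flags.append(False)  # nothing to compare against yet
        win.append(x)
    return flags

# a stable stream followed by one extreme spike
flags = kde_outlier_flags([0.0] * 30 + [100.0])
```

Only the final spike is flagged: it lies far from every point retained in the window, so its estimated density is near zero.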

Open Access Article
Towards a Smart Smoking Cessation App: A 1D-CNN Model Predicting Smoking Events
Sensors 2020, 20(4), 1099; https://doi.org/10.3390/s20041099 - 17 Feb 2020
Abstract
Nicotine consumption is considered a major health problem, and many of those who wish to quit smoking relapse. Over time, smoking changes from a behaviour into a habit, connected to internal (e.g., nicotine level, craving) and external (action, time, location) triggers. Smoking cessation apps have proved their efficiency in supporting smokers who wish to quit. However, these applications still suffer from several drawbacks, as they rely heavily on the user to initiate the intervention by submitting the factor that causes the urge to smoke. This research describes the creation of a combined Control Theory and deep learning model that can learn a smoker's daily routine and predict smoking events. The model's structure combines a Control Theory model of smoking with a 1D-CNN classifier to adapt to individual differences between smokers and predict smoking events based on motion and geolocation values collected using a mobile device. Data were collected from 5 participants in the UK, then analysed and tested on 3 different machine learning models (SVM, decision tree, and 1D-CNN); the 1D-CNN proved the most efficient of the three methods, with an average overall accuracy of 86.6%. The average MSE of forecasting the nicotine level was 0.04 on weekdays and 0.03 at weekends. The model proved its ability to predict smoking events accurately when the participant is well engaged with the app.
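The building block of the 1D-CNN named above is one-dimensional convolution over a time series; a minimal sketch (the kernel values are illustrative, not learned weights from the paper):

```python
def conv1d(signal, kernel):
    # valid-mode 1-D convolution (really cross-correlation, as in CNN layers):
    # slide the kernel over the signal and take dot products
    n = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(n))
            for i in range(len(signal) - n + 1)]

# a [1, 0, -1] kernel responds to local changes in a motion signal
out = conv1d([1.0, 2.0, 3.0, 4.0], [1.0, 0.0, -1.0])  # -> [-2.0, -2.0]
```

A trained 1D-CNN stacks many such filters, learning kernels that respond to the motion and geolocation patterns preceding a smoking event.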

Open Access Article
Autonomous Dam Surveillance Robot System Based on Multi-Sensor Fusion
Sensors 2020, 20(4), 1097; https://doi.org/10.3390/s20041097 - 17 Feb 2020
Abstract
Dams are important engineering facilities in the water conservancy industry. They have many functions, such as flood control, electric power generation, irrigation, water supply, shipping, etc. Therefore, their long-term safety is crucial to operational stability. Because of the complexity of the dam environment, robots with various kinds of sensors are a good choice to replace humans in performing surveillance jobs. In this paper, an autonomous system design is proposed for dam ground surveillance robots, which includes the general solution, electromechanical layout, sensor scheme, and navigation method. A strong and agile skid-steered mobile robot body platform is designed and built, which can be controlled accurately based on an MCU and an onboard IMU. A novel low-cost LiDAR is adopted for odometry estimation. To obtain more robust localization results, two Kalman filter loops are used with the robot kinematic model to fuse wheel encoder, IMU, LiDAR odometry, and low-cost GNSS receiver data. Besides, a recognition network based on YOLO v3 is deployed to realize real-time recognition of cracks and people during surveillance. As a system, by connecting the robot, the cloud server and the users with IoT technology, the proposed solution becomes more robust and practical.
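The abstract does not detail the filter equations; the scalar form of the Kalman measurement update at the heart of such sensor-fusion loops can be sketched as follows (one state dimension with illustrative numbers, not the robot's actual multi-dimensional filter):

```python
def kalman_update(x, p, z, r):
    # fuse a state estimate (mean x, variance p) with a measurement
    # (value z, noise variance r); the gain weights whichever is more certain
    k = p / (p + r)            # Kalman gain
    x_new = x + k * (z - x)    # updated estimate
    p_new = (1.0 - k) * p      # variance always shrinks after fusing
    return x_new, p_new

# odometry says position 10 m (variance 4); GNSS measures 12 m (variance 4)
x, p = kalman_update(10.0, 4.0, 12.0, 4.0)  # -> (11.0, 2.0)
```

With equal variances the fused estimate lands halfway between the two sources, and its uncertainty halves; a noisier GNSS fix (larger r) would pull the estimate less.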

Open Access Article
Survey of Procedural Methods for Two-Dimensional Texture Generation
Sensors 2020, 20(4), 1135; https://doi.org/10.3390/s20041135 - 19 Feb 2020
Abstract
Textures are the most important element for simulating real-world scenes and providing realistic and immersive sensations in many applications. Procedural textures can simulate a broad variety of surface textures, which is helpful for the design and development of new sensors. Procedural texture generation is the process of creating textures using mathematical models. The input to these models can be a set of parameters, random values generated by noise functions, or existing texture images, which may be further processed or combined to generate new textures. Many methods for procedural texture generation have been proposed, but there has been no comprehensive survey or comparison of them yet. In this paper, we present a review of different procedural texture generation methods, according to the characteristics of the generated textures. We divide the generation methods into two categories: structured and unstructured texture generation methods. Example textures are generated using these methods with varying parameter values. Furthermore, we survey post-processing methods based on the filtering and combination of different generation models. We also present a taxonomy of different models, according to the mathematical functions and texture samples they can produce. Finally, a psychophysical experiment is designed to identify the perceptual features of the example textures, and an analysis of the results illustrates the strengths and weaknesses of these methods.
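As a toy instance of the noise-driven generators surveyed here, one-dimensional value noise assigns a deterministic pseudo-random value to each integer lattice point and smoothly interpolates between them (a sketch only; practical texture generators use 2-D or 3-D lattices and sum several octaves):

```python
import math
import random

def value_noise_1d(x, seed=0):
    # deterministic pseudo-random value at each integer lattice point
    def lattice(i):
        return random.Random(i * 1000003 + seed).random()
    i0 = math.floor(x)
    t = x - i0
    t = t * t * (3.0 - 2.0 * t)  # smoothstep easing for a smooth transition
    return lattice(i0) * (1.0 - t) + lattice(i0 + 1) * t

# sample a smooth 1-D "texture" profile
samples = [value_noise_1d(i / 8.0) for i in range(64)]
```

Because the lattice values are seeded deterministically, the same parameters always reproduce the same texture, which is the defining property of procedural generation.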
