Table of Contents

Sensors, Volume 19, Issue 22 (November-2 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and open them with the free Adobe Reader.
Cover Story: We studied the photophysical behavior of a new luminogen. We confirmed the aggregation-induced [...]
Open Access Article
A Model-Checking-Based Framework for Analyzing Ambient Assisted Living Solutions
Sensors 2019, 19(22), 5057; https://doi.org/10.3390/s19225057 - 19 Nov 2019
Abstract
Since modern ambient assisted living solutions integrate a multitude of assisted-living functionalities, some of which are safety critical, it is desirable that these systems are analyzed at the design stage to detect possible errors. To achieve this, one needs suitable architectures that support the seamless design of the integrated assisted-living functions, as well as capabilities for the formal modeling and analysis of the architecture. In this paper, we address this need by proposing a generic integrated ambient assisted living system architecture, consisting of sensors, data collection, local and cloud processing schemes, and an intelligent decision support system, which can be easily extended to suit specific architecture categories. Our solution is customizable; therefore, we show three instantiations of the generic model, as simple, intermediate, and complex configurations, respectively, and show how to analyze the first and third categories by model checking. Our approach starts by specifying the architecture using an architecture description language, in our case the Architecture Analysis and Design Language, which can also account for the probabilistic behavior of such systems and captures the possibility of component failure. To enable formal analysis, we describe the semantics of the simple and complex architectures within the framework of timed automata. We show that the simple architecture is amenable to exhaustive model checking by employing the UPPAAL tool, whereas for the complex architecture we resort to statistical model checking for scalability reasons. In this case, we apply the statistical extension of UPPAAL, namely UPPAAL SMC. Our work paves the way for the development of formally assured future ambient assisted living solutions.
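The statistical model checking step mentioned in the abstract can be illustrated with a toy Monte Carlo sketch: simulate the system many times and estimate the probability that a property holds. All concrete numbers below (failure probability, latency distributions, deadline) are invented for illustration; the paper itself analyzes timed-automata models with UPPAAL SMC, not this Python code.

```python
import random

def simulate_run(rng, p_sensor_fail=0.01, deadline_ms=500.0):
    """One random run of a hypothetical sense -> local -> cloud pipeline."""
    if rng.random() < p_sensor_fail:  # component failure aborts the run
        return False
    # Hypothetical latencies of the two processing stages (ms).
    latency = rng.gauss(200, 50) + rng.gauss(150, 40)
    return latency <= deadline_ms

def smc_estimate(n_runs=20000, seed=1):
    """Estimate P(run meets its deadline) by sampling, as SMC tools do."""
    rng = random.Random(seed)
    hits = sum(simulate_run(rng) for _ in range(n_runs))
    return hits / n_runs

p_ok = smc_estimate()
```

The appeal of statistical over exhaustive model checking is exactly what the abstract notes: the cost scales with the number of simulations, not with the (possibly enormous) state space of the complex architecture.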
(This article belongs to the Special Issue IoT Sensors in E-Health)
Open Access Article
Evaluating Probabilistic Traffic Load Effects on Large Bridges Using Long-Term Traffic Monitoring Data
Sensors 2019, 19(22), 5056; https://doi.org/10.3390/s19225056 - 19 Nov 2019
Abstract
With the steady growth of the global transportation market, traffic load has increased dramatically over the past decades, which may develop into a risk source for existing bridges. The simultaneous presence of heavy trucks, which are random in nature, governs the serviceability limit for large bridges. This study investigated probabilistic traffic load effects on large bridges under actual heavy traffic load. Initially, critical stochastic traffic loading scenarios were simulated based on millions of traffic monitoring records from a highway bridge in China. A methodology for extrapolating maximum traffic load effects was presented based on the level-crossing theory. The effectiveness of the proposed method was demonstrated by a probabilistic deflection investigation of a suspension bridge. The influence of traffic density variation and overloading control on the maximum deflection was investigated to provide recommendations for designers and managers. The numerical results show that congested traffic mostly governs the critical traffic load effects on large bridges. Traffic growth results in higher maximum deformations and probabilities of failure of the bridge in its lifetime. Since the critical loading scenario contains multiple types of overloaded trucks, an effective overloading control measure has a remarkable influence on the lifetime maximum deflection. The stochastic traffic model and corresponding computational framework are expected to be extended to more types of bridges.
Open Access Article
A Flexible Portable Glucose Sensor Based on Hierarchical Arrays of Au@Cu(OH)2 Nanograss
Sensors 2019, 19(22), 5055; https://doi.org/10.3390/s19225055 - 19 Nov 2019
Abstract
Flexible physiological medical devices have gradually spread into people's lives, especially those of the elderly. Here, a flexible integrated sensor based on Au nanoparticle modified copper hydroxide nanograss arrays on flexible carbon fiber cloth (Au@Cu(OH)2/CFC) is fabricated by a facile electrochemical method. The sensor possesses an ultrahigh sensitivity of 7.35 mA mM−1 cm−2 in the linear concentration range of 0.10 to 3.30 mM and an ultralow detection limit down to 26.97 nM. These excellent sensing properties can be ascribed to the collective effect of the superior electrochemical catalytic activity of the nanograss arrays, whose electrochemically active surface area and mass transfer ability are dramatically enhanced when modified with Au, and the intimate contact between the active material (Au@Cu(OH)2) and the current collector (CFC), which concurrently supplies good conductivity for electron/ion transport during glucose biosensing. Furthermore, the device also exhibits excellent anti-interference and stability for glucose detection. Owing to these distinguished performances, the novel sensor shows extreme reliability for practical glucose testing in human serum and juice samples. Significantly, these unique properties and the soft structure of the silk fabric can provide a promising structural design for flexible micro-devices and a great potential material candidate for electrochemical glucose sensors.
(This article belongs to the Special Issue Advances in Materials and Devices for Wearable Chemical Sensing)
Open Access Article
Maximum Power Point Tracking of Photovoltaic System Based on Reinforcement Learning
Sensors 2019, 19(22), 5054; https://doi.org/10.3390/s19225054 - 19 Nov 2019
Abstract
The maximum power point tracking (MPPT) technique is often used in photovoltaic (PV) systems to extract the maximum power under various environmental conditions. The perturbation and observation (P&O) method is one of the most well-known MPPT methods; however, it may suffer from large oscillations around the maximum power point (MPP) or low tracking efficiency. In this paper, two reinforcement learning-based maximum power point tracking (RL MPPT) methods are proposed using the Q-learning algorithm: one constructs a Q-table and the other adopts a Q-network. These two proposed methods do not require information about the actual PV module in advance and can track the MPP through offline training in two phases, the learning phase and the tracking phase. The experimental results show that both the reinforcement learning-based Q-table maximum power point tracking (RL-QT MPPT) and the reinforcement learning-based Q-network maximum power point tracking (RL-QN MPPT) methods exhibit smaller ripples and faster tracking speeds than the P&O method. In addition, between the two proposed methods, the RL-QT MPPT method performs with smaller oscillation and the RL-QN MPPT method achieves higher average power.
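The Q-table variant can be sketched as a standard Q-learning update. For illustration, assume states quantize the PV operating voltage and actions perturb it by a step; this encoding and the hyperparameters are assumptions, not the paper's exact design.

```python
import random

class QTableMPPT:
    """Minimal Q-table agent: state = voltage bin, action = perturbation step."""

    def __init__(self, n_states=10, actions=(-1, 0, 1),
                 alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.rng = random.Random(seed)
        self.q = {(s, a): 0.0 for s in range(n_states) for a in actions}

    def choose(self, state):
        if self.rng.random() < self.epsilon:          # explore
            return self.rng.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])  # exploit

    def update(self, s, a, reward, s_next):
        """Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_b Q(s',b) - Q(s,a))."""
        best_next = max(self.q[(s_next, b)] for b in self.actions)
        self.q[(s, a)] += self.alpha * (reward + self.gamma * best_next - self.q[(s, a)])
```

In an MPPT loop the reward would typically be the measured change in output power after applying the perturbation, so actions that climb toward the MPP accumulate higher Q-values.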
Open Access Article
Experimental Validation of Slip-Forming Using Ultrasonic Sensors
Sensors 2019, 19(22), 5053; https://doi.org/10.3390/s19225053 - 19 Nov 2019
Abstract
Slip-forming in concrete construction enables the continuous placement of concrete using a climbing form, the efficiency of which depends on appropriate slip-up timing. This implies the importance of accurately knowing the development of concrete strength over time, which to date has been assessed manually on construction sites. This paper presents a method for automating the slip-forming process by determining the optimal slip-up time using the in-situ strength of concrete. The strength of concrete is evaluated by a formula relating the strength to the surface wave velocity measured with ultrasonic sensors. Specifically, this study validates the applicability of the slip-form system with ultrasonic sensors for continuously monitoring the hardening of concrete through its application at several construction sites. To this end, a slip-form system with a pair of ultrasonic modules at the bottom of the panel was tested, and the time variation of the surface wave velocity in the concrete was monitored during the slip-forming process. The results show that the proposed method can provide the optimal slip-up time of the form to automate the slip-forming process. This approach is expected to apply to other construction technologies that require the continuous monitoring of concrete strength for construction efficiency as well as quality maintenance.
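The decision logic can be sketched as a velocity-to-strength calibration followed by a threshold check. The power-law form and its coefficients below are placeholders for illustration only; the paper's actual fitted relation between surface wave velocity and strength is not reproduced here.

```python
def strength_from_velocity(v_mps, a=1.2e-6, b=2.0):
    """Estimate early-age concrete strength [MPa] from surface wave
    velocity [m/s] via a hypothetical power-law calibration f_c = a * v**b.
    The coefficients a and b are placeholders, not the paper's values."""
    return a * v_mps ** b

def slip_up_allowed(v_mps, required_mpa=0.5):
    """Trigger the next slip-up once the estimated in-situ strength
    reaches the (assumed) required threshold."""
    return strength_from_velocity(v_mps) >= required_mpa
```

In the monitored system, this check would run continuously on the velocity stream from the ultrasonic modules mounted at the bottom of the form panel.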
(This article belongs to the Section Physical Sensors)
Open Access Article
A Novel Sub-Bottom Profiler and Signal Processor
Sensors 2019, 19(22), 5052; https://doi.org/10.3390/s19225052 - 19 Nov 2019
Abstract
In this paper, we introduce a novel sub-bottom profiler, making good use of the Mills cross configuration of multibeam sonar and the synthetic aperture technique of synthetic aperture sonar systems. The receiver array is mounted along the ship keel, while the transmitter array is mounted perpendicular to the receiver array. With the synthetic aperture technique, the along-track resolution can be greatly improved. The system often suffers from motion error, which severely degrades the imaging performance. To solve this problem, an imaging algorithm with motion compensation (MC) is proposed. With the presented method, the motion error is first estimated based on the overlapped elements between successive pulses. Then, the echo data are processed using the range migration algorithm based on the phase center approximation (PCA) method, which simultaneously performs the MC with the estimated motion error. In order to validate the proposed sub-bottom profiler and data processing method, simulations and lake trial results are discussed. The processing results of the real data further indicate that the presented configuration has great potential for finding buried objects in seabed sediments.
(This article belongs to the Special Issue Ultrasonic Sensors 2019–2020)
Open Access Article
Exploring Inter-Instance Relationships within the Query Set for Robust Image Set Matching
Sensors 2019, 19(22), 5051; https://doi.org/10.3390/s19225051 - 19 Nov 2019
Abstract
Image set matching (ISM) has attracted increasing attention in the field of computer vision and pattern recognition. Some studies attempt to model query and gallery sets under a joint or collaborative representation framework, achieving impressive performance. However, existing models consider only the competition and collaboration among gallery sets, neglecting the inter-instance relationships within the query set, which are also regarded as one important clue for ISM. In this paper, inter-instance relationships within the query set are explored for robust image set matching. Specifically, we propose to represent the query set instances jointly via a combined dictionary learned from the gallery sets. To explore the commonality and variations within the query set simultaneously to benefit the matching, both low-rank and class-level sparsity constraints are imposed on the representation coefficients. Then, to deal with nonlinear data in real scenarios, a kernelized version is also proposed. Moreover, to tackle the gross corruptions mixed in the query set, the proposed model is extended for robust ISM. The optimization problems are solved efficiently by employing singular value thresholding and block soft thresholding operators in an alternating direction manner. Experiments on five public datasets demonstrate the effectiveness of the proposed method, which compares favorably with state-of-the-art methods.
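The two proximal operators named in the abstract are standard and can be sketched directly: singular value thresholding promotes low rank, and block (row-wise) soft thresholding promotes group sparsity. How the paper arranges them inside its alternating-direction scheme is not reproduced here.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: shrink singular values by tau,
    clipping at zero. Proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def block_soft_threshold(X, tau):
    """Row-wise (block) soft thresholding: scale each row toward zero,
    zeroing rows whose norm is below tau. Proximal operator of the
    sum of row l2 norms."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return X * scale
```

In an ADMM-style loop, each iteration would apply `svt` to the low-rank variable and `block_soft_threshold` to the group-sparse variable, then update the dual variables.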
(This article belongs to the Special Issue Sensors Signal Processing and Visual Computing 2019)
Open Access Article
Modeling Indoor Relative Humidity and Wood Moisture Content as a Proxy for Wooden Home Fire Risk
Sensors 2019, 19(22), 5050; https://doi.org/10.3390/s19225050 - 19 Nov 2019
Abstract
Severe wooden home conflagrations have previously been linked to the combination of a very dry indoor climate in inhabited buildings during wintertime, resulting in rapid fire development, and strong winds spreading the fire to neighboring structures. Knowledge about how ambient conditions increase the fire risk associated with dry indoor conditions is, however, lacking. In the present work, the moisture content of indoor wooden home wall panels was modeled based on ambient temperature and relative humidity recorded at meteorological stations as the climatic boundary conditions. The model comprises an air change rate based on ambient and indoor (22 °C) temperatures, indoor moisture sources, and wood panel moisture sorption processes; it was tested on four selected homes in Norway during the winter of 2015/2016. The results were compared to values recorded by indoor relative humidity sensors in the homes, which ranged from naturally ventilated early 1900s homes to a modern home with balanced ventilation. The modeled indoor relative humidity levels during cold weather agreed with recorded values to within 3% relative humidity (RH) root mean square deviation, and thus provided reliable information about the expected wood panel moisture content. This information was used to assess historic single home fire risk, represented by an estimated time to flashover during the studied period. Based on the modeling, it can be concluded that three days in Haugesund, Norway, in January 2016 were associated with very high conflagration risk due to dry indoor wooden materials and strong winds. In the future, the presented methodology may possibly be based on weather forecasts to predict increased conflagration risk a few days ahead. This could then enable proactive emergency responses for improved fire disaster risk management.
(This article belongs to the Special Issue Sensor Applications on Built Environment)
Open Access Article
Comparison of Three Algorithms for the Retrieval of Land Surface Temperature from Landsat 8 Images
Sensors 2019, 19(22), 5049; https://doi.org/10.3390/s19225049 - 19 Nov 2019
Abstract
The successful launch of the Landsat 8 satellite provides important data for the monitoring of urban heat island effects. Since the Landsat 8 TIRS data has two thermal infrared bands, it is suitable for many algorithms that retrieve the land surface temperature (LST). However, the selection of algorithms for retrieving the LST, the acquisition of algorithm input parameters, and the verification of the results are problems without obvious solutions. Taking Changchun City as an example, this paper used the mono-window algorithm (MWA), the split window algorithm (SWA), and the single-channel (SC) method to extract the LST from the Landsat 8 image and compared the three algorithms in terms of input parameters, accuracy, and sensitivity. The results show that all three algorithms can achieve good results in retrieving the LST. The SWA is the least sensitive to the error of the input parameters. The MWA and the SC method are sensitive to the error of the input parameters, and compared with the error of the land surface emissivity (LSE), these two algorithms are more sensitive to the error of the atmospheric water vapor content. In addition, the MWA is also very sensitive to the error of the effective mean atmospheric temperature.
(This article belongs to the Section Remote Sensors, Control, and Telemetry)
Open Access Article
A Complete Automatic Target Recognition System of Low Altitude, Small RCS and Slow Speed (LSS) Targets Based on Multi-Dimensional Feature Fusion
Sensors 2019, 19(22), 5048; https://doi.org/10.3390/s19225048 - 19 Nov 2019
Abstract
Low altitude, small radar cross-section (RCS), and slow speed (LSS) targets, for example small unmanned aerial vehicles (UAVs), have become increasingly significant. In this paper, we propose a new automatic target recognition (ATR) system and a complete ATR chain based on multi-dimensional features and a multi-layer classifier system using L-band holographic staring radar. We consider all steps of the processing required to reach a classification decision from the raw radar data: preprocessing of the raw measured Doppler data, including regularization and main frequency alignment; selection and extraction of effective features in the three dimensions of RCS, micro-Doppler, and motion; and the design of a multi-layer classifier system. We creatively design a multi-layer classifier system based on a directed acyclic graph. Helicopters, small fixed-wing and rotary-wing UAVs, as well as birds are considered for classification, and the measured data collected by the L-band radar demonstrate the effectiveness of the proposed complete ATR classification system. The results show that the ATR classification system based on multi-dimensional features and the k-nearest neighbors (KNN) classifier is the best, compared with support vector machine (SVM) and back propagation (BP) neural networks, providing the capability of correct classification with a probability of around 97.62%.
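The final classification stage can be sketched as a plain KNN vote over fused feature vectors. The toy features below (mean RCS, micro-Doppler bandwidth, speed) and their values are invented stand-ins for the paper's multi-dimensional features.

```python
import math
from collections import Counter

def knn_classify(train_feats, train_labels, query, k=3):
    """Vote among the k nearest fused feature vectors (Euclidean distance)."""
    nearest = sorted(range(len(train_feats)),
                     key=lambda i: math.dist(train_feats[i], query))[:k]
    return Counter(train_labels[i] for i in nearest).most_common(1)[0][0]

# Toy fused vectors: (mean RCS [dBsm], micro-Doppler bandwidth [Hz], speed [m/s]).
feats = [(-20, 400, 8), (-22, 380, 9), (-30, 50, 12), (-28, 60, 10), (-15, 900, 40)]
labels = ["rotary_uav", "rotary_uav", "bird", "bird", "helicopter"]
```

A real system would normalize each feature dimension before computing distances; the toy example skips that for brevity.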
(This article belongs to the Section Remote Sensors, Control, and Telemetry)
Open Access Article
Efficacy of Msplit Estimation in Displacement Analysis
Sensors 2019, 19(22), 5047; https://doi.org/10.3390/s19225047 - 19 Nov 2019
Abstract
Sets of geodetic observations often contain groups of observations that differ from each other in the functional model (or at least in the values of its parameters). Sets of observations obtained at various measurement epochs are a practical example in such a context. From the conventional point of view, for example in least squares estimation, the subsets in question should be separated before the parameter estimation. Another option is the application of Msplit estimation, which is based on the fundamental assumption that each observation is related to several competitive functional models. The optimal assignment of every observation to the respective functional model happens automatically during the estimation process. Considering deformation analysis, each observation is assigned to several functional models, each of which is related to one measurement epoch. This paper focuses on the efficacy of the method in detecting point displacements. The research is based on example observation sets and the application of Monte Carlo simulations. The results were compared with classical deformation analysis, which shows that Msplit estimation seems to be an interesting alternative to conventional methods. The most promising are the results obtained for disordered observation sets, where Msplit estimation reveals its natural advantage over the conventional approach.
Open Access Article
Body Dimension Measurements of Qinchuan Cattle with Transfer Learning from LiDAR Sensing
Sensors 2019, 19(22), 5046; https://doi.org/10.3390/s19225046 - 19 Nov 2019
Abstract
Since the body measuring task for Qinchuan cattle is time-consuming and stressful for both the cattle and the farmers, the demand for the automatic measurement of body dimensions has become more and more urgent. It is necessary to explore automatic measurement with deep learning to improve breeding efficiency and promote the development of the industry. In this paper, a novel approach to measuring the body dimensions of live Qinchuan cattle based on transfer learning is proposed. A deep Kd-network was trained with the classical three-dimensional (3D) point cloud datasets (PCD) of the ShapeNet datasets. After a series of processing steps on the PCD sensed by a light detection and ranging (LiDAR) sensor, the cattle silhouettes could be extracted, which after augmentation could be applied as an input layer to the Kd-network. With the output of a convolutional layer of the trained deep model, the output layer of the deep model could be applied to pre-train the fully connected network. The TrAdaBoost algorithm was employed to transfer the pre-trained convolutional and fully connected layers of the deep model. To classify and recognize the PCD of the cattle silhouette, the average accuracy rate after training with transfer learning could reach up to 93.6%. On the basis of silhouette extraction, the candidate region of the feature surface shape could be extracted with the mean curvature and Gaussian curvature. After the computation of the fast point feature histogram (FPFH) of the surface shape, the center of the feature surface could be recognized and the body dimensions of the cattle could finally be calculated. The experimental results showed that the comprehensive error of the body dimensions was close to 2%, which provides a feasible approach to the non-contact observation of the bodies of large-physique livestock without any human intervention.
(This article belongs to the Special Issue Smart Sensing Technologies for Agriculture)
Open Access Article
The Repair Strategy for Event Coverage Holes Based on Mobile Robots in Wireless Sensor and Robot Networks
Sensors 2019, 19(22), 5045; https://doi.org/10.3390/s19225045 - 19 Nov 2019
Abstract
In applications of wireless sensor and robot networks (WSRNs), there is an urgent need to accommodate flexible surveillance tasks in intricate surveillance scenarios. Under flexible surveillance missions and demands, event coverage holes occur in the networks. Conventional network repair methods based on geometric graph theory, such as the Voronoi diagram method, are unable to meet the conditions of flexible surveillance tasks and severe multi-constraint scenarios. Mobile robots show obvious advantages in terms of adaptation capacity and mobility in hazardous and severe scenarios. First, we propose an event coverage hole healing model for multi-constrained scenarios. Then, we propose a joint event coverage hole repair algorithm (JECHR) on the basis of global repair and local repair to apply mobile robots to heal event coverage holes in WSRNs. Different from conventional healing methods, the proposed algorithm can efficaciously heal event coverage holes that result from changing surveillance demands and scenarios. The JECHR algorithm can provide an optimal repair method that is able to adapt to different kinds of severe multi-constrained circumstances. Finally, a large number of repair simulation experiments verify the performance of the JECHR algorithm, which can be adapted to a variety of intricate surveillance tasks and application scenarios.
(This article belongs to the Special Issue Robot and Sensor Networks for Environmental Monitoring)
Open Access Article
Empirical Analysis of Safe Distance Calculation by the Stereoscopic Capturing and Processing of Images Through the Tailigator System
Sensors 2019, 19(22), 5044; https://doi.org/10.3390/s19225044 - 19 Nov 2019
Abstract
Driver disregard for the minimum safety distance increases the probability of rear-end collisions. In order to contribute to active safety on the road, we propose in this work a low-cost Forward Collision Warning system that captures and processes images. Using cameras located in the rear section of a leading vehicle, this system serves the purpose of discouraging tailgating behavior by the vehicle driving behind. We perform in this paper the pertinent field tests to assess system performance, focusing on the distance calculated from the processing of images and the error margins on a straight line, as well as in a curve. Based on the evaluation results, the current version of the Tailigator can be used at speeds of up to 50 km per hour without any restrictions. The measurements showed similar characteristics both on the straight line and in the curve. At close distances, between 3 and 5 m, the values deviated from the real value. At average distances, around 10 to 15 m, the Tailigator achieved its best results. At distances greater than 20 m, the deviations increased steadily with the distance. We contribute to the state of the art with an innovative low-cost system that identifies tailgating behavior and raises awareness, and which works independently of the rear vehicle’s communication capabilities or equipment.
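Stereo distance estimation from a camera pair reduces to the classic triangulation formula Z = f·B/d. The sketch below shows this relation; the calibration values in the usage are hypothetical, and the paper's actual pipeline (detection, matching, calibration) is not reproduced.

```python
def distance_from_disparity(focal_px, baseline_m, disparity_px):
    """Stereo triangulation: Z = f * B / d.

    focal_px     -- focal length in pixels (hypothetical calibration value)
    baseline_m   -- separation of the two rear cameras in metres
    disparity_px -- horizontal pixel shift of the target between the images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

The formula also explains the error pattern reported above: since Z is inversely proportional to disparity, a fixed matching error of a fraction of a pixel translates into a distance error that grows roughly with the square of the distance, which is why deviations increase steadily beyond 20 m.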
(This article belongs to the Special Issue Perception Sensors for Road Applications)
Open Access Article
Characterizing Word Embeddings for Zero-Shot Sensor-Based Human Activity Recognition
Sensors 2019, 19(22), 5043; https://doi.org/10.3390/s19225043 - 19 Nov 2019
Abstract
In this paper, we address zero-shot learning for sensor-based activity recognition using word embeddings. The goal of zero-shot learning is to estimate an unknown activity class (i.e., an activity that does not exist in a given training dataset) by learning to recognize components of activities expressed in semantic vectors. Existing zero-shot methods mainly use two kinds of representations as semantic vectors: attribute vectors and embedded word vectors. However, few zero-shot activity recognition methods based on embedding vectors have been studied, and for sensor-based activity recognition, no such studies exist, to the best of our knowledge. In this paper, we compare and thoroughly evaluate the zero-shot method with different semantic vectors: (1) attribute vectors, (2) embedding vectors, and (3) expanded embedding vectors, and analyze their correlation to performance. Our results indicate that the performance of the three spaces is similar, but the use of word embeddings leads to a more efficient method, since this type of semantic vector can be generated automatically. Moreover, our suggested method achieved higher accuracy than attribute-vector methods in cases where similar information exists in both the given sensor data and the semantic vector; the results of this study help select suitable classes and sensor data to build a training dataset.
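The core zero-shot step can be sketched as nearest-neighbor search in the semantic space: map the sensor input to a semantic vector, then pick the unseen class whose word embedding is most similar. The 3-d vectors below are toy stand-ins (real word embeddings such as word2vec or GloVe have hundreds of dimensions), and the mapping from sensor data to the semantic vector is assumed to exist.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def zero_shot_predict(semantic_vec, class_embeddings):
    """Pick the (possibly unseen) class whose embedding is closest
    in cosine similarity to the predicted semantic vector."""
    return max(class_embeddings, key=lambda c: cosine(semantic_vec, class_embeddings[c]))

# Toy 3-d "word embeddings" for three activity classes.
classes = {"walk": (0.9, 0.1, 0.0), "eat": (0.0, 0.8, 0.3), "sleep": (0.1, 0.0, 0.9)}
```

Because class embeddings come from a pretrained word model, adding a new activity class only requires looking up its word vector, which is the efficiency advantage the abstract highlights.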
Open Access Article
Multifunctional Medical Recovery and Monitoring System for the Human Lower Limbs
Sensors 2019, 19(22), 5042; https://doi.org/10.3390/s19225042 - 19 Nov 2019
Abstract
In order to develop multifunctional medical recovery and monitoring equipment for the human lower limb, a new, original mechanical structure with three degrees of mobility was created for the sagittal model of the leg. This mechanism is integrated in the equipment and includes elements whose functions correspond to the different anatomic parts: the superior part (femur), the median part (leg), and the foot. Independent relative rotation between these anatomic parts is ensured. The femur may oscillate through about 100° relative to the trunk. The median part (leg) rotates alternately through 150° relative to the superior segment. The lower part (foot) is initially placed at 90° relative to the median part and may rotate alternately through 25°. Depending on a patient's medical needs and recovery progress, the device's sensors allow the angular amplitude of the different segments of the human limb to be varied. Moreover, the mechanism may actuate a single anatomic leg segment, two segments, or all of them. Full article
(This article belongs to the Section Biomedical Sensors)

Open AccessArticle
Crowdsourced Security Reconstitution for Wireless Sensor Networks: Secrecy Amplification
Sensors 2019, 19(22), 5041; https://doi.org/10.3390/s19225041 - 19 Nov 2019
Abstract
Research in the area of security for Wireless Sensor Networks over the past two decades has yielded many interesting findings. We focus on the topic of (re-)securing link keys between sensor nodes through so-called secrecy amplification (SA) protocols. Crowdsourcing is at the very heart of these SA protocols. Not only do SA protocols work wonders even for constrained low-end nodes with no tamper resistance, but they also exhibit astonishing performance in networks under significant attacker control. Our work shows that even when 50% of all network links are compromised, SA protocols can re-secure over 90% of the link keys through an intriguingly simple crowdsourcing mechanism. These protocols allow us to retake control without any broadly coordinated cooperation, without knowledge of the compromised links, with only very limited knowledge of each particular network node, and independently of decisions made by other nodes. Our article first outlines the principles of and presents existing approaches to SA, introducing most of the important related concepts, then presents novel conclusive results for a realistic attacker model parametrised by attacker behaviour and capabilities. We undertook this work using two very different simulators, and we present here the results of analyses and detailed comparisons that have not previously been available. Finally, we report the first real, non-simulated network test results for the most attractive SA protocol, our implementations of which are available as open-source code for two platforms: Arduino and TinyOS. This work demonstrates the practical usability (and the attractive performance) of SA, serving as a ripe technology enabler for (among others) networks with many potentially compromised low-end devices. Full article
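As a toy sketch of the core idea behind secrecy amplification (not one of the specific protocols evaluated in the article), both endpoints of a possibly compromised link derive a fresh key from the old key and a nonce relayed over an independent path through a neighbour; the attacker must have intercepted both pieces to retain control of the link:

```python
import hashlib
import secrets

def amplify(link_key: bytes, nonce: bytes) -> bytes:
    # Both endpoints derive the new link key from the old key and the
    # relayed nonce.  If the attacker missed either the old key or the
    # nonce path, the refreshed link is secure again.
    return hashlib.sha256(link_key + nonce).digest()

old_key = secrets.token_bytes(16)
nonce = secrets.token_bytes(16)   # delivered via an intermediate node
new_a = amplify(old_key, nonce)   # computed at endpoint A
new_b = amplify(old_key, nonce)   # computed at endpoint B
assert new_a == new_b             # both ends agree on the refreshed key
```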

Open AccessArticle
Technology Support for Collaborative Preparation of Emergency Plans
Sensors 2019, 19(22), 5040; https://doi.org/10.3390/s19225040 - 19 Nov 2019
Abstract
Preparing a plan for reacting to a grave emergency is a significant first stage in disaster management. Such preparation can be carried out by a group of experts, and the best results are obtained when the group members have diverse backgrounds and access to different relevant data. The output of this stage should be a plan that is as comprehensive as possible, taking various perspectives into account. The group can organize itself as a collaborative decision-making team, with a process cycle involving modeling the process, defining the objectives of the decision outcome, gathering data, generating options, and evaluating them against the defined objectives. The meeting participants may have their own evidence concerning people's locations at the beginning of the emergency and their own assumptions about people's reactions once it occurs. Geographical information is typically crucial for the plan, because the plan must be based on the location of safe areas, the distances over which people must be moved, the connecting roads or other evacuation links, the ease of movement of rescue personnel, and other geography-based considerations. This paper deals with this scenario and introduces a computer tool intended to support the experts in preparing the plan by incorporating the various viewpoints and data. The group participants should be able to generate, visualize, and compare the outcomes of their contributions. The proposal is complemented with an example of use: a real-case simulation of a tsunami following an earthquake at a certain urban location. Full article

Open AccessArticle
Auction-Based Secondary Relay Selection on Overlay Spectrum Sharing in Hybrid Satellite–Terrestrial Sensor Networks
Sensors 2019, 19(22), 5039; https://doi.org/10.3390/s19225039 - 19 Nov 2019
Abstract
In this paper, we investigate auction-based secondary relay selection for overlay spectrum sharing in hybrid satellite–terrestrial sensor networks (HSTSNs), where both the decode-and-forward (DF) and amplify-and-forward (AF) relay protocols are analyzed based on time division multiple access (TDMA). As both the primary and secondary networks are rational and honest but have incomplete network information, they seek to maximize their payoffs through cooperation between the primary and secondary networks and competition among the secondary networks. Hence, a Vickrey auction is introduced to achieve effective and efficient secondary relay selection through distinct sub-time-slot allocation in a single shot, in a distributed manner. Finally, numerical simulations are provided to validate the effectiveness of the auction mechanism for secondary relay selection in cooperative spectrum sharing in HSTSNs. In addition, the effects of key factors on the performance of the auction mechanism are analyzed in detail. Full article
(This article belongs to the Section Sensor Networks)
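The Vickrey (second-price sealed-bid) mechanism mentioned above can be sketched in a few lines: the highest-bidding secondary relay wins the sub-time slot but pays only the second-highest bid, which is what makes truthful bidding a dominant strategy. The relay names and bid values are purely illustrative:

```python
def vickrey_winner(bids):
    # bids: {relay_id: bid}.  The winner is the highest bidder, but the
    # price charged is the second-highest bid (second-price rule).
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# Three secondary relays bid for the sub-time slot (hypothetical values).
print(vickrey_winner({"relay1": 5.0, "relay2": 8.0, "relay3": 6.5}))
```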

Open AccessArticle
Compact Inner-Wall Grating Slot Microring Resonator for Label-Free Sensing
Sensors 2019, 19(22), 5038; https://doi.org/10.3390/s19225038 - 19 Nov 2019
Abstract
In this paper, we present and analyze a compact inner-wall grating slot microring resonator (IG-SMRR), with a footprint of less than 13 μm × 13 μm, on the silicon-on-insulator (SOI) platform for label-free sensing; it comprises a slot microring resonator (SMRR) and an inner-wall grating (IG). Owing to the combination of the SMRR and IG, its detection range is significantly enhanced and is no longer limited by the free spectral range (FSR). The IG-SMRR has an ultra-large quasi-FSR of 84.5 nm as its detection range, an enlargement factor of more than 3 compared with a conventional SMRR. The concentration sensitivities for sodium chloride and D-glucose solutions are 996.91 pm/% and 968.05 pm/%, respectively, and the corresponding refractive index (RI) sensitivities are 559.5 nm/RIU (refractive index unit) and 558.3 nm/RIU, respectively. This investigation of the combination of an SMRR and an IG is a valuable exploration of future label-free sensing applications requiring an ultra-large detection range and ultra-high sensitivity. Full article
(This article belongs to the Special Issue Optical–Resonant Microsensors)

Open AccessArticle
Design and CFD Analysis of the Fluid Dynamic Sampling System of the “MicroMED” Optical Particle Counter
Sensors 2019, 19(22), 5037; https://doi.org/10.3390/s19225037 - 19 Nov 2019
Abstract
MicroMED is an optical particle counter that will be part of the ExoMars 2020 mission. Its goal is to provide the first ever in situ measurements of both the size distribution and the concentration of airborne Martian dust. The instrument samples Martian air and is based on an optical system that illuminates the sampled fluid with a collimated laser beam and detects embedded dust particles through their scattered light. By analyzing the scattered-light profile, information about the dust grain size and speed can be obtained. For this to work, MicroMED's fluid dynamic design must allow dust grains to cross the laser-illuminated sensing volume. The instrument's Elegant Breadboard was previously developed and tested, and Computational Fluid Dynamics (CFD) analysis identified its critical issues. The present work describes how these design issues were solved by means of a CFD simulation campaign, whose results were also validated experimentally. The updated design was then implemented in MicroMED's Flight Model. Full article

Open AccessArticle
A Fire Reconnaissance Robot Based on SLAM Position, Thermal Imaging Technologies, and AR Display
Sensors 2019, 19(22), 5036; https://doi.org/10.3390/s19225036 - 18 Nov 2019
Abstract
Due to hot toxic smoke and unknown risks under fire conditions, detection and reconnaissance are essential for avoiding casualties. A fire reconnaissance robot was therefore developed to address this problem by providing important fire information to fire fighters. The robot consists of three main systems: a display operating system, video surveillance, and mapping and positioning navigation. Augmented reality (AR) goggle technology with a display operating system was developed to free fire fighters' hands, enabling them to focus on the rescue process rather than on operating the system. To cope with smoke disturbance, a thermal imaging video surveillance system was included to extract information from the complicated fire conditions. Meanwhile, simultaneous localization and mapping (SLAM) technology was adopted to build the map with the help of the mapping and positioning navigation system. This provides a real-time map under rapidly changing fire conditions to guide fire fighters to the fire sources or trapped occupants. Our experiments found that all the tested system components work well under fire conditions: the video surveillance system produces clear images in dense smoke and high-temperature environments; SLAM shows high accuracy, with an average error of less than 3.43%; the positioning accuracy error is 0.31 m; and the maximum error of the navigation system is 3.48%. The developed fire reconnaissance robot provides a practical platform for improving fire rescue efficiency and reducing fire-fighter casualties. Full article
(This article belongs to the Section Physical Sensors)

Open AccessArticle
An Audification and Visualization System (AVS) of an Autonomous Vehicle for Blind and Deaf People Based on Deep Learning
Sensors 2019, 19(22), 5035; https://doi.org/10.3390/s19225035 - 18 Nov 2019
Abstract
When blind and deaf people are passengers in fully autonomous vehicles, an intuitive and accurate visualization screen should be provided for the deaf, and an audification system with speech-to-text (STT) and text-to-speech (TTS) functions should be provided for the blind. However, such systems cannot access the fault self-diagnosis information and the instrument cluster information that indicate the current state of the vehicle while driving. This paper proposes an audification and visualization system (AVS) for autonomous vehicles, based on deep learning, to solve this problem for blind and deaf people. The AVS consists of three modules. The data collection and management module (DCMM) stores and manages the data collected from the vehicle. The audification conversion module (ACM) has a speech-to-text submodule (STS) that recognizes a user's speech and converts it to text data, and a text-to-wave submodule (TWS) that converts text data to voice. The data visualization module (DVM) visualizes the collected sensor data, fault self-diagnosis data, etc., and arranges the visualized data according to the size of the vehicle's display. Experiments show that the time taken to adjust visualization graphic components in on-board diagnostics (OBD) was approximately 2.5 times shorter than in a cloud server. In addition, the overall computation time of the AVS was approximately 2 ms shorter than that of the existing instrument cluster. Because the proposed AVS enables blind and deaf people to select only what they want to hear and see, it reduces the transmission load and greatly increases the safety of the vehicle. If the AVS is introduced in a real vehicle, it can help prevent accidents involving disabled and other passengers. Full article
(This article belongs to the Special Issue Smart Sensors and Devices in Artificial Intelligence)

Open AccessArticle
A Triple Checked Partial Ambiguity Resolution for GPS/BDS RTK Positioning
Sensors 2019, 19(22), 5034; https://doi.org/10.3390/s19225034 - 18 Nov 2019
Abstract
Reliable and accurate carrier-phase ambiguity resolution is the key to high-precision Global Navigation Satellite System (GNSS) positioning and applications. With the fast development of modern GNSS, the increased number of satellites and ambiguities makes it hard to fix all ambiguities completely and correctly. The partial ambiguity fixing technique, which selects a suitable subset of the high-dimensional ambiguities to fix, is beneficial for improving the fixing success rate and the reliability of ambiguity resolution. In this contribution, the bootstrapping success rate, a bounded fixed-failure ratio test, and a newly defined baseline precision defect are used to select the ambiguity subset. A model- and data-driven partial ambiguity resolution method with these three checks imposed on it is then proposed, named Triple-Checked Partial Ambiguity Resolution (TC-PAR). The comprehensive performance of TC-PAR relative to the fully fixed LAMBDA method is also analyzed against several criteria, including the fixing rate, the fixing success rate, and the correct fixing rate of the ambiguities, as well as the precision defect and RMS of the baseline solution. The results show that TC-PAR significantly improves the fixing success rate of the ambiguities while achieving a baseline precision comparable to that of the LAMBDA method, both at the centimeter level once the ambiguities are fixed. Full article
(This article belongs to the Section Remote Sensors, Control, and Telemetry)
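The bootstrapping success rate used as the first check is a standard bound from the GNSS ambiguity-resolution literature: the product of per-ambiguity terms computed from the conditional standard deviations of the decorrelated ambiguities. A sketch (the standard-deviation values below are made up for illustration, not taken from the paper) also shows why dropping an imprecise ambiguity from the fixed subset raises the bound:

```python
import math

def bootstrap_success_rate(cond_stds):
    # Bootstrapping success-rate bound:
    #   P = prod_i (2 * Phi(1 / (2 * sigma_i)) - 1)
    # where sigma_i are the conditional standard deviations of the
    # decorrelated ambiguities (in cycles) and Phi is the normal CDF.
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    p = 1.0
    for s in cond_stds:
        p *= 2.0 * phi(1.0 / (2.0 * s)) - 1.0
    return p

# Dropping the least precise ambiguity from the subset raises the bound.
full = bootstrap_success_rate([0.05, 0.08, 0.12, 0.40])
partial = bootstrap_success_rate([0.05, 0.08, 0.12])
print(full < partial)  # True
```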

Open AccessArticle
Classification and Identification of Industrial Gases Based on Electronic Nose Technology
Sensors 2019, 19(22), 5033; https://doi.org/10.3390/s19225033 - 18 Nov 2019
Abstract
The rapid detection and identification of industrial gases, which have complex compositions and varying specifications, is a challenging problem. This paper presents a method based on the kernel discriminant analysis (KDA) algorithm to identify industrial gases. The smell prints of four typical industrial gases were collected by an electronic nose. Features extracted from the collected data were used for gas identification with different classification algorithms, including principal component analysis (PCA), linear discriminant analysis (LDA), PCA + LDA, and KDA. To obtain better classification results, we reduced the dimensionality of the original high-dimensional data and chose a good classifier. The KDA algorithm provided a classification accuracy of 100% when the kernel-function offset was set to c = 10 and the degree of freedom to d = 5, an accuracy 4.17% higher than that obtained using PCA. In terms of standard deviation, the KDA algorithm has the highest recognition rate and the lowest time consumption. Full article
(This article belongs to the Section Intelligent Sensors)

Open AccessArticle
Signal Denoising Method Using AIC–SVD and Its Application to Micro-Vibration in Reaction Wheels
Sensors 2019, 19(22), 5032; https://doi.org/10.3390/s19225032 - 18 Nov 2019
Abstract
To suppress noise in signals, a denoising method called AIC–SVD is proposed on the basis of singular value decomposition (SVD) and the Akaike information criterion (AIC). First, the Hankel matrix is chosen as the trajectory matrix of the signal, and its optimal numbers of rows and columns are selected according to the maximum energy of the singular values. On the basis of an improved AIC, the valid order of the optimal matrix is determined for vibration signals mixed with Gaussian white noise and colored noise. The denoised signals are then reconstructed by the inverse SVD operation and the averaging method. To verify the effectiveness of AIC–SVD, it is compared with wavelet threshold denoising (WTD) and empirical mode decomposition with a Savitzky–Golay filter (EMD–SG). Furthermore, a comprehensive indicator of denoising (CID) is introduced to describe denoising performance. The results show that the denoising effect of AIC–SVD is significantly better than those of WTD and EMD–SG. When AIC–SVD is applied to the micro-vibration signals of reaction wheels, weak harmonic parameters can be successfully extracted during pre-processing. The proposed method is self-adaptive and robust while avoiding over-denoising. Full article
(This article belongs to the Special Issue Sensor Signal and Information Processing III)
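The SVD part of the pipeline can be sketched as follows: embed the signal in a Hankel trajectory matrix, truncate to the first few singular components, and reconstruct by anti-diagonal averaging. Note that the truncation order is hardcoded here, whereas the paper's contribution is selecting it automatically via an improved AIC; all sizes and signal parameters below are illustrative:

```python
import numpy as np

def svd_denoise(signal, rows, order):
    # Build a Hankel trajectory matrix, keep only the first `order`
    # singular components, then reconstruct the 1-d signal by
    # averaging along the anti-diagonals (inverse Hankel step).
    n = len(signal)
    cols = n - rows + 1
    H = np.array([signal[i:i + cols] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hd = (U[:, :order] * s[:order]) @ Vt[:order]
    out = np.zeros(n)
    cnt = np.zeros(n)
    for i in range(rows):
        out[i:i + cols] += Hd[i]
        cnt[i:i + cols] += 1.0
    return out / cnt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)
den = svd_denoise(noisy, rows=64, order=2)  # a pure sinusoid has rank 2
print(np.std(den - clean) < np.std(noisy - clean))  # True
```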

Open AccessArticle
A Differential Resonant Voltage Sensor Consisting of Piezo Bimorph and Quartz Crystal Double-Ended Tuning Fork Resonators
Sensors 2019, 19(22), 5031; https://doi.org/10.3390/s19225031 - 18 Nov 2019
Abstract
A differential resonant voltage sensor with frequency output was developed by bonding two quartz crystal double-ended tuning forks (DETFs) onto the two sides of a piezo bimorph. An applied voltage induces tensile and compressive deformation in the upper and bottom layers of the piezo bimorph, which causes the resonant frequency of one DETF to increase and that of the other to decrease. The differential output of the resonance frequencies of the dual DETFs therefore greatly reduces the effect of temperature drift. In addition, the input resistance of the piezo bimorph reaches a few hundred GΩ, so the sensor has almost no influence on the DC voltage under test. The fabricated device showed a linear characteristic over its measurement range of ±700 V, with a sensitivity of 0.75 Hz/V, a resolution of 0.007% (0.1 V), and a hysteresis of 0.76% of the full range. The quality factor of the DETFs was about 3661 (in air). This novel resonant voltage sensor, with its extremely low power consumption, is promising for measuring or monitoring DC voltage in various fields. Full article
(This article belongs to the Section Physical Sensors)
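A small sketch of why the differential read-out cancels temperature drift: voltage shifts the two DETF frequencies in opposite directions, while drift shifts both the same way and drops out of the difference. The nominal frequencies, per-fork sensitivity, and drift value below are hypothetical, chosen only so that the combined sensitivity matches the reported 0.75 Hz/V:

```python
def voltage_from_freqs(f1, f2, f01, f02, s_fork):
    # f1, f2: measured DETF frequencies; f01, f02: nominal frequencies.
    # The common-mode (temperature) shift cancels in the difference;
    # s_fork is the per-fork sensitivity in Hz/V, so the differential
    # sensitivity is 2 * s_fork.
    return ((f1 - f01) - (f2 - f02)) / (2.0 * s_fork)

f01 = f02 = 32768.0    # nominal DETF frequencies (Hz), hypothetical
s_fork = 0.375         # per-fork sensitivity, so 2 * s_fork = 0.75 Hz/V
v, drift = 100.0, 2.0  # applied voltage (V) and common-mode drift (Hz)
f1 = f01 + s_fork * v + drift
f2 = f02 - s_fork * v + drift
print(voltage_from_freqs(f1, f2, f01, f02, s_fork))  # 100.0
```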

Open AccessArticle
Automatic Extraction of Structural and Non-Structural Road Edges from Mobile Laser Scanning Data
Sensors 2019, 19(22), 5030; https://doi.org/10.3390/s19225030 - 18 Nov 2019
Abstract
Accurate road information is important for applications involving road maintenance, intelligent transportation, and road network updates. Mobile laser scanning (MLS) can effectively extract road information. However, accurately extracting road edges from large-scale data under complex road conditions, covering both structural and non-structural road types, remains difficult. In this study, a robust method is proposed to automatically extract structural and non-structural road edges based on a topological network of laser points between adjacent scan lines and auxiliary surfaces. Road and curb points are extracted mainly from the roughness of the extracted surface, without relying on traditional thresholds (e.g., height jump, slope, and density). Five large-scale road datasets, containing different types of road curbs and complex road scenes, were used to evaluate the practicality, stability, and validity of the proposed method via qualitative and quantitative analyses. The measured correctness, completeness, and quality of the extracted road edges were over 95.5%, 91.7%, and 90.9%, respectively. These results confirm that the proposed method can extract road edges from large-scale MLS datasets without auxiliary intensity, image, or geographic data. The proposed method is effective regardless of whether the road width is fixed or the road is regular, and despite the presence of pedestrians and vehicles. Most importantly, the proposed method provides a valuable solution for road edge extraction that is useful for road authorities when developing intelligent transportation systems, such as those required by self-driving vehicles. Full article
(This article belongs to the Section Remote Sensors, Control, and Telemetry)

Open AccessArticle
Proposal of the Tactile Glove Device
Sensors 2019, 19(22), 5029; https://doi.org/10.3390/s19225029 - 18 Nov 2019
Abstract
This project aims to develop a tactile glove device and a virtual environment in the context of the tactile internet. The tactile glove allows a human operator to interact remotely with objects in a 3D environment through tactile feedback, or tactile sensation; in other words, the human operator is able to feel the contours and textures of virtual objects. Applications such as remote diagnostics, games, remote analysis of materials, and others in which objects can be virtualized can be significantly improved with this kind of device. Such gloves have been an essential device in research on the next generation of the internet, called the “Tactile Internet”, into which this project fits. Unlike the works presented in the literature, the novelty of this work lies in the architecture and the tactile devices developed, which operate within the 10 ms round-trip latency limit required in a tactile internet environment. Details of the hardware and software designs of the tactile glove, as well as of the virtual environment, are described. Results and a comparative analysis of the round-trip latency in the tactile internet environment are presented. Full article
(This article belongs to the Special Issue Wearable Electronics, Smart Textiles and Computing)

Open AccessArticle
Distributed Reliable and Efficient Transmission Task Assignment for WSNs
Sensors 2019, 19(22), 5028; https://doi.org/10.3390/s19225028 - 18 Nov 2019
Abstract
Task assignment is a crucial problem in wireless sensor networks (WSNs) that may affect the completion quality of sensing tasks. From the perspective of global optimization, a transmission-oriented reliable and energy-efficient task allocation (TRETA) scheme is proposed, based on a comprehensive multi-level view of the network and an evaluation model for transmission in WSNs. To deliver better fault tolerance, TRETA adjusts dynamically in an event-driven mode. To solve the reliable and efficient distributed task allocation problem in WSNs, two distributed task assignment schemes based on TRETA are proposed. In the first, the sink assigns reliability requirements to all cluster heads, and each cluster head performs local task allocation according to its assigned target reliability constraints. Simulation results show a reduction in the communication cost and latency of task allocation compared to centralized task assignment. In the second, the global view is obtained by fetching local views from multiple sink nodes, so that the multiple sinks hold a consistent comprehensive view for global optimization. Responding to local task allocation requirements without communicating with remote nodes overcomes the significant communication overhead and considerable delay of centralized task allocation in large-scale sensor networks, and offers better scalability. Full article
(This article belongs to the Special Issue Smart Sensors and Devices in Artificial Intelligence)
