Table of Contents

Sensors, Volume 19, Issue 17 (September-1 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Cover Story: Portable laser-induced plasma spectroscopy (LIPS) is recognized as a powerful tool for addressing a [...]
Open Access Article
Soft Sensing of Silicon Content via Bagging Local Semi-Supervised Models
Sensors 2019, 19(17), 3814; https://doi.org/10.3390/s19173814 - 03 Sep 2019
Abstract
The silicon content in industrial blast furnaces is difficult to measure directly online. Traditional soft sensors do not efficiently utilize the useful information hidden in process variables. In this work, bagging local semi-supervised models (BLSM) for online silicon content prediction are proposed. They integrate the bagging strategy, just-in-time learning, and the semi-supervised extreme learning machine into a unified soft sensing framework. With the online semi-supervised learning method, the valuable information hidden in unlabeled data can be explored and absorbed into the prediction model. Application results on an industrial blast furnace show that BLSM achieves better prediction performance than other supervised soft sensors.
(This article belongs to the Special Issue Soft Sensors)
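The bagging step of the method above can be sketched independently of the base learner: each base model is fit on a bootstrap resample, and the ensemble prediction is the average. A minimal illustration on synthetic data, with plain ridge regression standing in for the paper's semi-supervised extreme learning machine:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ridge(X, y, lam=1e-2):
    """Closed-form ridge regression: w = (X^T X + lam I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def bagging_predict(X_train, y_train, X_query, n_models=10):
    """Average the predictions of base models fit on bootstrap resamples."""
    n = len(X_train)
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)          # bootstrap sample
        w = fit_ridge(X_train[idx], y_train[idx])
        preds.append(X_query @ w)
    return np.mean(preds, axis=0)

# synthetic process data: y = 2*x0 - x1 + noise
X = rng.normal(size=(200, 2))
y = 2 * X[:, 0] - X[:, 1] + 0.05 * rng.normal(size=200)
y_hat = bagging_predict(X, y, X[:5])
```

The paper's local (just-in-time) aspect would additionally select a relevant neighborhood of samples for each query before fitting; that selection is omitted here.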

Open Access Article
Characterization of Simple and Double Yeast Cells Using Dielectrophoretic Force Measurement
Sensors 2019, 19(17), 3813; https://doi.org/10.3390/s19173813 - 03 Sep 2019
Abstract
Dielectrophoretic force is the electric force experienced by particles subjected to non-uniform electric fields. In recent years, many dielectrophoretic force (DEP) applications have been developed, most of them centered on particle positioning and manipulation, while DEP particle characterization has been left in the background. Likewise, such characterizations have studied the electrical properties of particles only from a qualitative point of view. This article focuses on the quantitative measurement of the dielectrophoretic force on cells, specifically yeast cells. The measurements are obtained by combining a theoretical model and an instrumental method, both developed and described in the present article, based on a dielectrophoretic chamber with two V-shaped electrodes. In this study, 845 cells were measured; for each one, six velocities were recorded at different points along its trajectory. Furthermore, the chamber design is repeatable, and this is the first time that measurements of dielectrophoretic force and cell velocity for double yeast cells have been accomplished. To validate the results obtained in the present research, they have been compared with the dielectric properties of yeast cells reported in the existing literature.
(This article belongs to the Section Physical Sensors)

Open Access Article
SINS/CNS/GNSS Integrated Navigation Based on an Improved Federated Sage–Husa Adaptive Filter
Sensors 2019, 19(17), 3812; https://doi.org/10.3390/s19173812 - 03 Sep 2019
Abstract
Among multi-source navigation filters, the federated filter, as a distributed method, has a small computational load under Gaussian state noise and easily achieves global optimization. However, when the state noise is time-varying or its initial estimate is inaccurate, the result of the federated filter can deviate considerably from the true value. For systems with time-varying noise, adaptive filters are widely used for their remarkable advantages. Therefore, this paper proposes a federated Sage–Husa adaptive filter for multi-source navigation systems with time-varying or mis-estimated state noise. Because the federated and adaptive principles update the covariance of the state noise differently, the two updates must be weighted to obtain a combined method with both stability and adaptability. In addition, according to the characteristics of the system, the weighting coefficient is formed by an exponential function. This federated adaptive filter is applied to SINS/CNS/GNSS integrated navigation, and the simulation results show that the method is effective.
(This article belongs to the Section Physical Sensors)
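The exponentially weighted blend of the two covariance updates can be sketched as follows. This is an illustrative reconstruction, not the paper's exact equations: the fading-factor form of the Sage–Husa update and the exponential schedule (parameters `b` and `tau`) are assumptions.

```python
import numpy as np

def sage_husa_q_update(Q_prev, innovation, K, k, b=0.95):
    """Recursive Sage-Husa estimate of the state-noise covariance.
    d_k is a fading factor; innovation and gain K come from the filter step."""
    d_k = (1 - b) / (1 - b ** (k + 1))
    Kv = K @ innovation
    return (1 - d_k) * Q_prev + d_k * np.outer(Kv, Kv)

def blended_q(Q_federated, Q_adaptive, k, tau=20.0):
    """Exponentially weighted combination of the federated reset and the
    adaptive estimate; the weight decays toward the adaptive term over time."""
    w = np.exp(-k / tau)
    return w * Q_federated + (1 - w) * Q_adaptive
```

At step k = 0 the blend returns the federated covariance; as k grows it converges to the adaptive estimate, trading stability early for adaptability later.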

Open Access Article
On the Application of Laser Vibrometry to Perform Structural Health Monitoring in Non-Stationary Conditions of a Hydropower Dam
Sensors 2019, 19(17), 3811; https://doi.org/10.3390/s19173811 - 03 Sep 2019
Abstract
This paper presents the first application of the Laser Doppler Vibrometer (LDV) in non-stationary conditions within a hydropower plant powerhouse. The aim of this research is to develop a methodology to include non-contact vibration monitoring as part of the structural health monitoring of concrete dams. We performed in-situ structural vibration measurements on the run-of-the-river Brežice dam in Slovenia during the start-up tests and regular operation. In recent decades, the rapid development of laser measurement technology has provided powerful methods for a variety of measuring tasks; despite this, the use of lasers for measurement has been limited to sites with stationary conditions. This paper explains the elimination of the pseudo-vibration and measurement noise inherent in the non-stationary conditions of the site. Once the noise is removed, fatigue of the different structural elements of the powerhouse can be identified if significant changes in the eigenfrequencies are observed over time. Laser technology is intended to complement the regular monitoring activities on large dams, since the observation and analysis of integrity parameters provide indispensable information for decision making and for maintaining the good structural health of ageing dams.
(This article belongs to the Special Issue Sensors for Structural Health Monitoring and Condition Monitoring)

Open Access Article
Localization of Two Sound Sources Based on Compressed Matched Field Processing with a Short Hydrophone Array in the Deep Ocean
Sensors 2019, 19(17), 3810; https://doi.org/10.3390/s19173810 - 03 Sep 2019
Abstract
Passive multiple sound source localization is a challenging problem in underwater acoustics, especially for a short hydrophone array in the deep ocean. Several attempts have been made to solve this problem by applying compressive sensing (CS) techniques. In this study, a greedy algorithm from CS theory combined with a spatial filter was developed and applied to a two-source localization scenario in the deep ocean. The method applies the greedy algorithm with a spatial filter over several iterative loops. The simulated and experimental data suggest that the proposed method improves localization performance over the Bartlett processor and over the greedy algorithm without a spatial filter. Additionally, the effects on source localization of factors such as the array aperture, the number of hydrophones or snapshots, and the signal-to-noise ratio (SNR) are demonstrated.
(This article belongs to the Section Physical Sensors)
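The greedy-algorithm part of this approach (without the spatial filter) follows the standard matching-pursuit template: correlate the residual with a dictionary of replica vectors, pick the best match, and subtract its least-squares contribution. A toy sketch, with an orthonormal Fourier dictionary standing in for the acoustic replica fields:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select the k dictionary columns
    (candidate source positions) that best explain the measurement y."""
    residual = y.copy()
    support = []
    for _ in range(k):
        scores = np.abs(A.conj().T @ residual)   # match residual to atoms
        scores[support] = 0.0                    # never re-pick an atom
        support.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef      # remove the explained part
    return sorted(support)

n = 32
F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # orthonormal "replica" dictionary
y = 1.0 * F[:, 7] + 0.6 * F[:, 23]       # two simulated sources
found = omp(F, y, 2)
```

With orthonormal atoms the two source columns are recovered exactly; the paper's contribution is interleaving such greedy iterations with a spatial filter, which is omitted here.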

Open Access Article
Dynamic Residual Dense Network for Image Denoising
Sensors 2019, 19(17), 3809; https://doi.org/10.3390/s19173809 - 03 Sep 2019
Abstract
Deep convolutional neural networks have achieved great performance on various image restoration tasks. Specifically, the residual dense network (RDN) has achieved great results on image noise reduction by cascading multiple residual dense blocks (RDBs) to make full use of hierarchical features. However, the RDN only performs well in denoising at a single noise level, and its computational cost increases significantly with the number of RDBs while the denoising effect improves only slightly. To overcome this, we propose the dynamic residual dense network (DRDN), a dynamic network that can selectively skip some RDBs based on the amount of noise in the input image. Moreover, the DRDN allows the denoising strength to be adjusted manually to obtain the best outputs, which makes the network more effective for real-world denoising. Our proposed DRDN performs better than the RDN and reduces the computational cost by 40–50%. Furthermore, we surpass the state-of-the-art CBDNet by 1.34 dB on the real-world noise benchmark.
(This article belongs to the Section Intelligent Sensors)

Open Access Review
Multi-Sensor Fusion for Activity Recognition—A Survey
Sensors 2019, 19(17), 3808; https://doi.org/10.3390/s19173808 - 03 Sep 2019
Abstract
In Ambient Intelligence (AmI), the activity a user is engaged in is an essential part of the context, so its recognition is of paramount importance for applications in areas such as sports, medicine, and personal safety. The concurrent use of multiple sensors for the recognition of human activities in AmI is good practice, because information missed by one sensor can sometimes be provided by the others, and many works have shown an accuracy improvement compared with single sensors. However, there are many different ways of integrating the information from each sensor, and almost every author reporting sensor fusion for activity recognition uses a different variant or combination of fusion methods, so the need for clear guidelines and generalizations in sensor data integration seems evident. In this survey, we review, following a classification, the many fusion methods proposed in the literature for activity recognition from sensor data; we examine their relative merits as reported (and, in some cases, as replicated), compare the methods, and assess the trends in the area.
(This article belongs to the Special Issue Information Fusion in Sensor Networks)

Open Access Article
Noise Suppression for GPR Data Based on SVD of Window-Length-Optimized Hankel Matrix
Sensors 2019, 19(17), 3807; https://doi.org/10.3390/s19173807 - 03 Sep 2019
Abstract
Ground-penetrating radar (GPR) is an effective tool for subsurface detection. Due to the influence of the environment and equipment, the echoes of GPR contain significant noise. In order to suppress noise for GPR data, a method based on singular value decomposition (SVD) of a window-length-optimized Hankel matrix is proposed in this paper. First, SVD is applied to decompose the Hankel matrix of the original data, and the fourth root of the fourth central moment of singular values is used to optimize the window length of the Hankel matrix. Then, the difference spectrum of singular values is used to construct a threshold, which is used to distinguish between components of effective signals and components of noise. Finally, the Hankel matrix is reconstructed with singular values corresponding to effective signals to suppress noise, and the denoised data are recovered from the reconstructed Hankel matrix. The effectiveness of the proposed method is verified with both synthetic and field measurements. The experimental results show that the proposed method can effectively improve noise removal performance under different detection scenarios.
(This article belongs to the Special Issue Recent Advancements in Radar Imaging and Sensing Technology)
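The core pipeline described here (Hankel embedding, SVD truncation, anti-diagonal averaging) can be sketched in a few lines. The automatic window-length and threshold selection via the fourth central moment and the difference spectrum are the paper's contributions; they are replaced below by fixed values, and the trace is a synthetic sinusoid rather than a GPR echo:

```python
import numpy as np

def hankel(x, L):
    """L x (N-L+1) Hankel (trajectory) matrix of signal x."""
    N = len(x)
    return np.array([x[i:i + N - L + 1] for i in range(L)])

def dehankel(H):
    """Average anti-diagonals to recover a 1-D signal from a Hankel matrix."""
    L, K = H.shape
    N = L + K - 1
    x = np.zeros(N)
    for n in range(N):
        vals = [H[i, n - i] for i in range(max(0, n - K + 1), min(L, n + 1))]
        x[n] = np.mean(vals)
    return x

def svd_denoise(x, L, rank):
    """Keep only the `rank` largest singular components of the Hankel matrix."""
    U, s, Vt = np.linalg.svd(hankel(x, L), full_matrices=False)
    H_clean = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return dehankel(H_clean)

t = np.linspace(0, 1, 200)
clean = np.sin(2 * np.pi * 5 * t)          # a pure sinusoid has a rank-2 Hankel matrix
rng = np.random.default_rng(2)
noisy = clean + 0.3 * rng.normal(size=t.size)
denoised = svd_denoise(noisy, L=50, rank=2)
```

The rank-2 truncation discards most of the noise energy while keeping the sinusoidal component, so the denoised trace is much closer to the clean one than the noisy input is.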

Open Access Review
Development and Application of Aptamer-Based Surface-Enhanced Raman Spectroscopy Sensors in Quantitative Analysis and Biotherapy
Sensors 2019, 19(17), 3806; https://doi.org/10.3390/s19173806 - 03 Sep 2019
Abstract
Surface-enhanced Raman scattering (SERS) is one of the most distinctive and important Raman techniques. A strong Raman signal can be observed when target molecules are adsorbed onto the surface of a SERS substrate, especially at the "hot spots" of the substrate. Early research focused on exploring highly active SERS substrates and their detection applications in label-free SERS technology. However, it is a great challenge to use label-free SERS sensors for detecting hydrophobic or non-polar molecules, especially in complex systems or at low concentrations. Therefore, antibodies, aptamers, and antimicrobial peptides have been used to effectively improve target selectivity and meet analysis requirements. Among these selective elements, aptamers are easy to synthesize and modify, and their stability, affinity, and specificity are extremely good; they have been successfully used in a variety of testing areas. Combining SERS detection technology with aptamer recognition not only improves the selection accuracy of target molecules, but also improves the sensitivity of the analysis. Variations of aptamer-based SERS sensors have been developed and have achieved satisfactory results in the analysis of small molecules, pathogenic microorganisms, mycotoxins, tumor markers, and other functional molecules, as well as in successful photothermal therapy of tumors. Herein, we present the latest advances in aptamer-based SERS sensors, the assembled sensing platforms, and the strategies for signal amplification. Furthermore, the existing problems and potential trends of aptamer-based SERS sensors are discussed.
(This article belongs to the Section Chemical Sensors)

Open Access Article
Detecting Moments of Stress from Measurements of Wearable Physiological Sensors
Sensors 2019, 19(17), 3805; https://doi.org/10.3390/s19173805 - 03 Sep 2019
Abstract
There is a rich repertoire of methods for stress detection using various physiological signals and algorithms. However, there is still a gap in research efforts moving from laboratory studies to real-world settings. Few studies have verified whether a physiological response in real-world settings is a reaction to an extrinsic stimulus in the participant's environment. Typically, physiological signals are correlated with the spatial characteristics of the physical environment, supported by video records or interviews. The present research aims to bridge the gap between laboratory settings and real-world field studies by introducing a new algorithm that leverages the capabilities of wearable physiological sensors to detect moments of stress (MOS). We propose a rule-based algorithm based on galvanic skin response and skin temperature, combining empirical findings with expert knowledge to ensure transferability between laboratory settings and real-world field studies. To verify the algorithm, we carried out a laboratory experiment to create a "gold standard" of physiological responses to stressors. We validated the algorithm in real-world field studies using a mixed-method approach, spatially correlating the participants' perceived stress, geo-located questionnaires, and the corresponding real-world situation from video. Results show that the algorithm detects MOS with 84% accuracy, with high agreement between measured (by wearable sensors), reported (by questionnaires and eDiary entries), and recorded (by video) stress events. The urban stressors identified in the real-world studies originate from traffic congestion, dangerous driving situations, and crowded areas such as tourist attractions. The presented research can enhance stress detection in real life and may thus foster a better understanding of the circumstances that bring about physiological stress in humans.
(This article belongs to the Special Issue Sensors for Affective Computing and Sentiment Analysis)
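The rule the abstract describes (a stress moment flagged when galvanic skin response rises while skin temperature falls) can be sketched as below. The window size and thresholds are illustrative placeholders, not the calibrated values from the paper, and the traces are synthetic:

```python
import numpy as np

def detect_mos(gsr, temp, win=5, gsr_jump=0.05, temp_drop=0.01):
    """Flag a moment of stress (MOS) when galvanic skin response rises while
    skin temperature falls within a short window (thresholds illustrative)."""
    mos = np.zeros(len(gsr), dtype=bool)
    for i in range(win, len(gsr)):
        d_gsr = gsr[i] - gsr[i - win]
        d_temp = temp[i] - temp[i - win]
        mos[i] = (d_gsr > gsr_jump) and (d_temp < -temp_drop)
    return mos

# synthetic traces with a stress event around sample 50
n = 100
gsr = np.full(n, 1.0)                       # microsiemens, baseline
temp = np.full(n, 33.0)                     # degrees C, baseline
gsr[50:60] += np.linspace(0.0, 0.5, 10)     # GSR ramps up
temp[50:60] -= np.linspace(0.0, 0.3, 10)    # skin temperature dips
flags = detect_mos(gsr, temp)
```

Only the samples inside the simulated event are flagged; the baseline segments produce no detections.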

Open Access Editorial
Selected Papers from the 9th World Congress on Industrial Process Tomography
Sensors 2019, 19(17), 3804; https://doi.org/10.3390/s19173804 - 03 Sep 2019
Abstract
Industrial process tomography (IPT) is a set of multi-dimensional sensor technologies and methods that aim to provide unparalleled internal information on industrial processes used in many sectors [...]
Open Access Article
Fabrication and Hypersonic Wind Tunnel Validation of a MEMS Skin Friction Sensor Based on Visual Alignment Technology
Sensors 2019, 19(17), 3803; https://doi.org/10.3390/s19173803 - 03 Sep 2019
Abstract
MEMS-based skin friction sensors are used to measure and validate skin friction and its distribution; their small volume, high reliability, and low cost make them very important for vehicle design. To address the accuracy problems of skin friction measurements caused by errors in sensor fabrication and assembly, a novel fabrication technology based on visual alignment is presented. Sensor optimization, precise fabrication of key parts, micro-assembly based on visual alignment, prototype fabrication, static calibration, and validation in a hypersonic wind tunnel are implemented. The fabrication and assembly precision of the sensor prototypes achieves the desired effect. The results indicate that the prototypes exhibit fast response, good stability, and good zero-return; the measurement range is 0–100 Pa, the resolution is 0.1 Pa, the repeatability and linearity are better than 1%, and the repeatability is better than 2% in laminar flow conditions and almost 3% in turbulent flow conditions. The deviations between the measured skin friction coefficients and numerical solutions are almost 10% under turbulent flow conditions, whereas the deviations from the analytical values are large (even more than 100%) under laminar flow conditions. The sources of error in direct skin friction measurement and their influence are systematically analyzed.
(This article belongs to the Special Issue Advances in Flow and Wind Sensors)

Open Access Article
Fusion of Enhanced and Synthetic Vision System Images for Runway and Horizon Detection
Sensors 2019, 19(17), 3802; https://doi.org/10.3390/s19173802 - 03 Sep 2019
Abstract
Networked operation of unmanned air vehicles (UAVs) demands fusion of information from disparate sources for accurate flight control. In this investigation, a novel sensor fusion architecture for detecting the aircraft runway and horizon, as well as enhancing awareness of the surrounding terrain, is introduced based on the fusion of enhanced vision system (EVS) and synthetic vision system (SVS) images. EVS and SVS image fusion has yet to be implemented in real-world situations due to signal misalignment; we address this through a registration step that aligns the EVS and SVS images. Four fusion rules combining discrete wavelet transform (DWT) sub-bands are formulated, implemented, and evaluated. The resulting procedure is tested on real EVS-SVS image pairs and on pairs containing simulated turbulence. Evaluations reveal that runways and horizons can be detected accurately even in poor visibility. Furthermore, it is demonstrated that different aspects of the EVS and SVS images can be emphasized by using different DWT fusion rules. The procedure is autonomous throughout landing, irrespective of weather. The fusion architecture developed in this study holds promise for incorporation into manned head-up displays (HUDs) and UAV remote displays to assist pilots landing aircraft in poor lighting and varying weather. The algorithm also provides a basis for rule selection in other signal fusion applications.
(This article belongs to the Special Issue Unmanned Aerial Vehicle Networks, Systems and Applications)
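Sub-band fusion of the kind described can be illustrated with a hand-rolled one-level Haar transform. The specific rule below (average the approximation band, take the larger-magnitude detail coefficients from either image) is a common textbook choice and an assumption, not necessarily one of the paper's four rules:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) sub-bands."""
    a = (img[0::2] + img[1::2]) / 2.0          # row pairs: average
    d = (img[0::2] - img[1::2]) / 2.0          # row pairs: difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2], img[1::2] = a + d, a - d
    return img

def fuse(evs, svs):
    """Average the approximation sub-band, keep the larger-magnitude
    detail coefficients from either (registered) image."""
    e, s = haar2d(evs), haar2d(svs)
    LL = (e[0] + s[0]) / 2.0
    details = [np.where(np.abs(ei) >= np.abs(si), ei, si)
               for ei, si in zip(e[1:], s[1:])]
    return ihaar2d(LL, *details)
```

Fusing an image with itself returns the image unchanged, a useful sanity check; with distinct EVS/SVS inputs the rule preserves the stronger edges from either sensor.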

Open Access Article
Highly Fluorescent Green Carbon Dots as a Fluorescent Probe for Detecting Mineral Water pH
Sensors 2019, 19(17), 3801; https://doi.org/10.3390/s19173801 - 03 Sep 2019
Abstract
In this report, high-brightness green carbon dots (CDs) were successfully prepared in one step using 3,5-diaminobenzoic acid as the sole precursor via a solvothermal strategy. Under excitation by 365 nm ultraviolet light, the quantum yield of the carbon dots is as high as 53.8%. Experiments revealed that the carbon dots are highly carbonized and their surface is rich in amino and carboxyl groups. The synthesized carbon dots have good water solubility and are tolerant of ions and temperature changes. The fluorescence intensity of the CDs is sensitive to pH changes and is linearly correlated with pH in the near-neutral range (pH 6.0 to 9.0). Our experiments showed that the carbon dots are sensitive and accurate fluorescent probes for measuring the pH of drinking water, which could provide an effective method for measuring the pH of water in the future.
(This article belongs to the Section Chemical Sensors)
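A linear intensity-pH relationship like the one reported turns pH readout into a simple inverse calibration. The intensity values below are fabricated for illustration (not the paper's data); the workflow, fit a line on calibration standards and invert it for unknowns, is the standard one:

```python
import numpy as np

# illustrative calibration points (fluorescence intensity, arbitrary units,
# vs. buffer pH) spanning the reported linear range pH 6.0-9.0
ph_cal = np.array([6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0])
intensity_cal = np.array([100.0, 92.0, 84.0, 76.0, 68.0, 60.0, 52.0])

# least-squares line: intensity = a * pH + b
a, b = np.polyfit(ph_cal, intensity_cal, 1)

def ph_from_intensity(I):
    """Invert the calibration line to read pH from a measured intensity."""
    return (I - b) / a
```

For a water sample whose measured intensity is 84 (in these made-up units), the inverted line reads back pH 7.0.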

Open Access Article
Towards Evaluating Proactive and Reactive Approaches on Reorganizing Human Resources in IoT-Based Smart Hospitals
Sensors 2019, 19(17), 3800; https://doi.org/10.3390/s19173800 - 02 Sep 2019
Abstract
Hospitals play an important role in ensuring proper treatment of human health. One of the problems to be faced is increasingly overcrowded patient care queues, in which patients end up waiting longer without proper treatment for their health problems. The allocation of health professionals in hospital environments is not able to adapt to patient demand: at times, underused rooms have idle professionals while overused rooms have fewer professionals than necessary. Previous works have not solved this problem, since they focus on understanding the evolution of doctor supply and patient demand so as to better adjust one to the other, but have not proposed concrete techniques for better allocating the available human resources. Meanwhile, elasticity is one of the most important features of cloud computing, referring to the ability to add or remove resources according to the needs of the application or service. Based on this background, we introduce Elastic allocation of human resources in Healthcare environments (ElHealth), an IoT-focused model able to monitor patients' usage of hospital rooms and adapt these rooms to patient demand. Using reactive and proactive elasticity approaches, ElHealth identifies when a room will have a demand that exceeds its capacity of care and proposes actions to move human resources to adapt to patient demand. Our main contribution is the definition of Human Resources IoT-based Elasticity, an extension of the concept of resource elasticity in cloud computing to the management of human resources in a healthcare environment, where health professionals are allocated and deallocated according to patient demand. Another contribution is a cost-benefit analysis of the use of reactive and predictive strategies for human resource reorganization. ElHealth was simulated in a hospital environment using data from a Brazilian polyclinic and obtained promising results, decreasing waiting time by up to 96.4% and 96.73% with the reactive and proactive approaches, respectively.
(This article belongs to the Special Issue Internet of Things in Healthcare Applications)

Open Access Article
A Non-Invasive Method Based on Computer Vision for Grapevine Cluster Compactness Assessment Using a Mobile Sensing Platform under Field Conditions
Sensors 2019, 19(17), 3799; https://doi.org/10.3390/s19173799 - 02 Sep 2019
Abstract
Grapevine cluster compactness affects grape composition, fungal disease incidence, and wine quality. Thus far, cluster compactness assessment has been based on visual inspection performed by trained evaluators, with very scarce application in the wine industry. The goal of this work was to develop a new, non-invasive method combining computer vision and machine learning for cluster compactness assessment under field conditions from on-the-go red, green, blue (RGB) image acquisition. A mobile sensing platform was used to automatically capture RGB images of grapevine canopies and fruiting zones at night using artificial illumination. A set of 195 clusters of four red grapevine varieties from three commercial vineyards was photographed over several years one week prior to harvest. After image acquisition, cluster compactness was evaluated by a group of 15 experts in the laboratory following the International Organization of Vine and Wine (OIV) 204 standard as a reference method. The developed algorithm comprises several steps: an initial, semi-supervised image segmentation, followed by automated cluster detection and automated compactness estimation using a Gaussian process regression model. Calibration (95 clusters as the training set and 100 clusters as the test set) and leave-one-out cross-validation models (LOOCV; performed on the whole set of 195 clusters) were elaborated. On the test set, a determination coefficient (R²) of 0.68 and a root mean squared error (RMSE) of 0.96 were obtained between the image-based compactness estimates and the average of the evaluators' ratings (on a scale from 1 to 9); the leave-one-out cross-validation yielded an R² of 0.70 and an RMSE of 1.11. The results show that the newly developed computer vision-based method could be applied commercially by the wine industry for efficient cluster compactness estimation from RGB on-the-go image acquisition platforms in commercial vineyards.
(This article belongs to the Special Issue Emerging Sensor Technology in Agriculture)
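The Gaussian process regression step has a closed-form posterior mean that can be sketched directly. The RBF kernel, the hyperparameters, and the one-dimensional synthetic feature below are assumptions for illustration; the paper regresses expert compactness ratings on image-derived features:

```python
import numpy as np

def rbf_kernel(A, B, ell=1.0):
    """Squared-exponential kernel between row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def gp_fit_predict(X, y, X_star, noise=0.1, ell=1.0):
    """GP posterior mean: k(X*, X) @ (K + sigma^2 I)^-1 @ y."""
    K = rbf_kernel(X, X, ell) + noise ** 2 * np.eye(len(X))
    alpha = np.linalg.solve(K, y)
    return rbf_kernel(X_star, X, ell) @ alpha

rng = np.random.default_rng(3)
X = rng.uniform(0, 5, size=(80, 1))                 # stand-in image feature
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=80)    # stand-in rating target
X_star = np.array([[1.0], [2.5]])
mu = gp_fit_predict(X, y, X_star, noise=0.05, ell=0.7)
```

With densely sampled training data the posterior mean tracks the underlying function closely at the query points; in the paper the same machinery maps segmented-cluster features to the 1-9 compactness scale.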

Open Access Article
Automatic Indoor Reconstruction from Point Clouds in Multi-room Environments with Curved Walls
Sensors 2019, 19(17), 3798; https://doi.org/10.3390/s19173798 - 02 Sep 2019
Abstract
Recent developments in laser scanning systems have inspired substantial interest in indoor modeling, and semantically rich indoor models are required in many fields. Despite the rapid development of 3D indoor reconstruction methods for building interiors from point clouds, the indoor reconstruction of multi-room environments with curved walls remains unresolved. This study proposes a novel straight and curved line tracking method followed by a straight line test; robust parameters are used, and a novel straight line regularization is achieved using constrained least squares. The method constructs a cell complex with both straight and curved lines, and the indoor reconstruction is transformed into a labeling problem solved with a novel Markov Random Field formulation; the optimal labeling is found by minimizing an energy function with a minimum graph cut approach. Detailed experiments were conducted, and the results indicate that the proposed method is well suited for 3D indoor modeling of multi-room indoor environments with curved walls.
(This article belongs to the Special Issue LiDAR-Based Creation of Virtual Cities)
Show Figures

Figure 1

Open AccessArticle
A BCI Gaze Sensing Method Using Low Jitter Code Modulated VEP
Sensors 2019, 19(17), 3797; https://doi.org/10.3390/s19173797 - 02 Sep 2019
Viewed by 308
Abstract
Visual evoked potentials (VEPs) are used in clinical applications in ophthalmology, neurology, and extensively in brain–computer interface (BCI) research. Many BCI implementations utilize steady-state VEP (SSVEP) and/or code modulated VEP (c-VEP) as inputs, in tandem with sophisticated methods to improve information transfer rates [...] Read more.
Visual evoked potentials (VEPs) are used in clinical applications in ophthalmology, neurology, and extensively in brain–computer interface (BCI) research. Many BCI implementations utilize steady-state VEP (SSVEP) and/or code modulated VEP (c-VEP) as inputs, in tandem with sophisticated methods to improve information transfer rates (ITR). There is a gap in knowledge regarding the adaptation dynamics and physiological generation mechanisms of the VEP response, and the relation of these factors to BCI performance. A simple, dual-pattern display setup was used to evoke VEPs and to test signatures elicited by non-isochronic, non-singular, low-jitter stimuli at the rates of 10, 32, 50, and 70 reversals per second (rps). Non-isochronic, low-jitter stimulation elicits quasi-steady-state VEPs (QSS-VEPs) that are utilized for the simultaneous generation of transient VEPs and QSS-VEPs. The QSS-VEP is a special case of the c-VEP, and it is assumed to share generators similar to those of SSVEPs. Eight subjects were recorded, and the performance of the overall system was analyzed using receiver operating characteristic (ROC) curves, accuracy plots, and ITRs. In summary, QSS-VEPs performed better than transient VEPs (TR-VEPs). It was found that, in general, 32 rps stimulation had the highest ROC area, accuracy, and ITRs. Moreover, QSS-VEPs were found to lead to higher accuracy by template matching compared to SSVEPs at 32 rps. To investigate the reasons behind this, the adaptation dynamics of transient VEPs and QSS-VEPs at all four rates were analyzed and speculated upon. Full article
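Template matching for VEP decoding, as compared in the abstract, typically assigns a trial to the stored response template it correlates with best. A minimal sketch on synthetic data (the frequencies, trial length, and noise level are illustrative, not taken from the study):

```python
import numpy as np

def classify_by_template(trial, templates):
    """Return the index of the template with the highest Pearson
    correlation to the trial (a common VEP template-matching step)."""
    scores = [np.corrcoef(trial, tpl)[0, 1] for tpl in templates]
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
templates = [np.sin(2 * np.pi * f * t) for f in (8.0, 10.0, 12.0)]
trial = templates[1] + 0.3 * rng.standard_normal(t.size)  # noisy 10 Hz response
```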
(This article belongs to the Section Biomedical Sensors)
Show Figures

Figure 1

Open AccessReview
Precision Agriculture Techniques and Practices: From Considerations to Applications
Sensors 2019, 19(17), 3796; https://doi.org/10.3390/s19173796 - 02 Sep 2019
Viewed by 371
Abstract
Internet of Things (IoT)-based automation of agricultural events can change the agriculture sector from being static and manual to dynamic and smart, leading to enhanced production with reduced human efforts. Precision Agriculture (PA) along with Wireless Sensor Network (WSN) are the main drivers [...] Read more.
Internet of Things (IoT)-based automation of agricultural events can change the agriculture sector from being static and manual to dynamic and smart, leading to enhanced production with reduced human efforts. Precision Agriculture (PA) along with Wireless Sensor Network (WSN) are the main drivers of automation in the agriculture domain. PA uses specific sensors and software to ensure that crops receive exactly what they need to optimize productivity and sustainability. PA includes retrieving real data about the conditions of soil, crops and weather from the sensors deployed in the fields. High-resolution images of crops are obtained from satellite or airborne platforms (manned or unmanned), which are further processed to extract information used to support future decisions. In this paper, a review of near and remote sensor networks in the agriculture domain is presented, along with several considerations and challenges. This survey covers the wireless communication technologies, sensors, and wireless nodes used to assess environmental behaviour, the platforms used to obtain spectral images of crops, the common vegetation indices used to analyse spectral images, and applications of WSN in agriculture. As a proof of concept, we present a case study showing how a WSN-based PA system can be implemented. We propose an IoT-based smart solution for crop health monitoring, which comprises two modules. The first module is a wireless sensor network-based system to monitor real-time crop health status. The second module uses a low-altitude remote sensing platform to obtain multi-spectral imagery, which is further processed to classify healthy and unhealthy crops. We also highlight the results obtained in the case study and list the challenges and future directions based on our work. Full article
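Among the common vegetation indices surveyed for analysing multi-spectral imagery, the Normalized Difference Vegetation Index (NDVI) is the canonical example. A minimal sketch (the reflectance values below are illustrative):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Healthy vegetation reflects strongly in near-infrared, pushing the
    value towards 1; bare soil and stressed crops score lower."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

healthy = ndvi(0.50, 0.08)   # strong NIR reflectance
stressed = ndvi(0.30, 0.20)  # weaker NIR, stronger red absorptance loss
```

The same function applies pixel-wise to whole NIR and red image bands, since the arithmetic broadcasts over NumPy arrays.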
(This article belongs to the Special Issue UAV-Based Applications in the Internet of Things (IoT))
Show Figures

Figure 1

Open AccessArticle
Point-Plane SLAM Using Supposed Planes for Indoor Environments
Sensors 2019, 19(17), 3795; https://doi.org/10.3390/s19173795 - 02 Sep 2019
Viewed by 318
Abstract
Simultaneous localization and mapping (SLAM) is a fundamental problem for various applications. For indoor environments, planes are predominant features that are less affected by measurement noise. In this paper, we propose a novel point-plane SLAM system using RGB-D cameras. First, we extract feature [...] Read more.
Simultaneous localization and mapping (SLAM) is a fundamental problem for various applications. For indoor environments, planes are predominant features that are less affected by measurement noise. In this paper, we propose a novel point-plane SLAM system using RGB-D cameras. First, we extract feature points from RGB images and planes from depth images. Then, plane correspondences in the global map can be found using their contours. Considering the limited size of real planes, we exploit constraints on plane edges. In general, a plane edge is the intersecting line of two perpendicular planes. Therefore, instead of line-based constraints, we calculate and generate supposed perpendicular planes from edge lines, resulting in more plane observations and constraints to reduce estimation errors. To exploit the orthogonal structure of indoor environments, we also add structural (parallel or perpendicular) constraints between planes. Finally, we construct a factor graph using all of these features. The cost functions are minimized to estimate the camera poses and the global map. We test the proposed system on public RGB-D benchmarks, demonstrating robust and accurate pose estimation compared with other state-of-the-art SLAM systems. Full article
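The "supposed perpendicular plane" construction can be sketched with basic vector algebra: a plane that contains an edge line and is perpendicular to the observed plane has a normal orthogonal to both the line direction and the observed plane's normal. The geometry and variable names below are illustrative, not the authors' implementation:

```python
import numpy as np

def supposed_plane(n, p, u):
    """Given a plane with unit normal n, and an edge line through point p
    with direction u lying in that plane, construct the supposed plane:
    it contains the edge line and is perpendicular to the original plane
    (perpendicular planes have perpendicular normals).
    Returns (m, d) with the plane equation m . x + d = 0."""
    m = np.cross(n, u)
    m = m / np.linalg.norm(m)
    d = -float(m @ p)
    return m, d

# Horizontal floor z = 0 with an edge along the x axis.
n = np.array([0.0, 0.0, 1.0])
p = np.array([0.0, 0.0, 0.0])
u = np.array([1.0, 0.0, 0.0])
m, d = supposed_plane(n, p, u)  # expect the vertical wall plane y = 0
```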
(This article belongs to the Special Issue Mobile Robot Navigation)
Show Figures

Figure 1

Open AccessArticle
A Bimetallic-Coated, Low Propagation Loss, Photonic Crystal Fiber Based Plasmonic Refractive Index Sensor
Sensors 2019, 19(17), 3794; https://doi.org/10.3390/s19173794 - 01 Sep 2019
Viewed by 539
Abstract
In this paper, a low-loss, spiral lattice photonic crystal fiber (PCF)-based plasmonic biosensor is proposed for its application in detecting various biomolecules (i.e., sugar, protein, DNA, and mRNA) and biochemicals (i.e., serum and urine). Plasmonic material gold (Au) is employed externally to efficiently [...] Read more.
In this paper, a low-loss, spiral-lattice photonic crystal fiber (PCF)-based plasmonic biosensor is proposed for detecting various biomolecules (e.g., sugar, protein, DNA, and mRNA) and biochemicals (e.g., serum and urine). The plasmonic material gold (Au) is employed externally to efficiently generate surface plasmon resonance (SPR) on the outer surface of the PCF. A thin layer of titanium oxide (TiO2) is also introduced, which assists in adhering the Au layer to the silica fiber. The sensing performance is investigated using a mode solver based on the finite element method (FEM). Simulation results show a maximum wavelength sensitivity of 23,000 nm/RIU for a bio-sample refractive index (RI) detection range of 1.32–1.40. The sensor also exhibits very low confinement losses of 0.22 and 2.87 dB/cm for analytes at 1.32 and 1.40 RI, respectively. Because of the ultra-low propagation loss, the proposed sensor can be fabricated at lengths of several centimeters, which reduces the complexity related to splicing and similar fabrication steps. Full article
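Wavelength sensitivity in nm/RIU, the figure of merit reported above, is the shift of the resonance peak wavelength per unit change in analyte refractive index. A minimal sketch (the peak wavelengths below are illustrative placeholders chosen to reproduce the reported 23,000 nm/RIU, not values from the paper):

```python
def wavelength_sensitivity(peak_wl_nm, ri):
    """Spectral sensitivity S = delta_lambda_peak / delta_n in nm/RIU,
    estimated from resonance peaks at two analyte refractive indices."""
    dlam = peak_wl_nm[1] - peak_wl_nm[0]
    dn = ri[1] - ri[0]
    return dlam / dn

# Illustrative: a 230 nm peak shift over a 0.01 RI step.
S = wavelength_sensitivity(peak_wl_nm=(640.0, 870.0), ri=(1.32, 1.33))
```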
(This article belongs to the Special Issue Optical Fiber Biosensors)
Show Figures

Figure 1

Open AccessArticle
A Practical Guide to Source and Receiver Locations for Surface Wave Transmission Measurements across a Surface-Breaking Crack in Plate Structures
Sensors 2019, 19(17), 3793; https://doi.org/10.3390/s19173793 - 01 Sep 2019
Viewed by 362
Abstract
The main objectives of this study are to investigate the interference of multiple bottom reflected waves in the surface wave transmission (SWT) measurements in a plate and to propose a practical guide to source-and-receiver locations to obtain reliable and consistent SWT measurements in [...] Read more.
The main objectives of this study are to investigate the interference of multiple bottom-reflected waves in surface wave transmission (SWT) measurements in a plate and to propose a practical guide to source-and-receiver locations for obtaining reliable and consistent SWT measurements in a plate. For these purposes, a series of numerical simulations based on finite element modelling (FEM) is performed to investigate the variation of the transmission coefficient of surface waves across a surface-breaking crack for various source-to-receiver configurations in plates. The main variables in this study include the crack depths (0, 10, 20, 30, 40 and 50 mm), plate thicknesses (150, 200, 300, 400 and 800 mm), source-to-crack distances (100, 150, 200, 250 and 300 mm) and receiver-to-crack distances. The validity of the numerical simulation results was verified by comparison with results from laboratory experiments on Plexiglas specimens using two types of noncontact sensors (a laser vibrometer and an air-coupled sensor). Based on the simulation and experimental results, practical guidelines for source-and-receiver locations are proposed to reduce the effects of the interference of bottom-reflected waves on SWT measurements across a surface-breaking crack in a plate. These findings will help obtain reliable and consistent SWT measurements across surface-breaking cracks in plate-like structures. Full article
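The transmission coefficient of surface waves is commonly estimated as the ratio of transmitted to incident amplitude spectra over the excitation band. A minimal sketch with a synthetic Gaussian-windowed tone burst (the sampling rate, center frequency, and band are illustrative assumptions, not the study's measurement setup):

```python
import numpy as np

def transmission_coefficient(incident, transmitted, fs, f_lo, f_hi):
    """Spectral transmission coefficient: ratio of transmitted to
    incident surface-wave amplitude spectra, averaged over a band."""
    f = np.fft.rfftfreq(len(incident), d=1.0 / fs)
    A_in = np.abs(np.fft.rfft(incident))
    A_tr = np.abs(np.fft.rfft(transmitted))
    band = (f >= f_lo) & (f <= f_hi)
    return float(np.mean(A_tr[band] / A_in[band]))

fs = 1.0e6                                    # 1 MHz sampling rate
t = np.arange(1024) / fs
# 50 kHz tone burst with a Gaussian envelope centered at 100 microseconds.
pulse = np.sin(2 * np.pi * 50e3 * t) * np.exp(-((t - 100e-6) ** 2) / (2 * (20e-6) ** 2))
tr = transmission_coefficient(pulse, 0.6 * pulse, fs, 30e3, 70e3)
```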
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1

Open AccessArticle
Focusing Bistatic Forward-Looking Synthetic Aperture Radar Based on an Improved Hyperbolic Range Model and a Modified Omega-K Algorithm
Sensors 2019, 19(17), 3792; https://doi.org/10.3390/s19173792 - 01 Sep 2019
Viewed by 446
Abstract
For parallel bistatic forward-looking synthetic aperture radar (SAR) imaging, the instantaneous slant range is a double-square-root expression due to the separate transmitter-receiver system form. The hyperbolic approximation provides a feasible solution to convert the dual square-root expression into a single-square-root expression. However, some [...] Read more.
For parallel bistatic forward-looking synthetic aperture radar (SAR) imaging, the instantaneous slant range is a double-square-root expression due to the separated transmitter and receiver configuration. The hyperbolic approximation provides a feasible way to convert the double-square-root expression into a single-square-root expression. However, some high-order terms of the range Taylor expansion have not been considered during the slant range approximation procedure in existing methods, and therefore inaccurate phase compensation occurs. To obtain a more accurate compensation result, an improved hyperbolic approximation of the range with high-order terms is proposed. Then, a modified omega-K algorithm based on the new slant range form is adopted for parallel bistatic forward-looking SAR imaging. Several simulation results validate the effectiveness of the proposed imaging algorithm. Full article
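The role of the high-order Taylor terms can be checked numerically: truncating the double-square-root slant range at second order leaves a residual range (and hence phase) error that the fourth-order term largely removes. The geometry parameters below are illustrative, and the expansion is the standard one for sqrt(R^2 + x), not the paper's exact model:

```python
import numpy as np

# Simplified parallel bistatic geometry (an assumption for illustration):
# transmitter/receiver closest ranges R_T and R_R, platform speed v.
R_T, R_R, v = 10e3, 12e3, 200.0
t = np.linspace(-1.0, 1.0, 201)  # slow time in seconds

# Exact double-square-root instantaneous slant range.
R_exact = np.sqrt(R_T**2 + (v * t)**2) + np.sqrt(R_R**2 + (v * t)**2)

# Taylor expansion of each square root around t = 0:
# sqrt(R^2 + x) ~ R + x/(2R) - x^2/(8R^3), with x = (v t)^2.
x = (v * t) ** 2
R_2nd = (R_T + R_R) + x / (2 * R_T) + x / (2 * R_R)
R_4th = R_2nd - x**2 / (8 * R_T**3) - x**2 / (8 * R_R**3)

err_2nd = np.max(np.abs(R_exact - R_2nd))  # residual of 2nd-order model
err_4th = np.max(np.abs(R_exact - R_4th))  # residual with high-order term
```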
(This article belongs to the Special Issue Recent Advancements in Radar Imaging and Sensing Technology)
Show Figures

Figure 1

Open AccessArticle
Neural Activities Classification of Human Inhibitory Control Using Hierarchical Model
Sensors 2019, 19(17), 3791; https://doi.org/10.3390/s19173791 - 01 Sep 2019
Viewed by 365
Abstract
Human inhibitory control refers to the suppression of behavioral response in real environments, such as when driving a car or riding a motorcycle, playing a game and operating a machine. The P300 wave is a neural marker of human inhibitory control, and it [...] Read more.
Human inhibitory control refers to the suppression of behavioral responses in real environments, such as when driving a car or riding a motorcycle, playing a game or operating a machine. The P300 wave is a neural marker of human inhibitory control, and it can be used to recognize the symptoms of attention deficit hyperactivity disorder (ADHD) in humans. In addition, the P300 neural marker can be used as a stop command in brain-computer interface (BCI) technologies. Therefore, the present electroencephalography (EEG) study recognizes the mindset of human inhibition by observing brain dynamics, such as the P300 wave, in the frontal lobe, the supplementary motor area, and the right temporoparietal junction of the brain, all of which have been associated with response inhibition. Our work developed a hierarchical classification model to identify the neural activities of human inhibition. To accomplish this goal, the phase-locking value (PLV) method was used to select coupled brain regions related to inhibition, because this method has demonstrated the best performance for the classification system. The PLVs were used with pattern recognition algorithms to classify a successful stop versus a failed stop in left- and right-hand inhibitions. The results demonstrate that quadratic discriminant analysis (QDA) yielded an average classification accuracy of 94.44%. These findings imply that the neural activities of human inhibition can be utilized as a stop command in BCI technologies, as well as to identify the symptoms of ADHD patients in clinical research. Full article
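The phase-locking value used to select coupled brain regions has a compact definition: the magnitude of the time-averaged unit phasor of the phase difference between two signals. A minimal sketch on synthetic phase series (in practice the instantaneous phases would come from a Hilbert or wavelet transform of band-passed EEG):

```python
import numpy as np

def plv(phase_a, phase_b):
    """Phase-locking value between two phase time series:
    PLV = | mean_t exp(i * (phase_a(t) - phase_b(t))) |.
    1 means perfectly locked phases; values near 0 mean no locking."""
    return float(np.abs(np.mean(np.exp(1j * (phase_a - phase_b)))))

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1000)
locked_a = 2 * np.pi * 10 * t                  # 10 Hz phase ramp
locked_b = locked_a + 0.8                      # constant phase lag -> locked
random_b = rng.uniform(0, 2 * np.pi, t.size)   # unrelated phases
```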
(This article belongs to the Special Issue Novel Approaches to EEG Signal Processing)
Show Figures

Figure 1

Open AccessFeature PaperReview
Recent Advances in Stochastic Sensor Control for Multi-Object Tracking
Sensors 2019, 19(17), 3790; https://doi.org/10.3390/s19173790 - 01 Sep 2019
Viewed by 353
Abstract
In many multi-object tracking applications, the sensor(s) may have controllable states. Examples include movable sensors in multi-target tracking applications in defence, and unmanned air vehicles (UAVs) as sensors in multi-object systems used in civil applications such as inspection and fault detection. Uncertainties in [...] Read more.
In many multi-object tracking applications, the sensor(s) may have controllable states. Examples include movable sensors in multi-target tracking applications in defence, and unmanned air vehicles (UAVs) as sensors in multi-object systems used in civil applications such as inspection and fault detection. Uncertainties in the number of objects (due to random appearances and disappearances), as well as false alarms and detection uncertainties, collectively make the above problem a highly challenging stochastic sensor control problem. Numerous solutions have been proposed to tackle the problem of precise control of sensor(s) for multi-object detection and tracking, and in this work recent contributions to advances in this domain are comprehensively reviewed. After an introduction, we provide an overview of the sensor control problem and present the key components of sensor control solutions in general. Then, we present a categorization of the existing methods and review the methods under each category. The categorization includes a new generation of solutions, called selective sensor control, that have been recently developed for applications where particular objects of interest need to be accurately detected and tracked by controllable sensors. Full article
(This article belongs to the Special Issue Mobile Sensing: Platforms, Technologies and Challenges)
Show Figures

Figure 1

Open AccessArticle
Energy-Efficient Multi-Disjoint Path Opportunistic Node Connection Routing Protocol in Wireless Sensor Networks for Smart Grids
Sensors 2019, 19(17), 3789; https://doi.org/10.3390/s19173789 - 01 Sep 2019
Viewed by 405
Abstract
The gradual increase in the maturity of sensor electronics has resulted in the increasing demand for wireless sensor networks for many industrial applications. One of the industrial platforms for efficient usage and deployment of sensor networks is smart grids. The critical network traffic [...] Read more.
The gradual increase in the maturity of sensor electronics has resulted in increasing demand for wireless sensor networks in many industrial applications. One of the industrial platforms for efficient usage and deployment of sensor networks is the smart grid. The critical network traffic in smart grids includes both delay-sensitive and delay-tolerant data for real-time and non-real-time usage. To meet these traffic requirements, the asynchronous working–sleeping cycle of sensor nodes can be used as an opportunity to create a node connection. Efficient use of wireless sensor networks in smart grids depends on various parameters such as the working–sleeping cycle, energy consumption, network lifetime, routing protocol, and delay constraints. In this paper, we propose an energy-efficient multi-disjoint path opportunistic node connection routing protocol (abbreviated as EMOR) for sensor nodes deployed in a neighborhood area network. EMOR utilizes residual energy, the available buffer size of the sensor node, the working–sleeping cycle of the sensor node and a link quality factor to calculate optimum path connectivity after opportunistic connection random graph and spanning tree formation. Multi-disjoint path selection in EMOR, based on service differentiation of real-time and non-real-time traffic, leads to improvements in packet delivery rate, network lifetime, end-to-end delay and total energy consumption. Full article
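The optimum-path calculation combines several per-link metrics. As a hedged sketch, the metrics can be normalized to [0, 1] and blended with weights; the weights and the linear form below are illustrative assumptions, not EMOR's actual cost function:

```python
def link_score(residual_energy, buffer_free, duty_cycle, link_quality,
               w=(0.4, 0.2, 0.2, 0.2)):
    """Composite next-hop score from normalized [0, 1] metrics:
    residual energy, free buffer fraction, working-cycle availability,
    and link quality. Higher is better; weights are illustrative."""
    metrics = (residual_energy, buffer_free, duty_cycle, link_quality)
    return sum(wi * mi for wi, mi in zip(w, metrics))

good = link_score(0.9, 0.8, 0.7, 0.9)  # healthy, mostly-awake neighbor
weak = link_score(0.2, 0.3, 0.4, 0.3)  # depleted, congested neighbor
```

In a routing protocol, such a score would feed the edge weights of the connection graph before disjoint paths are extracted.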
Show Figures

Graphical abstract

Open AccessReview
Software-Defined Network-Based Vehicular Networks: A Position Paper on Their Modeling and Implementation
Sensors 2019, 19(17), 3788; https://doi.org/10.3390/s19173788 - 31 Aug 2019
Viewed by 747
Abstract
There is a strong devotion in the automotive industry to be part of a wider progression towards the Fifth Generation (5G) era. In-vehicle integration costs between cellular and vehicle-to-vehicle networks using Dedicated Short Range Communication could be avoided by adopting Cellular Vehicle-to-Everything (C-V2X) [...] Read more.
There is a strong devotion in the automotive industry to be part of a wider progression towards the Fifth Generation (5G) era. In-vehicle integration costs between cellular and vehicle-to-vehicle networks using Dedicated Short Range Communication could be avoided by adopting Cellular Vehicle-to-Everything (C-V2X) technology, with the possibility of re-using the existing mobile network infrastructure. Increasingly, with the emergence of Software-Defined Networks, the flexibility and programmability of the network have impacted not only the design of new vehicular network architectures but also the implementation of V2X services in future intelligent transportation systems. In this paper, we define the concepts that help evaluate software-defined-based vehicular network systems in the literature based on their modeling and implementation schemes. We first overview the current studies available in the literature on C-V2X technology in support of V2X applications. We then present the different architectures and their underlying system models for LTE-V2X communications. We later describe the key ideas of software-defined networks and their concepts for V2X services. Lastly, we provide a comparative analysis of existing SDN-based vehicular network systems, grouped according to their modeling and simulation concepts. We provide a discussion and highlight the challenges of vehicular ad-hoc networks that are handled by SDN-based vehicular networks. Full article
(This article belongs to the Special Issue Vehicular Sensor Networks: Applications, Advances and Challenges)
Show Figures

Figure 1

Open AccessArticle
A System for Weeds and Crops Identification—Reaching over 10 FPS on Raspberry Pi with the Usage of MobileNets, DenseNet and Custom Modifications
Sensors 2019, 19(17), 3787; https://doi.org/10.3390/s19173787 - 31 Aug 2019
Viewed by 466
Abstract
Automated weeding is an important research area in agrorobotics. Weeds can be removed mechanically or with the precise usage of herbicides. Deep Learning techniques achieved state of the art results in many computer vision tasks, however their deployment on low-cost mobile computers is [...] Read more.
Automated weeding is an important research area in agrorobotics. Weeds can be removed mechanically or with the precise application of herbicides. Deep Learning techniques have achieved state-of-the-art results in many computer vision tasks; however, their deployment on low-cost mobile computers is still challenging. The described system contains several novelties compared both with its previous version and with related work. It is part of a project on an automatic weeding machine, developed by the Warsaw University of Technology and MCMS Warka Ltd. The obtained models reach satisfying accuracy (detecting 47–67% of weed area, misclassifying as weed 0.1–0.9% of crop area) at over 10 FPS on the Raspberry Pi 3B+ computer. The system was tested on four different plant species at different growth stages and lighting conditions. The system performing semantic segmentation is based on Convolutional Neural Networks. Its custom architecture combines U-Net, MobileNets, DenseNet and ResNet concepts. The amount of manual ground-truth labelling needed was significantly decreased by using knowledge distillation, learning a final model that mimics an ensemble of complex models on a large database of unlabeled data. A further decrease in inference time was obtained through two custom modifications: the use of separable convolutions in the DenseNet block and an adjusted number of channels in each layer. In the authors' opinion, the described novelties can be easily transferred to other agrorobotics tasks. Full article
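The saving from the separable-convolution modification is easy to quantify: a depthwise k×k convolution followed by a 1×1 pointwise convolution replaces a full k×k convolution. A sketch of the weight counts (the 64-channel layer size is an illustrative choice, not the paper's architecture):

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (biases ignored)."""
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    """Depthwise k x k convolution (c_in * k * k weights) followed by a
    1 x 1 pointwise convolution (c_in * c_out weights)."""
    return c_in * k * k + c_in * c_out

std = conv_params(64, 64, 3)            # full 3x3 convolution
sep = separable_conv_params(64, 64, 3)  # depthwise + pointwise
speedup = std / sep                     # roughly 8x fewer weights here
```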
(This article belongs to the Section Intelligent Sensors)
Show Figures

Figure 1

Open AccessArticle
Multi-Sensor Fusion Approach for Improving Map-Based Indoor Pedestrian Localization
Sensors 2019, 19(17), 3786; https://doi.org/10.3390/s19173786 - 31 Aug 2019
Viewed by 470
Abstract
The interior space of large-scale buildings, such as hospitals, with a variety of departments, is so complicated that people may easily lose their way while visiting. Difficulties in wayfinding can cause stress, anxiety, frustration and safety issues to patients and families. An indoor [...] Read more.
The interior space of large-scale buildings, such as hospitals with a variety of departments, is so complicated that people may easily lose their way while visiting. Difficulties in wayfinding can cause stress, anxiety, frustration and safety issues for patients and families. An indoor navigation system, including route planning and localization, can guide people from one place to another. The localization of moving subjects is a critical functional component of an indoor navigation system. Pedestrian dead reckoning (PDR) is a technology widely employed for localization due to the advantage of being independent of infrastructure. Combining different technologies is one way to improve the accuracy of the localization system. In this study, a multi-sensor fusion approach is proposed to improve the accuracy of the PDR system by utilizing a light sensor, Bluetooth and map information. These simple mechanisms are applied to deal with the issue of accumulative error by identifying edge and sub-edge information from both Bluetooth and the light sensor. Overall, the accumulative error of the proposed multi-sensor fusion approach is below 65 cm in different cases of light arrangement. Compared to an inertial-sensor-based PDR system, the proposed multi-sensor fusion approach can improve localization accuracy by 90% in an environment with an appropriate density of ceiling-mounted lamps. The results demonstrate that the proposed approach improves localization accuracy by utilizing multi-sensor data and fulfills the feasibility requirements of localization in an indoor navigation system. Full article
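The PDR core that the fused sensors correct can be sketched in a few lines: each detected step advances the position along the estimated heading, and heading or step-length errors accumulate until an absolute fix (here, from light or Bluetooth landmarks) resets them. The step length and headings below are illustrative:

```python
import math

def pdr_step(x, y, heading_rad, step_length_m):
    """One pedestrian-dead-reckoning update: advance the position by one
    detected step along the current heading (east = 0, north = pi/2)."""
    return (x + step_length_m * math.cos(heading_rad),
            y + step_length_m * math.sin(heading_rad))

x, y = 0.0, 0.0
for _ in range(10):                      # 10 steps heading east
    x, y = pdr_step(x, y, 0.0, 0.7)
x, y = pdr_step(x, y, math.pi / 2, 0.7)  # one step north after a turn
```

Because each update depends on the previous estimate, even a small per-step heading bias grows without bound, which is why the paper anchors the track to ceiling-lamp and Bluetooth edges.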
(This article belongs to the Special Issue Multi-Sensor Systems for Positioning and Navigation)
Show Figures

Figure 1

Open AccessArticle
User Identification from Gait Analysis Using Multi-Modal Sensors in Smart Insole
Sensors 2019, 19(17), 3785; https://doi.org/10.3390/s19173785 - 31 Aug 2019
Viewed by 418
Abstract
Recent studies indicate that individuals can be identified by their gait pattern. A number of sensors including vision, acceleration, and pressure have been used to capture humans’ gait patterns, and a number of methods have been developed to recognize individuals from their gait [...] Read more.
Recent studies indicate that individuals can be identified by their gait patterns. A number of sensors, including vision, acceleration, and pressure sensors, have been used to capture human gait patterns, and a number of methods have been developed to recognize individuals from their gait pattern data. This study proposes a novel method of identifying individuals using null-space linear discriminant analysis on human gait pattern data. The gait pattern data consist of time-series pressure and acceleration data measured by multi-modal sensors in a smart insole worn while walking. We compare the identification accuracies from three sensing modalities: acceleration, pressure, and both in combination. Experimental results show that the proposed multi-modal features identify 14 participants with a high accuracy of over 95% from their walking gait pattern data. Full article
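As a stand-in for the null-space LDA classifier (which projects features into the null space of the within-class scatter before discriminating), a nearest-class-mean rule on concatenated pressure-and-acceleration feature vectors illustrates the multi-modal identification step; all data below are synthetic:

```python
import numpy as np

def nearest_mean_classify(sample, class_means):
    """Assign a gait feature vector to the subject with the nearest
    class mean (a simplified stand-in for null-space LDA)."""
    d = [np.linalg.norm(sample - m) for m in class_means]
    return int(np.argmin(d))

rng = np.random.default_rng(2)
# One mean vector per subject: concatenated pressure + acceleration
# features per gait cycle (8 dimensions, purely illustrative).
means = [rng.normal(size=8) for _ in range(3)]
sample = means[2] + 0.05 * rng.normal(size=8)  # noisy step from subject 2
```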
(This article belongs to the Special Issue Sensors for Gait Biometrics)
Show Figures

Figure 1
