Sensors, Volume 21, Issue 15 (August-1 2021) – 177 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
Article
Miniaturization of Respiratory Measurement System in Artificial Ventilator for Small Animal Experiments to Reduce Dead Space and Its Application to Lung Elasticity Evaluation
Sensors 2021, 21(15), 5123; https://doi.org/10.3390/s21155123 (registering DOI) - 28 Jul 2021
Abstract
A respiratory measurement system composed of pressure and airflow sensors was introduced to precisely control the respiratory condition during animal experiments. The flow sensor was a hot-wire thermal airflow meter with a directional detection and airflow temperature change compensation function based on MEMS technology, and the pressure sensor was a commercially available device also produced by MEMS. The artificial dead space in the system was minimized to 0.11 mL by integrating the two sensors on the same plate (26.0 mm × 15.0 mm). A balloon made of a silicone resin with a hardness of A30 was utilized as the simulated lung and applied to the elasticity evaluation of the respiratory system in a living rat. The inside of the respiratory system was pressurized normally without damage, and we confirmed that the developed system was able to evaluate the elasticity of the lung tissue in the rat by using the pressure values obtained under quasi-static conditions during ventilation in the animal experiments. Full article
(This article belongs to the Section Biomedical Sensors)
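Lung elasticity obtained from quasi-static pressure readings is commonly summarized as compliance, the slope of the volume-pressure curve (its reciprocal is elastance). The snippet below is a minimal Python sketch of that calculation; the sample volume and pressure values are hypothetical and the linear-fit approach is a generic illustration, not the analysis pipeline of the paper.

```python
import numpy as np

def quasi_static_compliance(volumes_ml, pressures_cmH2O):
    """Estimate respiratory-system compliance (mL/cmH2O) as the slope of a
    quasi-static volume-pressure curve via a least-squares linear fit."""
    slope, _intercept = np.polyfit(pressures_cmH2O, volumes_ml, 1)
    return slope  # compliance C = dV/dP; elastance is 1/C

# Hypothetical quasi-static plateau readings from a small-animal ventilation step test
volumes = np.array([0.5, 1.0, 1.5, 2.0, 2.5])      # delivered tidal volume, mL
pressures = np.array([2.1, 4.0, 6.2, 8.1, 10.3])    # plateau airway pressure, cmH2O
print(f"compliance ≈ {quasi_static_compliance(volumes, pressures):.3f} mL/cmH2O")
```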
Review
Machine Learning for Authentication and Authorization in IoT: Taxonomy, Challenges and Future Research Direction
Sensors 2021, 21(15), 5122; https://doi.org/10.3390/s21155122 (registering DOI) - 28 Jul 2021
Abstract
With the ongoing efforts for widespread Internet of Things (IoT) adoption, one of the key factors hindering the wide acceptance of IoT is security. Securing IoT networks such as the electric power grid or water supply systems has emerged as a major national and global priority. To address the security issue of IoT, several studies are being carried out that involve the use of, but are not limited to, blockchain, artificial intelligence, and edge/fog computing. Authentication and authorization are crucial aspects of the CIA triad to protect the network from malicious parties. However, existing authorization and authentication schemes are not sufficient for handling security, due to the scale of the IoT networks and the resource-constrained nature of devices. In order to overcome challenges due to various constraints of IoT networks, there is a significant interest in using machine learning techniques to assist in the authentication and authorization process for IoT. In this paper, recent advances in authentication and authorization techniques for IoT networks are reviewed. Based on the review, we present a taxonomy of authentication and authorization schemes in IoT focusing on machine learning-based schemes. Using the presented taxonomy, a thorough analysis is provided of the authentication and authorization (AA) security threats and challenges for IoT. Furthermore, various criteria to achieve a high degree of AA resiliency in IoT implementations to enhance IoT security are evaluated. Lastly, a detailed discussion on open issues, challenges, and future research directions is presented for enabling secure communication among IoT nodes. Full article
(This article belongs to the Special Issue Security and Privacy in the Internet of Things (IoT))
Article
Influence of the Antenna Orientation on WiFi-Based Fall Detection Systems
Sensors 2021, 21(15), 5121; https://doi.org/10.3390/s21155121 (registering DOI) - 28 Jul 2021
Abstract
The growing elderly population living independently demands remote systems for health monitoring. Falls are considered recurring, potentially fatal events and have therefore become a global health problem. Fall detection systems based on WiFi radio frequency signals still have limitations due to the difficulty of differentiating the features of a fall from those of other similar activities. Additionally, the antenna orientation has not been taken into account as a factor influencing classification performance. Therefore, we present in this paper an analysis of the classification performance in relation to the antenna orientation and the effects related to polarization and radiation pattern. Furthermore, the implementation of a device-free fall detection platform to collect empirical data on falls is shown. The platform measures the Doppler spectrum of a probe signal to extract the Doppler signatures generated by human movement, whose features can be used to identify falling events. The system explores two antenna polarizations: horizontal and vertical. The accuracy reached by horizontal polarization is 92% with a false negative rate of 8%. Vertical polarization achieved an accuracy and a false negative rate of 50%. Full article
(This article belongs to the Special Issue Perception and Intelligence Driven Sensing to Monitor Personal Health)
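The platform described above derives fall signatures from the Doppler spectrum of a probe signal. The sketch below illustrates the general idea with a short-time Fourier transform over a synthetic complex channel response; the sampling rate, motion profile, and spectrogram parameters are assumptions for illustration, not values from the paper.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 1000                              # assumed channel-sampling rate, Hz
t = np.arange(0, 4, 1 / fs)

# Hypothetical complex channel response: a static path plus a reflection whose
# Doppler shift ramps up and back down, roughly mimicking a fall-like motion.
doppler = 40 * np.exp(-((t - 2.0) ** 2) / 0.1)                      # Hz over time
csi = 1.0 + 0.3 * np.exp(1j * 2 * np.pi * np.cumsum(doppler) / fs)  # integrate to phase
csi += 0.05 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

# Remove the static component, then compute the two-sided Doppler spectrogram.
f, tt, Sxx = spectrogram(csi - csi.mean(), fs=fs, nperseg=256, noverlap=192,
                         return_onesided=False)
peak_doppler = f[np.argmax(Sxx, axis=0)]   # dominant Doppler shift per time slice
print(peak_doppler[:10])
```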
Article
Novel Road Traffic Management Strategy for Rapid Clarification of the Emergency Vehicle Route Based on V2V Communications
Sensors 2021, 21(15), 5120; https://doi.org/10.3390/s21155120 (registering DOI) - 28 Jul 2021
Abstract
Vehicle-to-vehicle communication is a promising paradigm that enables all vehicles on the road to communicate with each other to enhance traffic performance and increase road safety. Through vehicle-to-vehicle (V2V) communication, vehicles can understand the traffic conditions based on the information sent among vehicles on the road. Due to the potential delay caused by traffic jams, emergency vehicles may not be able to reach their destination in the required time, leading to severe losses. The situation is especially severe in developing countries where no emergency-vehicle-dedicated lanes are allocated. In this study, a new emergency vehicle route-clarifying strategy is proposed. The new clarifying strategy is based on vehicular traffic management in different interference medium scenarios. The proposed model aims, through V2V communication, to find the nearest vehicle with which to communicate. This vehicle plays an important role in reducing the travel time: as soon as the emergency message is received, it immediately communicates with all the neighboring vehicles on the road. Based on V2V communications, all the vehicles on the road then clear the lane so that the emergency vehicle can safely reach its destination with the minimum possible travel time. The maximum distance between the emergency vehicle and the nearest vehicle was determined under different channel conditions. The proposed strategy applied an optimization technique to find the varied road traffic parameters. The proposed traffic management strategy was evaluated and examined through different assumptions and several simulation scenarios. The obtained results validated the effectiveness and the accuracy of the proposed model, and also indicated a significant improvement in the network’s performance in terms of packet delivery ratio (PDR) and average end-to-end delay (E2E). Full article
(This article belongs to the Special Issue Vehicle-to-Everything (V2X) Communications)
Systematic Review
Influence of Human Factors on Cyber Security within Healthcare Organisations: A Systematic Review
Sensors 2021, 21(15), 5119; https://doi.org/10.3390/s21155119 (registering DOI) - 28 Jul 2021
Abstract
Background: Cybersecurity is increasingly becoming a prominent concern among healthcare providers in adopting digital technologies for improving the quality of care delivered to patients. Recent reports on cyber attacks, such as ransomware and WannaCry, have brought to light the destructive nature of such attacks upon healthcare. Alongside cyberattacks targeted against the vulnerabilities of information technology (IT) infrastructures, a new form of cyber attack aims to exploit human vulnerabilities; such attacks are categorised as social engineering attacks. Following an increase in the frequency and ingenuity of attacks launched against hospitals and clinical environments with the intention of causing service disruption, there is a strong need to study the awareness programmes and training activities offered to staff by healthcare organisations. Objective: The objective of this systematic review is to identify commonly encountered factors that influence the cybersecurity posture of a healthcare organisation and that result from ignorance of the cyber threat to healthcare. The systematic review aims to consolidate the current literature reporting on human behaviour that results in security gaps undermining the cyber defence strategy adopted by healthcare organisations. Additionally, the paper also reviews the organisational risk assessment methodologies implemented and the policies adopted to strengthen cybersecurity. Methods: The topic of cybersecurity within healthcare and the clinical environment has attracted the interest of several researchers, resulting in a broad range of literature. The inclusion criteria for the articles in the review stem from the scope of the five research questions identified. To this end, we conducted seven search queries across three repositories, namely (i) PubMed®/MEDLINE; (ii) the Cumulative Index to Nursing and Allied Health Literature (CINAHL); and (iii) the Web of Science (WoS), using keywords related to cybersecurity awareness, training, organisational risk assessment methodologies, and the policies and recommendations adopted as countermeasures within healthcare. The searches were restricted to approximately the last 12 years. Results: A total of 70 articles were selected for inclusion in the review, which addresses the complexity of the cybersecurity measures adopted within healthcare and clinical environments. The articles included in the review highlight the evolving nature of cybersecurity threats, stemming from exploiting IT infrastructures to more advanced attacks launched with the intent of exploiting human vulnerability. A steady increase in the literature on the threat of phishing attacks evidences the growing threat of social engineering attacks. As a countermeasure, through the review, we identified articles that provide methodologies resulting from case studies to promote cybersecurity awareness among stakeholders. The articles included highlight the need to adopt cyber hygiene practices among healthcare professionals when accessing social media platforms, which form an ideal test bed for attackers to gain insight into the lives of healthcare professionals. Additionally, the review also includes articles that present strategies adopted by healthcare organisations in countering the impact of social engineering attacks. The evaluation of the cybersecurity risk assessment of an organisation is another key area of study reported in the literature, which recommends the organisation of European and international standards in countering social engineering attacks. Lastly, the review includes articles reporting on national case studies with an overview of the economic and societal impact of the service disruptions caused by cyberattacks. Discussion: One of the limitations of the review is the subjective ranking by the authors of the relevance of the literature to each of the research questions identified. We also acknowledge the limited amount of literature that focuses on the human factors of cybersecurity in healthcare in general; therefore, the search queries were formulated using well-established cybersecurity-related topics categorised according to the threats, risk assessment and organisational strategies reported in the literature. Full article
(This article belongs to the Special Issue Cyber Situational Awareness in Computer Networks)
Article
Fabrication and Characterization of Flexible Capacitive Humidity Sensors based on Graphene Oxide on Porous PTFE Substrates
Sensors 2021, 21(15), 5118; https://doi.org/10.3390/s21155118 (registering DOI) - 28 Jul 2021
Abstract
Porous polytetrafluoroethylene (PTFE) is physically flexible, thermally and chemically stable, relatively inexpensive, and commercially available, making it attractive for various flexible sensors. This paper studies flexible capacitive humidity sensors fabricated on porous PTFE substrates. Graphene oxide (GO) was used as the sensing material, both hydrophobic and hydrophilic porous PTFE served as the substrates, and interdigitated electrodes were screen-printed on the PTFE substrates. SEM and Raman spectroscopy were utilized to characterize the GO and PTFE. An ethanol soak process was developed to increase the yield of the humidity sensors based on hydrophobic porous PTFE substrates. The static and dynamic properties of these sensors were tested and analyzed. The results demonstrate that the flexible capacitive humidity sensors fabricated on the ethanol-treated hydrophobic PTFE exhibit high sensitivity, small hysteresis, and fast response/recovery times. Full article
(This article belongs to the Section Wearables)
Article
A Motion Artifact Correction Procedure for fNIRS Signals Based on Wavelet Transform and Infrared Thermography Video Tracking
Sensors 2021, 21(15), 5117; https://doi.org/10.3390/s21155117 (registering DOI) - 28 Jul 2021
Abstract
Functional near infrared spectroscopy (fNIRS) is a neuroimaging technique that allows monitoring of the functional hemoglobin oscillations related to cortical activity. One of the main issues in fNIRS applications is motion artefact removal, since a corrupted physiological signal is not correctly indicative of the underlying biological process. A novel procedure for motion artifact correction of fNIRS signals, based on the wavelet transform and a video tracking procedure developed for infrared thermography (IRT), is presented. In detail, fNIRS and IRT were concurrently recorded, and the optodes’ movement was estimated employing a video tracking procedure developed for IRT recordings. The wavelet transform of the fNIRS signal and of the optodes’ movement, together with their wavelet coherence, were computed. Then, the inverse wavelet transform was evaluated for the fNIRS signal, excluding the frequency content corresponding to the optodes’ movement and to the coherence in the epochs where they exceeded an established threshold. The method was tested using simulated functional hemodynamic responses added to real resting-state fNIRS recordings corrupted by movement artifacts. The results demonstrated the effectiveness of the procedure in eliminating noise, producing results with a higher signal-to-noise ratio than another validated method. Full article
(This article belongs to the Special Issue Biomedical Infrared Imaging: From Sensors to Applications Ⅱ)
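The correction described above masks wavelet content of the fNIRS signal wherever the optode-movement estimate (and the signal-motion coherence) exceeds a threshold. The sketch below is a simplified discrete-wavelet variant of that idea using PyWavelets; the wavelet, decomposition level, threshold rule, and test signals are assumptions, and the paper's actual pipeline uses the continuous transform and wavelet coherence.

```python
import numpy as np
import pywt

def suppress_motion_epochs(fnirs, motion, wavelet="db4", level=5, k=3.0):
    """Zero fNIRS wavelet coefficients wherever the co-registered motion signal
    shows large coefficients at the same scale and epoch (simplified DWT variant
    of the CWT/coherence masking described in the abstract)."""
    cf = pywt.wavedec(fnirs, wavelet, level=level)
    cm = pywt.wavedec(motion, wavelet, level=level)
    cleaned = [cf[0]]                                     # keep the approximation
    for d_sig, d_mot in zip(cf[1:], cm[1:]):
        thr = k * np.median(np.abs(d_mot)) / 0.6745       # robust per-scale threshold
        cleaned.append(np.where(np.abs(d_mot) > thr, 0.0, d_sig))
    return pywt.waverec(cleaned, wavelet)[: len(fnirs)]

# Hypothetical resting-state trace corrupted by a short movement spike
fs = 10.0
t = np.arange(0, 60, 1 / fs)
motion = np.zeros_like(t)
motion[300:320] = 1.0
fnirs = 0.5 * np.sin(2 * np.pi * 0.1 * t) + 5.0 * motion + 0.05 * np.random.randn(t.size)
print(suppress_motion_epochs(fnirs, motion)[:5])
```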
Review
Survey and Performance Analysis of Deep Learning Based Object Detection in Challenging Environments
Sensors 2021, 21(15), 5116; https://doi.org/10.3390/s21155116 (registering DOI) - 28 Jul 2021
Abstract
Recent progress in deep learning has led to accurate and efficient generic object detection networks. Training of highly reliable models depends on large datasets with highly textured and rich images. However, in real-world scenarios, the performance of the generic object detection system decreases when (i) occlusions hide the objects, (ii) objects are present in low-light images, or (iii) they are merged with background information. In this paper, we refer to all these situations as challenging environments. With the recent rapid development in generic object detection algorithms, notable progress has been observed in the field of deep learning-based object detection in challenging environments. However, there is no consolidated reference to cover the state of the art in this domain. To the best of our knowledge, this paper presents the first comprehensive overview, covering recent approaches that have tackled the problem of object detection in challenging environments. Furthermore, we present a quantitative and qualitative performance analysis of these approaches and discuss the currently available challenging datasets. Moreover, this paper investigates the performance of current state-of-the-art generic object detection algorithms by benchmarking results on the three well-known challenging datasets. Finally, we highlight several current shortcomings and outline future directions. Full article
(This article belongs to the Section Physical Sensors)
Article
An Advanced Sensor for Particles in Gases Using Dynamic Light Scattering in Air as Solvent
Sensors 2021, 21(15), 5115; https://doi.org/10.3390/s21155115 (registering DOI) - 28 Jul 2021
Abstract
Dynamic Light Scattering is a technique currently used to assess particle size and size distribution by processing the scattered light intensity. Typically, the particles to be investigated are suspended in a liquid solvent. An analysis of the particular conditions required to perform a light scattering experiment on particles in air is presented in detail, together with a simple experimental setup and the data processing procedure. The results reveal that such an experiment is possible and that the setup and procedure, both simplified to the extreme, enable the design of an advanced sensor for particles and fumes that can output the average size of the particles in air. Full article
(This article belongs to the Special Issue Optical Sensing for Environmental Monitoring)
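In the simplest DLS model, the intensity autocorrelation decays at a rate Γ = D q², and the Stokes-Einstein relation D = k_BT/(3πηd) then yields the hydrodynamic diameter; with air as the suspending medium, η is the viscosity of air. The Python sketch below inverts these two relations for an assumed decay rate and geometry; all numeric values are illustrative, not measurements from the paper.

```python
import numpy as np

K_B = 1.380649e-23           # Boltzmann constant, J/K

def hydrodynamic_diameter(gamma, wavelength_m, theta_rad, n_medium, T_K, eta_Pa_s):
    """Invert Gamma = D*q^2 and Stokes-Einstein D = kT/(3*pi*eta*d) for the diameter."""
    q = 4 * np.pi * n_medium / wavelength_m * np.sin(theta_rad / 2)  # scattering vector
    D = gamma / q**2                                                 # diffusion coefficient
    return K_B * T_K / (3 * np.pi * eta_Pa_s * D)

# Hypothetical single-exponential decay rate fitted from g2(tau) for particles in air
gamma = 9.4e3          # s^-1, assumed
diameter = hydrodynamic_diameter(gamma, wavelength_m=633e-9, theta_rad=np.pi / 2,
                                 n_medium=1.0003, T_K=293.0, eta_Pa_s=1.8e-5)
print(f"estimated particle diameter ≈ {diameter * 1e9:.0f} nm")
```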
Review
A Review on the Development of Gold and Silver Nanoparticles-Based Biosensor as a Detection Strategy of Emerging and Pathogenic RNA Virus
Sensors 2021, 21(15), 5114; https://doi.org/10.3390/s21155114 (registering DOI) - 28 Jul 2021
Abstract
The emergence of highly pathogenic and deadly human coronaviruses, namely SARS-CoV and MERS-CoV within the past two decades and currently SARS-CoV-2, has resulted in millions of human deaths across the world. In addition, other human viral diseases, such as mosquito-borne viral diseases and blood-borne viruses, also contribute to a higher risk of death in severe cases. To date, there is no specific drug or medicine available to cure these human viral diseases. Therefore, early and rapid detection without compromising test accuracy is required in order to provide suitable treatment for the containment of the diseases. Recently, nanomaterials-based biosensors have attracted enormous interest due to their biological activities and unique sensing properties, which enable the detection of analytes such as nucleic acids (DNA or RNA), aptamers, and proteins in clinical samples. In addition, advances in nanotechnology also enable the development of miniaturized detection systems for point-of-care (POC) biosensors, which could be a new strategy for detecting human viral diseases. The detection of virus-specific genes using single-stranded DNA (ssDNA) probes has become of particular interest due to its higher sensitivity and specificity compared to immunological methods based on antibodies or antigens for the early diagnosis of viral infection. Hence, this review has been developed to provide an overview of the current development of nanoparticles-based biosensors that target pathogenic RNA viruses, toward a robust and effective detection strategy for existing or newly emerging human viral diseases such as SARS-CoV-2. This review emphasizes the nanoparticles-based biosensors developed using noble metals such as gold (Au) and silver (Ag), by virtue of their powerful characteristics as signal amplifiers or enhancers in the detection of nucleic acids. In addition, this review provides broad knowledge of the analytical methods involved in the development of nanoparticles-based biosensors for the detection of viral nucleic acids using both optical and electrochemical techniques. Full article
(This article belongs to the Section Biosensors)
Article
Relation-Based Deep Attention Network with Hybrid Memory for One-Shot Person Re-Identification
Sensors 2021, 21(15), 5113; https://doi.org/10.3390/s21155113 (registering DOI) - 28 Jul 2021
Abstract
One-shot person re-identification, in which each identity has only one labeled sample among numerous unlabeled samples, has been proposed to tackle the shortage of labeled data. In scenarios without sufficient labeled data, it is very challenging to match the performance of supervised tasks in which sufficient labeled samples are available. In this paper, we propose a relation-based attention network with hybrid memory, which makes full use of global information to attend to identity features during model training. Importantly, our specially designed network architecture effectively reduces the interference of environmental noise. Moreover, we propose a hybrid memory to train the one-shot data and unlabeled data in a unified framework, which notably contributes to the performance of person re-identification. In particular, our designed one-shot feature update mode effectively alleviates the problem of overfitting caused by the lack of supervised information during the training process. Compared with state-of-the-art unsupervised and one-shot algorithms for person re-identification, our method achieves considerable improvements of 6.7%, 4.6%, and 11.5% on the Market-1501, DukeMTMC-reID, and MSMT17 datasets, respectively, and becomes the new state-of-the-art method for one-shot person re-identification. Full article
(This article belongs to the Special Issue Computer Vision Based Smart Sensing)
Article
A Frequency Modulation-Based Taxel Array: A Bio-Inspired Architecture for Large-Scale Artificial Skin
Sensors 2021, 21(15), 5112; https://doi.org/10.3390/s21155112 (registering DOI) - 28 Jul 2021
Abstract
This work introduces an array prototype based on a Frequency Modulation (FM) encoding architecture to transfer multiple sensor signals on a single wire. The use case presented adopts Hall-effect sensors as an example to represent a much larger range of sensor types (e.g., proximity and temperature). This work aims to contribute to large-area artificial skin systems, which are a key element in enhancing robotic platforms. Artificial skin will allow robotic platforms to have spatial awareness, which will make interaction with objects and users safe. The FM-based architecture has been developed to address limitations in large-scale artificial skin scalability. Scalability issues include power requirements; the number of wires needed; and frequency, density, and sensitivity bottlenecks. In this work, eight sensor signals are simultaneously acquired, transferred on a single wire, and decoded in real time. The overall taxel array current consumption is 36 mA. The work experimentally validates and demonstrates that different input signals can be effectively transferred using this approach, minimizing the wiring and power consumption of the taxel array. Four different tests using single as well as multiple stimuli are presented. Observations on performance, noise, and taxel array behaviour are reported. The results show that the taxel array is reliable and effective in detecting the applied stimuli. Full article
(This article belongs to the Section Sensors and Robotics)
Article
OFDMA Backoff Control Scheme for Improving Channel Efficiency in the Dynamic Network Environment of IEEE 802.11ax WLANs
Sensors 2021, 21(15), 5111; https://doi.org/10.3390/s21155111 (registering DOI) - 28 Jul 2021
Abstract
IEEE 802.11ax uplink orthogonal frequency division multiple access (OFDMA)-based random access (UORA) is a new feature for random channel access in wireless local area networks (WLANs). Similar to the legacy random access scheme in WLANs, UORA performs the OFDMA backoff (OBO) procedure to access the channel and selects a random OBO counter within the OFDMA contention window (OCW). An access point (AP) can determine the OCW range and inform each station (STA) of it. However, how to determine a reasonable OCW range is beyond the scope of the IEEE 802.11ax standard. The OCW range is crucial to UORA performance and primarily depends on the number of contending STAs, but it is challenging for the AP to accurately and quickly estimate or keep track of the number of contending STAs without the aid of a specific signaling mechanism. In addition, such a mechanism incurs additional delay and overhead in the channel access procedure. Therefore, the performance of a UORA scheme can be degraded by an improper OCW range, especially when the number of contending STAs changes dynamically. We first observed the effect of OCW values on channel efficiency and derived the optimal value from an analytical model. Next, we proposed a simple yet effective OBO control scheme in which each STA determines its own OBO counter in a distributed manner rather than adjusting the OCW value globally. In the proposed scheme, each STA determines an appropriate OBO counter depending on whether its previous transmission was successful, so that collisions can be mitigated without leaving OFDMA resource units unnecessarily idle. The results of a simulation study confirm that the throughput of the proposed scheme is comparable to that of the optimal OCW-based scheme and is improved by up to 15 times compared to the standard UORA scheme. Full article
(This article belongs to the Special Issue IEEE 802.11 and Wireless Sensors Network)
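The key idea above is that each STA adapts its own OBO behaviour from per-transmission feedback instead of relying on a globally tuned OCW. The toy Python simulation below illustrates that style of distributed adaptation (shrink a per-STA window after a success, grow it after a collision); the update rule, window sizes, and the simplified one-shot contention per trigger frame are assumptions for illustration and do not reproduce the paper's scheme or the exact UORA countdown procedure.

```python
import random

class Station:
    """Per-STA backoff state; the concrete success/failure update below is an
    assumed toy rule, not the rule proposed in the paper."""
    def __init__(self, ocw_min=7, ocw_max=31):
        self.ocw_min, self.ocw_max = ocw_min, ocw_max
        self.ocw = ocw_min

    def draw_obo(self):
        return random.randint(0, self.ocw)

    def feedback(self, success):
        # Shrink the personal window after a success, grow it after a collision.
        self.ocw = self.ocw_min if success else min(2 * self.ocw + 1, self.ocw_max)

def trigger_frame_round(stations, num_rus=4):
    """One simplified UORA round: STAs whose OBO counter fits the advertised RUs
    pick a random RU; an RU chosen by exactly one STA is a success."""
    choices = {}
    for sta in stations:
        if sta.draw_obo() < num_rus:                      # counter exhausted -> contend
            choices.setdefault(random.randrange(num_rus), []).append(sta)
    for contenders in choices.values():
        for sta in contenders:
            sta.feedback(success=(len(contenders) == 1))
    return sum(len(c) == 1 for c in choices.values())     # successful RUs this round

stations = [Station() for _ in range(20)]
print([trigger_frame_round(stations) for _ in range(10)])
```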
Article
Monitoring Soil and Ambient Parameters in the IoT Precision Agriculture Scenario: An Original Modeling Approach Dedicated to Low-Cost Soil Water Content Sensors
Sensors 2021, 21(15), 5110; https://doi.org/10.3390/s21155110 (registering DOI) - 28 Jul 2021
Abstract
A low-power wireless sensor network based on the LoRaWAN protocol was designed with a focus on low-cost IoT Precision Agriculture applications, such as greenhouse sensing and actuation. All subsystems used in this research were designed using commercial components and free or open-source software libraries. The whole system was implemented to demonstrate the feasibility of a modular system built with cheap off-the-shelf components, including sensors. The experimental outputs were collected and stored in a database managed by a virtual machine running in a cloud service. The collected data can be visualized in real time by the user through a graphical interface. The reliability of the whole system was proven during a continuous experiment with two natural soils, Loamy Sand and Silty Loam. Regarding the soil parameters, the system performance was compared with that of a reference sensor from Sentek. The measurements highlighted good agreement for the temperature, within the expected accuracy of the adopted sensors, and a non-constant sensitivity for the low-cost volumetric water content (VWC) sensor. Finally, for the low-cost VWC sensor we implemented a novel procedure to optimize the parameters of the non-linear fitting equation correlating its analog voltage output with the reference VWC. Full article
(This article belongs to the Special Issue Wireless Sensing and Networking for the Internet of Things)
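The last step above fits a non-linear equation that maps the low-cost sensor's analog voltage to the reference VWC. The sketch below shows how such a calibration could be fitted with scipy.optimize.curve_fit; the exponential model form and the paired voltage/VWC readings are placeholders, not the equation or data used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def vwc_model(voltage, a, b, c):
    """Assumed non-linear calibration form (placeholder, not the paper's equation):
    VWC = a * exp(b * V) + c."""
    return a * np.exp(b * voltage) + c

# Hypothetical paired readings: low-cost sensor voltage vs. reference (Sentek) VWC
voltage = np.array([1.10, 1.35, 1.60, 1.85, 2.10, 2.35])   # V
vwc_ref = np.array([5.0, 9.0, 14.5, 21.0, 30.0, 41.0])     # % volumetric water content

params, _cov = curve_fit(vwc_model, voltage, vwc_ref, p0=(1.0, 1.0, 0.0), maxfev=10000)
print("fitted a, b, c:", params)
print("VWC at 2.0 V ≈", vwc_model(2.0, *params))
```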
Article
Low-Cost Eye Tracking Calibration: A Knowledge-Based Study
Sensors 2021, 21(15), 5109; https://doi.org/10.3390/s21155109 (registering DOI) - 28 Jul 2021
Abstract
Subject calibration has been demonstrated to improve the accuracy of high-performance eye trackers. However, the true weight of calibration in off-the-shelf eye tracking solutions has still not been addressed. In this work, a theoretical framework to measure the effects of calibration in deep learning-based gaze estimation is proposed for low-resolution systems. To this end, features extracted from the synthetic U2Eyes dataset are used in a fully connected network in order to isolate the effect of specific user features, such as kappa angles. Then, the impact of system calibration in a real setup employing I2Head dataset images is studied. The obtained results show accuracy improvements of over 50%, proving that calibration is a key process also in low-resolution gaze estimation scenarios. Furthermore, we show that, after calibration, accuracy values close to those obtained by high-resolution systems, in the range of 0.7°, could theoretically be obtained if a careful selection of image features were performed, demonstrating significant room for improvement for off-the-shelf eye tracking systems. Full article
(This article belongs to the Special Issue Eye Tracking Techniques, Applications, and Challenges)
Article
A Preference-Driven Smart Home Service for the Elderly’s Biophilic Experience
Sensors 2021, 21(15), 5108; https://doi.org/10.3390/s21155108 (registering DOI) - 28 Jul 2021
Abstract
Smart home services (SHS) should support positive experiences for the elderly in their homes, with a focus on getting closer to nature. This study identified the services preferred by the elderly through a survey on biophilic experience-based SHS and discusses the configuration of the sensors and devices required to provide these services. We reorganized the biophilic experience-based SHS and the related sensors and devices, building on our previous study, and developed a survey instrument. A preference survey was conducted on 250 adults aged 20 and older, and the SPSS program was used for a factor analysis and independent two-sample t-tests. We derived six factors for biophilic experience-based SHS. Compared to other age groups, the elderly preferred services mainly attributed to factors such as ‘Immersion and interaction with nature’ (A), ‘Management of well-being and indoor environmental quality (IEQ)’ (B), and ‘Natural process and systems’ (F). We proposed 15 prioritized services, along with their sensor and device configurations, in consideration of service provision regarding the elderly’s preferences and universality. This study contributes to new developments in elderly-friendly smart home research by bringing bio-friendly ideas to market in the development of medical services and SHS for the elderly. Full article
Article
DC Voltage Sensorless Predictive Control of a High-Efficiency PFC Single-Phase Rectifier Based on the Versatile Buck-Boost Converter
Sensors 2021, 21(15), 5107; https://doi.org/10.3390/s21155107 (registering DOI) - 28 Jul 2021
Abstract
Many electronic power distribution systems have strong needs for highly efficient AC-DC conversion that can be satisfied by using a buck-boost converter at the core of the power factor correction (PFC) stage. These converters can regulate the input voltage over a wide range with reduced effort compared to other solutions. As a result, buck-boost converters could potentially improve the efficiency in applications requiring DC voltages lower than the peak grid voltage. This paper compares SEPIC, noninverting, and versatile buck-boost converters as PFC single-phase rectifiers. The converters are designed for an output voltage of 200 V and an rms input voltage of 220 V at 3.2 kW. The PFC uses an inner discrete-time predictive current control loop with an output voltage regulator based on a sensorless strategy. A PLECS thermal simulation is performed to obtain the power conversion efficiency results for the buck-boost converters considered. The thermal simulations show that the versatile buck-boost (VBB) converter, currently unexplored for this application, can provide higher power conversion efficiency than the SEPIC and non-inverting buck-boost converters. Finally, a hardware-in-the-loop (HIL) real-time simulation of the VBB converter is performed using a PLECS RT Box 1 device. At the same time, the proposed controller is built and then flashed to a low-cost digital signal controller (DSC), the Texas Instruments LAUNCHXL-F28069M evaluation board. The HIL real-time results verify the correctness of the theoretical analysis and the effectiveness of the proposed architecture in operating with high power conversion efficiency and regulating the DC output voltage without sensing it, while the sinusoidal input current remains perfectly in phase with the grid voltage. Full article
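As a rough illustration of the inner discrete-time predictive current loop mentioned above, a deadbeat-style step computes the converter terminal voltage that drives the inductor current to its next reference sample. The Python sketch below shows that generic step under a simple v_grid − v_conv = L·di/dt model; it is not the paper's control law for the versatile buck-boost converter, and all numbers are assumed.

```python
def predictive_voltage_reference(i_meas, i_ref_next, v_grid, L, Ts):
    """One deadbeat-style predictive current-control step for a PFC input stage
    (generic illustration, not the paper's exact law for the VBB converter):
    find the converter terminal voltage that drives i(k+1) to i_ref(k+1),
    assuming v_grid - v_conv = L * di/dt over one sampling period Ts."""
    di = i_ref_next - i_meas
    return v_grid - L * di / Ts    # the modulator then maps this to a duty cycle

# Example step: assumed 50 kHz control loop, 1 mH inductor, sinusoidal current reference
print(predictive_voltage_reference(i_meas=3.2, i_ref_next=3.5, v_grid=180.0,
                                   L=1e-3, Ts=20e-6))
```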
Article
Skin-Friction-Based Identification of the Critical Lines in a Transonic, High Reynolds Number Flow via Temperature-Sensitive Paint
Sensors 2021, 21(15), 5106; https://doi.org/10.3390/s21155106 (registering DOI) - 28 Jul 2021
Abstract
In this contribution, three methodologies based on temperature-sensitive paint (TSP) data were further developed and applied for the optical determination of the critical locations of flow separation and reattachment in compressible, high Reynolds number flows. The methodologies rely on skin-friction extraction approaches developed for low-speed flows, which were adapted in this work to study flow separation and reattachment in the presence of shockwave/boundary-layer interaction. In a first approach, skin-friction topological maps were obtained from time-averaged surface temperature distributions, thus enabling the identification of the critical lines as converging and diverging skin-friction lines. In the other two approaches, the critical lines were identified from the maps of the propagation celerity of temperature perturbations, which were determined from time-resolved TSP data. The experiments were conducted at a freestream Mach number of 0.72 and a chord Reynolds number of 9.7 million in the Transonic Wind Tunnel Göttingen on a VA-2 supercritical airfoil model, which was equipped with two exchangeable TSP modules specifically designed for transonic, high Reynolds number tests. The separation and reattachment lines identified via the three different TSP-based approaches were shown to be in mutual agreement, and were also found to be in agreement with reference experimental and numerical data. Full article
(This article belongs to the Special Issue Optical Sensors for Flow Diagnostics)
Article
Deep and Wide Transfer Learning with Kernel Matching for Pooling Data from Electroencephalography and Psychological Questionnaires
Sensors 2021, 21(15), 5105; https://doi.org/10.3390/s21155105 (registering DOI) - 28 Jul 2021
Abstract
Motor imagery (MI) promotes motor learning and encourages brain–computer interface (BCI) systems that entail electroencephalogram (EEG) decoding. However, a long period of training is required to master the self-regulation of brain rhythms, resulting in users with MI inefficiency. We introduce a parameter-based approach of cross-subject transfer learning to improve the performance of poor-performing individuals in MI-based BCI systems, pooling data from labeled EEG measurements and psychological questionnaires via kernel embedding. To this end, a Deep and Wide neural network for MI classification is implemented to pre-train the network from the source domain. Then, the parameter layers are transferred to initialize the target network within a fine-tuning procedure to recompute the Multilayer Perceptron-based accuracy. To perform data fusion combining categorical features with real-valued features, we implement stepwise kernel matching via Gaussian embedding. Finally, the paired source–target sets are selected for evaluation purposes according to inefficiency-based clustering of subjects, to consider their influence on BCI motor skills, exploring two strategies for choosing the best-performing subjects (source space): single-subject and multiple-subject. Validation results achieved for discriminant MI tasks demonstrate that the introduced Deep and Wide neural network presents competitive accuracy even after the inclusion of questionnaire data. Full article
Communication
Using Off-the-Shelf Graphic Design Software for Validating the Operation of an Image Processing System
Sensors 2021, 21(15), 5104; https://doi.org/10.3390/s21155104 (registering DOI) - 28 Jul 2021
Abstract
Fluorescent markers are widely used to protect banknotes, passports, and other documents. Verification of such documents relies upon visual assessment of the markers revealed by ultraviolet (UV) radiation. However, such an explicit approach is inappropriate in certain circumstances, e.g., when discreetly checking people for marks left by a pepper gel thrower. The UV light and fluorescent light must not be visible in such applications, yet reliable detection of the markers must still be performed. This problem was successfully resolved using the TRIZ methodology, which led to a patent application. The main idea of the solution is to use low-intensity, time-variable UV light to illuminate an object and to process the image of the object acquired by a camera in order to detect colour changes too small to be noticed with the naked eye. This paper describes how popular graphics editors such as Adobe Photoshop Elements were used to validate the system concept devised. Simulation experiments used images taken in both visible and UV light to assess the effectiveness and perceptibility of the detection process. The advantage of such validation comes from using commodity software and performing the experiments without access to a laboratory and without physical samples, which makes this approach especially suitable in pandemic times. Full article
Article
Small-Target Complex-Scene Detection Method Based on Information Interworking High-Resolution Network
Sensors 2021, 21(15), 5103; https://doi.org/10.3390/s21155103 (registering DOI) - 28 Jul 2021
Abstract
The CNN (convolutional neural network)-based small target detection techniques for static complex scenes have been applied in many fields, but there are still certain technical challenges. This paper proposes a novel high-resolution small-target detection network named the IIHNet (information interworking high-resolution network) for complex scenes, which is based on information interworking processing technology. The proposed network not only can output a high-resolution presentation of a small target but can also keep the detection network simple and efficient. The key characteristic of the proposed network is that the target features are divided into three categories according to image resolution: high-resolution, medium-resolution, and low-resolution features. The basic features are extracted by convolution at the initial layer of the network. Then, convolution is carried out synchronously in the three resolution channels with information fusion in the horizontal and vertical directions of the network. At the same time, multiple utilizations and augmentations of feature information are carried out in the channel convolution direction. Experimental results show that the proposed network can achieve higher reasoning performance than the other compared networks without any compromise in terms of the detection effect. Full article
(This article belongs to the Section Intelligent Sensors)
Article
An Optimized Nature-Inspired Metaheuristic Algorithm for Application Mapping in 2D-NoC
Sensors 2021, 21(15), 5102; https://doi.org/10.3390/s21155102 (registering DOI) - 28 Jul 2021
Abstract
Mapping application task graphs onto intellectual property (IP) cores in a network-on-chip (NoC) is a non-deterministic polynomial-time hard problem. The evolution of network performance mainly depends on an effective and efficient mapping technique and the optimization of performance and cost metrics. These metrics mainly include power, reliability, area, thermal distribution and delay. A state-of-the-art mapping technique for NoC, named the sailfish optimization algorithm (SFOA), is introduced. The proposed algorithm minimizes the power dissipation of the NoC on an empirical basis by applying a shared k-nearest neighbor clustering approach, and it gives quicker mapping over the six standard benchmarks considered. The experimental results indicate that the proposed technique outperforms other existing nature-inspired metaheuristic approaches, especially on large application task graphs. Full article
(This article belongs to the Section Communications)
Article
Pragmatic Micrometre to Millimetre Calibration Using Multiple Methods for Low-Coherence Interferometer in Embedded Metrology Applications
Sensors 2021, 21(15), 5101; https://doi.org/10.3390/s21155101 (registering DOI) - 28 Jul 2021
Abstract
In-situ metrology utilised for surface topography, texture and form analysis, along with quality control processes, requires a high level of reliability. Hence, a traceable method for calibrating the measurement system’s transfer function is required at regular intervals. This paper compares three methods of dimensional calibration for a spectral-domain low-coherence interferometer, using a reference laser interferometer versus two types of single material measure. Additionally, the impact of dataset sparsity is shown, along with the effect of using a single calibration dataset on system performance when operating across different media. Full article
(This article belongs to the Section Optical Sensors)
Article
A Deep Neural Network-Based Multi-Frequency Path Loss Prediction Model from 0.8 GHz to 70 GHz
Sensors 2021, 21(15), 5100; https://doi.org/10.3390/s21155100 (registering DOI) - 28 Jul 2021
Abstract
Large-scale fading models play an important role in estimating radio coverage, optimizing base station deployments and characterizing the radio environment to quantify the performance of wireless networks. In recent times, multi-frequency path loss models have attracted much interest due to their expected support for both sub-6 GHz and higher frequency bands in future wireless networks. Traditionally, linear multi-frequency path loss models like the ABG model have been considered; however, such models lack accuracy. A path loss model based on a deep learning approach is an alternative to traditional linear path loss models that avoids the time-consuming prediction of path loss parameters from large datasets at new frequencies and in new scenarios. In this paper, we propose a feed-forward deep neural network (DNN) model to predict the path loss of 13 different frequencies from 0.8 GHz to 70 GHz simultaneously in an urban and suburban environment in a non-line-of-sight (NLOS) scenario. We investigated a broad range of possible values for the hyperparameters to search for the best set and obtain the optimal architecture of the proposed DNN model. The results show that the proposed DNN-based path loss model improves the mean square error (MSE) by about 6 dB and achieves higher prediction accuracy (R²) compared to the multi-frequency ABG path loss model. The paper applies the XGBoost algorithm to evaluate the importance of the features for the proposed model and their impact on the path loss prediction. In addition, the effects of hyperparameters, including the activation function, the number of hidden neurons in each layer, the optimization algorithm, the regularization factor, the batch size, the learning rate, and the momentum, on the performance of the proposed model in terms of prediction error and prediction accuracy are also investigated. Full article
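A feed-forward DNN for multi-frequency path loss regression can be prototyped in a few lines; the sketch below uses scikit-learn's MLPRegressor on synthetic ABG-like data so that it runs standalone. The feature set, network size, and hyperparameters are assumptions for illustration and are not the architecture or dataset of the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for measured data: features are log-distance and log-frequency,
# targets come from an ABG-like model plus log-normal shadowing (illustration only).
n = 5000
d = rng.uniform(10, 500, n)                        # distance, m
f = rng.choice([0.8, 2.4, 3.5, 28, 60, 70], n)     # frequency, GHz
pl = 20 + 31 * np.log10(d) + 21 * np.log10(f) + rng.normal(0, 6, n)   # path loss, dB

X = np.column_stack([np.log10(d), np.log10(f)])
X = StandardScaler().fit_transform(X)

# Feed-forward DNN; layers, activation, optimizer, regularization, batch size and
# learning rate below are assumed values, not the paper's tuned hyperparameters.
model = MLPRegressor(hidden_layer_sizes=(64, 64, 32), activation="relu",
                     solver="adam", alpha=1e-4, batch_size=128,
                     learning_rate_init=1e-3, max_iter=300, random_state=0)
model.fit(X[:4000], pl[:4000])
pred = model.predict(X[4000:])
print("test MSE (dB^2):", np.mean((pred - pl[4000:]) ** 2))
```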
Article
Spiral SAR Imaging with Fast Factorized Back-Projection: A Phase Error Analysis
Sensors 2021, 21(15), 5099; https://doi.org/10.3390/s21155099 (registering DOI) - 28 Jul 2021
Abstract
This paper presents a fast factorized back-projection (FFBP) algorithm that can satisfactorily process real P-band synthetic aperture radar (SAR) data collected from a spiral flight pattern performed by a drone-borne SAR system. Choosing the best setup when processing SAR data with an FFBP algorithm is not so straightforward, so predicting how this choice will affect the quality of the output image is valuable information. This paper provides a statistical phase error analysis to validate the hypothesis that the phase error standard deviation can be predicted by geometric parameters specified at the start of processing. In particular, for a phase error standard deviation of ~12°, the FFBP is up to 21 times faster than the direct back-projection algorithm for 3D images and up to 13 times faster for 2D images. Full article
(This article belongs to the Special Issue Synthetic Aperture Radar (SAR) Simulation and Processing)
Article
A Soft Tactile Sensor Based on Magnetics and Hybrid Flexible-Rigid Electronics
Sensors 2021, 21(15), 5098; https://doi.org/10.3390/s21155098 (registering DOI) - 28 Jul 2021
Abstract
Tactile sensing is crucial for robots to manipulate objects successfully. However, integrating tactile sensors into robotic hands is still challenging, mainly due to the need to cover small multi-curved surfaces with several components that must be miniaturized. In this paper, we report the design of a novel magnetic-based tactile sensor to be integrated into the robotic hand of the humanoid robot Vizzy. We designed and fabricated a flexible 4 × 2 matrix of Si chips of magnetoresistive spin valve sensors that, coupled with a single small magnet, can measure contact forces from 0.1 to 5 N on multiple locations over the surface of a robotic fingertip; this design is innovative with respect to previous works in the literature, and it is made possible by careful engineering and miniaturization of the custom-made electronic components that we employ. In addition, we characterize the behavior of the sensor through a COMSOL simulation, which can be used to generate optimized designs for sensors with different geometries. Full article
(This article belongs to the Special Issue Sensors for Robotic Applications)
Article
A Two-Level Speaker Identification System via Fusion of Heterogeneous Classifiers and Complementary Feature Cooperation
Sensors 2021, 21(15), 5097; https://doi.org/10.3390/s21155097 - 28 Jul 2021
Abstract
We present a new architecture to address the challenges of speaker identification that arise in the interaction of humans with social robots. Though deep learning systems have led to impressive performance in many speech applications, limited speech data at the training stage and short utterances with background noise at the test stage present challenges and are still open problems, as no optimum solution has been reported to date. The proposed design employs a generative model, namely the Gaussian mixture model (GMM), and a discriminative model, namely support vector machine (SVM) classifiers, as well as prosodic features and short-term spectral features, to concurrently classify a speaker’s gender and his/her identity. The proposed architecture works in a semi-sequential manner consisting of two stages: the first classifier exploits the prosodic features to determine the speaker’s gender, which in turn is used with the short-term spectral features as input to the second classifier system in order to identify the speaker. The second classifier system employs two types of short-term spectral features, namely mel-frequency cepstral coefficients (MFCC) and gammatone frequency cepstral coefficients (GFCC), as well as gender information, as inputs to two different classifiers (GMM and GMM supervector-based SVM), which in total leads to the construction of four classifiers. The outputs from the second-stage classifiers, namely the GMM-MFCC maximum likelihood classifier (MLC), GMM-GFCC MLC, GMM-MFCC supervector SVM, and GMM-GFCC supervector SVM, are fused at the score level by the weighted Borda count approach. The weight factors are computed on the fly via a Mamdani fuzzy inference system whose inputs are the signal-to-noise ratio and the length of the utterance. Experimental evaluations suggest that the proposed architecture and the fusion framework are promising and can improve the recognition performance of the system in challenging environments where the signal-to-noise ratio is low and the length of the utterance is short; such scenarios often arise in social robot interactions with humans. Full article
(This article belongs to the Special Issue Biomedical Signal and Image Processing in Speech Analysis)
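The score-level fusion step lends itself to a compact illustration. The sketch below implements a weighted Borda count over the rankings produced by four second-stage classifiers; the `reliability_weights` helper is a deliberately crude stand-in for the Mamdani fuzzy inference system described above, and its thresholds as well as the example rankings are assumptions, not values from the paper.

```python
import numpy as np

def weighted_borda_fusion(rankings, weights):
    """Fuse speaker rankings from several classifiers via a weighted Borda count.

    rankings : list of 1-D arrays, each an ordering of speaker IDs from best to
               worst as produced by one classifier.
    weights  : list of floats, one reliability weight per classifier.
    Returns the speaker ID with the highest weighted Borda score.
    """
    scores = {}
    for ranking, w in zip(rankings, weights):
        n = len(ranking)
        for position, speaker in enumerate(ranking):
            # Borda points: n - 1 for the top-ranked speaker, 0 for the last one.
            scores[speaker] = scores.get(speaker, 0.0) + w * (n - 1 - position)
    return max(scores, key=scores.get)

def reliability_weights(snr_db, utterance_s):
    """Crude stand-in for the fuzzy inference step: trust one feature stream more
    in clean/long conditions and the other in noisy/short ones (illustrative)."""
    clean = np.clip(snr_db / 20.0, 0.0, 1.0)
    long_utt = np.clip(utterance_s / 5.0, 0.0, 1.0)
    q = 0.5 * (clean + long_utt)
    # order: GMM-MFCC MLC, GMM-GFCC MLC, GMM-MFCC SVM, GMM-GFCC SVM
    return [q, 1.0 - q, q, 1.0 - q]

# Example: four classifiers ranking five enrolled speakers for one utterance.
rankings = [np.array([2, 0, 1, 4, 3]), np.array([0, 2, 1, 3, 4]),
            np.array([2, 1, 0, 4, 3]), np.array([2, 0, 4, 1, 3])]
weights = reliability_weights(snr_db=8.0, utterance_s=1.5)
print(weighted_borda_fusion(rankings, weights))
```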
Article
MobChain: Three-Way Collusion Resistance in Witness-Oriented Location Proof Systems Using Distributed Consensus
Sensors 2021, 21(15), 5096; https://doi.org/10.3390/s21155096 - 28 Jul 2021
Abstract
Smart devices have accentuated the importance of geolocation information. Geolocation identification using smart devices has paved the path for incentive-based location-based services (LBS). However, a user’s full control over a smart device can allow tampering of the location proof. Witness-oriented location proof systems [...] Read more.
Smart devices have accentuated the importance of geolocation information. Geolocation identification using smart devices has paved the way for incentive-based location-based services (LBS). However, a user's full control over a smart device allows tampering with location proofs. Witness-oriented location proof systems (LPS) have emerged to resist the generation of false proofs and mitigate collusion attacks. However, witness-oriented LPS are still susceptible to three-way collusion attacks involving the user, the location authority, and the witness. To overcome the threat of three-way collusion in existing schemes, we introduce a decentralized consensus protocol called MobChain. In this scheme, the witness and the location authority are selected through distributed consensus among the nodes of an underlying P2P network that maintains a private blockchain. The persistent provenance data stored on the blockchain provide strong security guarantees; as a result, forging and manipulating location proofs becomes impractical. MobChain provides a secure location provenance architecture that relies on decentralized decision making to select the protocol participants, thereby addressing the three-way collusion problem. Our prototype implementation and comparison with state-of-the-art solutions show that MobChain is computationally efficient and highly available while improving the security of LPS. Full article
(This article belongs to the Section Sensor Networks)
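To make the decentralized-selection idea concrete, here is a toy sketch, not the MobChain protocol itself: peers of the P2P network vote on the witness and the location authority, the plurality choice wins so that no single party can appoint a colluding participant, and the decision is appended to a hash-chained ledger standing in for the private blockchain. The identifiers and the voting rule are illustrative assumptions.

```python
import hashlib
import json
from collections import Counter

def plurality_select(votes):
    """Pick the candidate backed by the largest share of peer votes, so that no
    single party can unilaterally appoint a colluding witness or authority."""
    return Counter(votes).most_common(1)[0][0]

def append_block(chain, record):
    """Append a record whose hash chains to the previous block, giving
    tamper-evident provenance for the selection and the resulting proof."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(record, sort_keys=True)
    block = {"prev": prev_hash, "record": record,
             "hash": hashlib.sha256((prev_hash + body).encode()).hexdigest()}
    chain.append(block)
    return block

# Example: five peers vote on the participants for one location proof request.
chain = []
witness = plurality_select(["w3", "w1", "w3", "w2", "w3"])
authority = plurality_select(["a2", "a2", "a1", "a2", "a3"])
append_block(chain, {"request": "req-42", "witness": witness, "authority": authority})
print(chain[-1]["hash"])
```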
Article
A Performance Benchmark for Dedicated Short-Range Communications and LTE-Based Cellular-V2X in the Context of Vehicle-to-Infrastructure Communication and Urban Scenarios
Sensors 2021, 21(15), 5095; https://doi.org/10.3390/s21155095 - 28 Jul 2021
Abstract
For more than a decade, communication systems based on the IEEE 802.11p technology—often referred to as Dedicated Short-Range Communications (DSRC)—have been considered a de facto industry standard for Vehicle-to-Infrastructure (V2I) communication. The technology, however, is often criticized for its poor scalability, its suboptimal [...] Read more.
For more than a decade, communication systems based on the IEEE 802.11p technology, often referred to as Dedicated Short-Range Communications (DSRC), have been considered the de facto industry standard for Vehicle-to-Infrastructure (V2I) communication. The technology, however, is often criticized for its poor scalability, its suboptimal channel access method, and the need to install additional roadside infrastructure. In 3GPP Release 14, the functionality of existing cellular networks was extended to support V2X use cases in an attempt to address the well-known drawbacks of DSRC. In this paper, we present a comprehensive simulation study that benchmarks both technologies in a V2I communication context and an urban scenario. In particular, we compare DSRC, LTE in infrastructural mode (LTE-I), and LTE Device-to-Device (LTE-D2D) mode 3 in terms of the average end-to-end delay and Packet Delivery Ratio (PDR) under varying communication conditions, obtained by varying the communication perimeter, the message generation frequency, and the road traffic intensity. The results are put into the context of the networking and connectivity requirements of the most popular V2I C-ITS services. The simulation results indicate that only the DSRC technology is able to support the investigated V2I communication scenarios without major limitations, achieving an average end-to-end delay of less than 100 milliseconds and a PDR above 96% in all of the investigated simulation scenarios. LTE-I is applicable to most of the low-frequency V2I services within a limited communication perimeter (<600 m) and at lower traffic intensities (<1000 vehicles per hour), achieving a delay of less than 500 milliseconds and a PDR of up to 92%. LTE-D2D in mode 3 exhibits an excessive end-to-end delay (above 1000 milliseconds) and a PDR below 72%; thus, it is not suitable for the V2I services under consideration in a perimeter larger than 200 m. Moreover, LTE-D2D mode 3 is very sensitive to the distance between the transmitter and its serving eNodeB, which heavily affects the achieved PDR. Full article
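The two headline metrics of such a benchmark, Packet Delivery Ratio and average end-to-end delay, can be computed from simulation traces along the following lines; the trace format (per-packet send and receive timestamps) is an assumption made for illustration, not the format used in the paper.

```python
from statistics import mean

def v2i_metrics(sent, received):
    """Compute PDR and average end-to-end delay (ms) from simulation traces.

    sent     : dict mapping packet ID to transmission time in seconds.
    received : dict mapping packet ID to reception time in seconds.
    """
    delivered = [pid for pid in sent if pid in received]
    pdr = len(delivered) / len(sent) if sent else 0.0
    delays_ms = [(received[pid] - sent[pid]) * 1e3 for pid in delivered]
    return pdr, mean(delays_ms) if delays_ms else float("nan")

# Example trace: five periodic V2I messages, one of which is lost in transit.
sent = {1: 0.00, 2: 0.10, 3: 0.20, 4: 0.30, 5: 0.40}
received = {1: 0.03, 2: 0.14, 4: 0.38, 5: 0.46}
pdr, delay = v2i_metrics(sent, received)
print(f"PDR = {pdr:.0%}, mean end-to-end delay = {delay:.0f} ms")
```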
Communication
Aircraft Detection Using Phase-Sensitive Optical-Fiber OTDR
Sensors 2021, 21(15), 5094; https://doi.org/10.3390/s21155094 - 28 Jul 2021
Abstract
Aircraft detection plays a vital role in aviation management and safe operation in the aviation system. Phase-Sensitive Optical Time Domain Reflectometry (Φ-OTDR) technology is a prevailing sensing method in geophysics research, structure inspection, transportation detection, etc. Compared with existing video- or radio-based detection [...] Read more.
Aircraft detection plays a vital role in aviation management and safe operation in the aviation system. Phase-Sensitive Optical Time Domain Reflectometry (Φ-OTDR) is a prevailing sensing technology in geophysics research, structure inspection, transportation detection, etc. Compared with existing video- or radio-based detection methods, Φ-OTDR is cost-effective, suitable for long-distance detection, and resistant to severe weather conditions. We present a detection system based on Φ-OTDR technology and analyze the characteristics of the acoustic signal of aircraft. Rather than monitoring runways at the airport or detecting noise in the air, this study focuses on detecting the seismic vibration signal excited by aircraft sound. A Chebyshev filter is adopted to remove background noise and random noise from the original vibration signal, and the short-time Fourier transform is used for time-frequency analysis. The experimental results show that the seismic vibration signal excited by aircraft sound is mainly low-frequency, below 5 Hz. The time delay of the aircraft vibration signal at different locations along the optical fiber is recorded by the sensing system. The Doppler effect is also revealed by the time-frequency analysis: the frequency increases as the aircraft approaches and decreases as it moves away. Full article
(This article belongs to the Special Issue Distributed Optical Fiber Sensors: Applications and Technology)
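A minimal sketch of the processing chain described above is given below: a Chebyshev type-I low-pass filter isolates the sub-5 Hz seismic band, and an STFT provides the time-frequency map in which a Doppler-induced frequency shift would appear. The sampling rate, filter order, cutoff, and the synthetic test signal are assumptions made for illustration; they are not the paper's parameters.

```python
import numpy as np
from scipy.signal import cheby1, filtfilt, stft

FS = 1000  # assumed sampling rate (Hz) of one demodulated fiber channel

def denoise(trace, cutoff_hz=10.0, order=4, ripple_db=0.5):
    """Chebyshev type-I low-pass filter keeping the sub-5 Hz seismic band
    excited by aircraft sound while suppressing higher-frequency noise."""
    b, a = cheby1(order, ripple_db, cutoff_hz, btype="low", fs=FS)
    return filtfilt(b, a, trace)

# Synthetic stand-in for one channel: a 3 Hz vibration buried in noise.
t = np.arange(0.0, 20.0, 1.0 / FS)
raw = np.sin(2 * np.pi * 3.0 * t) + 0.5 * np.random.randn(t.size)

clean = denoise(raw)
freqs, times, Zxx = stft(clean, fs=FS, nperseg=2048)   # time-frequency map
ridge_hz = freqs[np.abs(Zxx).argmax(axis=0)]           # dominant frequency per frame
print(ridge_hz[:5])
```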