Search Results (14)

Search Parameters:
Keywords = hearables

17 pages, 6180 KB  
Article
Sound Localization with Hearables in Transparency Mode
by Sebastian A. Ausili, Nathan Erthal, Christopher Bennett and Hillary A. Snapp
Audiol. Res. 2025, 15(3), 48; https://doi.org/10.3390/audiolres15030048 - 25 Apr 2025
Cited by 1 | Viewed by 1879
Abstract
Background: Transparency mode in hearables aims to maintain environmental awareness by transmitting external sounds through built-in microphones and speakers. While technical assessments have documented acoustic alterations in these devices, their impact on spatial hearing abilities under realistic listening conditions remains unclear. This study aimed to evaluate how transparency mode affects sound localization performance with and without background noise. Methods: Ten normal-hearing adults completed sound localization tasks across azimuth (±90°) and elevation (±30°) with and without background noise. Performance was assessed with and without AirPods Pro in transparency mode. Sound localization performance was evaluated through linear regression analysis and mean absolute errors. Head-Related Transfer Function measurements quantified changes in binaural and spectral cues. Results: While interaural time differences were largely preserved, transparency mode introduced systematic alterations in level differences (up to 8 dB) and eliminated spectral cues above 5 kHz. These modifications resulted in increased localization errors, particularly for elevation perception and in noise. Mean absolute errors increased from 6.81° to 19.6° in azimuth and from 6.79° to 19.4° in elevation without background noise, with further degradation at lower SNRs (p < 0.05). Response times were affected by background noise (p < 0.001) but not by device use. Conclusions: Current transparency mode implementation significantly compromises spatial hearing abilities, particularly in noisy environments typical of everyday listening situations. These findings highlight the need for technological improvements in maintaining natural spatial cues through transparency mode, as current limitations may impact user safety and communication in real-world environments. Full article
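For readers who want to reproduce the evaluation metrics named above (linear regression of response versus target angle, and mean absolute error), a minimal NumPy sketch follows; the angles are invented placeholders, not data from the study:

```python
import numpy as np

# Hypothetical localization data: target azimuths (deg) and a listener's
# pointing responses (deg); in the study these come from the loudspeaker
# array and head-pointing task, not from this synthetic example.
targets = np.array([-90, -60, -30, 0, 30, 60, 90], dtype=float)
responses = np.array([-72, -55, -26, 3, 24, 49, 70], dtype=float)

# Linear regression of response on target: slope (gain) and intercept (bias)
gain, bias = np.polyfit(targets, responses, deg=1)

# Mean absolute localization error
mae = np.mean(np.abs(responses - targets))

print(f"gain={gain:.2f}, bias={bias:.1f} deg, MAE={mae:.1f} deg")
```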

15 pages, 1495 KB  
Article
Classification of Breathing Phase and Path with In-Ear Microphones
by Malahat H. K. Mehrban, Jérémie Voix and Rachel E. Bouserhal
Sensors 2024, 24(20), 6679; https://doi.org/10.3390/s24206679 - 17 Oct 2024
Cited by 3 | Viewed by 3086
Abstract
In recent years, the use of smart in-ear devices (hearables) for health monitoring has gained popularity. Previous research on in-ear breath monitoring with hearables uses signal processing techniques based on peak detection. Such techniques are greatly affected by movement artifacts and other challenging real-world conditions. In this study, we use an existing database of various breathing types captured using an in-ear microphone to classify breathing path and phase. Having a small dataset, we use XGBoost, a simple and fast classifier, to address three different classification challenges. We achieve an accuracy of 86.8% for a binary path classifier, 74.1% for a binary phase classifier, and 67.2% for a four-class path and phase classifier. Our path classifier outperforms existing algorithms in recall and F1, highlighting the reliability of our approach. This work demonstrates the feasibility of the use of hearables in continuous breath monitoring tasks with machine learning. Full article
(This article belongs to the Special Issue Sensors for Breathing Monitoring)
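A minimal sketch of the binary path classifier described above, assuming pre-computed per-breath feature vectors; the paper's actual features and labels are not given in the abstract, so both are placeholders here:

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

# X: per-breath feature vectors from in-ear audio (e.g., spectral features);
# y: 0 = nasal, 1 = oral breathing path. Both are synthetic placeholders.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))
y = rng.integers(0, 2, size=400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X_tr, y_tr)

pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred), "F1:", f1_score(y_te, pred))
```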

10 pages, 424 KB  
Article
Hearables: In-Ear Multimodal Data Fusion for Robust Heart Rate Estimation
by Marek Żyliński, Amir Nassibi, Edoardo Occhipinti, Adil Malik, Matteo Bermond, Harry J. Davies and Danilo P. Mandic
BioMedInformatics 2024, 4(2), 911-920; https://doi.org/10.3390/biomedinformatics4020051 - 1 Apr 2024
Cited by 9 | Viewed by 3212
Abstract
Background: Ambulatory heart rate (HR) monitors that acquire electrocardiogram (ECG) and/or photoplethysmogram (PPG) signals from the torso, wrists, or ears are notably less accurate in tasks associated with high levels of movement compared to clinical measurements. However, a reliable estimation of HR can be obtained through data fusion from different sensors. These methods are especially suitable for multimodal hearable devices, where heart rate can be tracked from different modalities, including electrical ECG, optical PPG, and sounds (heart tones). Combined information from different modalities can compensate for single-source limitations. Methods: In this paper, we evaluate the possible application of data fusion methods in hearables. We assess data fusion for heart rate estimation from simultaneous in-ear ECG and in-ear PPG, recorded on ten subjects while performing 5-min sitting and walking tasks. Results: Our findings show that data fusion methods provide a similar level of mean absolute error as the best single-source heart rate estimation but with much lower intra-subject variability, especially during walking activities. Conclusion: We conclude that data fusion methods provide more robust HR estimation than a single cardiovascular signal. These methods can enhance the performance of wearable devices, especially multimodal hearables, in heart rate tracking during physical activity. Full article
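The abstract does not specify which fusion rules were evaluated; one common, minimal approach is a signal-quality-weighted average of the per-modality heart-rate estimates, sketched below with made-up values:

```python
import numpy as np

def fuse_heart_rate(hr_estimates, qualities):
    """Combine per-modality HR estimates (bpm) with a quality-weighted mean.

    hr_estimates: HR values from, e.g., in-ear ECG, in-ear PPG, heart tones.
    qualities:    non-negative signal-quality weights for each modality.
    """
    hr = np.asarray(hr_estimates, dtype=float)
    w = np.asarray(qualities, dtype=float)
    if w.sum() <= 0:            # no usable modality: fall back to the median
        return float(np.median(hr))
    return float(np.sum(w * hr) / np.sum(w))

# Example: ECG, PPG, and heart-tone estimates during a walking task
print(fuse_heart_rate([96.0, 104.0, 99.0], [0.9, 0.3, 0.6]))
```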

36 pages, 21226 KB  
Article
Brain Wearables: Validation Toolkit for Ear-Level EEG Sensors
by Guilherme Correia, Michael J. Crosse and Alejandro Lopez Valdes
Sensors 2024, 24(4), 1226; https://doi.org/10.3390/s24041226 - 15 Feb 2024
Cited by 9 | Viewed by 6606
Abstract
EEG-enabled earbuds represent a promising frontier in brain activity monitoring beyond traditional laboratory testing. Their discrete form factor and proximity to the brain make them the ideal candidate for the first generation of discrete non-invasive brain–computer interfaces (BCIs). However, this new technology will require comprehensive characterization before we see widespread consumer and health-related usage. To address this need, we developed a validation toolkit that aims to facilitate and expand the assessment of ear-EEG devices. The first component of this toolkit is a desktop application (“EaR-P Lab”) that controls several EEG validation paradigms. This application uses the Lab Streaming Layer (LSL) protocol, making it compatible with most current EEG systems. The second element of the toolkit introduces an adaptation of the phantom evaluation concept to the domain of ear-EEGs. Specifically, it utilizes 3D scans of the test subjects’ ears to simulate typical EEG activity around and inside the ear, allowing for controlled assessment of different ear-EEG form factors and sensor configurations. Each of the EEG paradigms was validated using wet-electrode ear-EEG recordings and benchmarked against scalp-EEG measurements. The ear-EEG phantom was successful in acquiring performance metrics for hardware characterization, revealing differences in performance based on electrode location. This information was leveraged to optimize the electrode reference configuration, resulting in increased auditory steady-state response (ASSR) power. Through this work, an ear-EEG evaluation toolkit is made available with the intention to facilitate the systematic assessment of novel ear-EEG devices from hardware to neural signal acquisition. Full article
(This article belongs to the Special Issue Biomedical Electronics and Wearable Systems)
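EaR-P Lab communicates over the Lab Streaming Layer; a minimal pylsl sketch of how an ear-EEG acquisition process might publish samples to LSL is shown below. The stream name, channel count, and sampling rate are assumptions, not the toolkit's actual configuration:

```python
import time
import numpy as np
from pylsl import StreamInfo, StreamOutlet

# Assumed stream parameters: 8 ear-EEG channels at 250 Hz. The actual
# channel count, rate, and stream name used by EaR-P Lab may differ.
info = StreamInfo(name="ear_eeg", type="EEG", channel_count=8,
                  nominal_srate=250, channel_format="float32",
                  source_id="ear_eeg_demo")
outlet = StreamOutlet(info)

for _ in range(250 * 10):                  # stream 10 s of placeholder data
    sample = np.random.randn(8).tolist()   # stands in for real ADC samples
    outlet.push_sample(sample)
    time.sleep(1 / 250)
```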

12 pages, 2072 KB  
Article
A Continuously Worn Dual Temperature Sensor System for Accurate Monitoring of Core Body Temperature from the Ear Canal
by Kyle D. Olson, Parker O’Brien, Andy S. Lin, David A. Fabry, Steve Hanke and Mark J. Schroeder
Sensors 2023, 23(17), 7323; https://doi.org/10.3390/s23177323 - 22 Aug 2023
Cited by 5 | Viewed by 6375
Abstract
The objective of this work was to develop a temperature sensor system that accurately measures core body temperature from an ear-worn device. Two digital temperature sensors were embedded in a hearing aid shell along the thermal gradient of the ear canal to form a linear heat balance relationship. This relationship was used to determine best fit parameters for estimating body temperature. The predicted body temperatures resulted in intersubject limits of agreement (LOA) of ±0.49 °C over a range of physiologic and ambient temperatures without calibration. The newly developed hearing aid-based temperature sensor system can estimate core body temperature at an accuracy level equal to or better than many devices currently on the market. An accurate, continuously worn, temperature monitoring and tracking device may help provide early detection of illnesses, which could prove especially beneficial during pandemics and in the elderly demographic of hearing aid wearers. Full article
(This article belongs to the Special Issue Wearable and Unobtrusive Technologies for Healthcare Monitoring)
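The linear heat-balance relationship can be read as estimating core temperature from the inner sensor plus a term proportional to the gradient between the two sensors; the least-squares fit below assumes that general form, since the paper's exact parameterization is not given in the abstract, and uses placeholder temperatures:

```python
import numpy as np

# t1: sensor closer to the eardrum, t2: sensor nearer the canal entrance,
# t_ref: reference core temperature (all placeholders, in deg C).
t1 = np.array([36.2, 36.5, 36.1, 36.8, 36.4])
t2 = np.array([35.1, 35.6, 34.9, 36.0, 35.4])
t_ref = np.array([37.0, 37.1, 36.9, 37.3, 37.0])

# Fit T_core ~ T1 + k * (T1 - T2) + c by least squares
A = np.column_stack([t1 - t2, np.ones_like(t1)])
k, c = np.linalg.lstsq(A, t_ref - t1, rcond=None)[0]

t_core_est = t1 + k * (t1 - t2) + c
print("k =", round(k, 3), "c =", round(c, 3), "estimates:", t_core_est.round(2))
```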

24 pages, 2213 KB  
Review
The Principles of Hearable Photoplethysmography Analysis and Applications in Physiological Monitoring–A Review
by Khalida Azudin, Kok Beng Gan, Rosmina Jaafar and Mohd Hasni Ja’afar
Sensors 2023, 23(14), 6484; https://doi.org/10.3390/s23146484 - 18 Jul 2023
Cited by 14 | Viewed by 8378
Abstract
Not long ago, hearables paved the way for biosensing, fitness, and healthcare monitoring. Smart earbuds today are not only producing sound but also monitoring vital signs. Reliable determination of cardiovascular and pulmonary system information can explore the use of hearables for physiological monitoring. Recent research shows that photoplethysmography (PPG) signals not only contain details on oxygen saturation level (SPO2) but also carry more physiological information including pulse rate, respiration rate, blood pressure, and arterial-related information. The analysis of the PPG signal from the ear has proven to be reliable and accurate in the research setting. (1) Background: The present integrative review explores the existing literature on an in-ear PPG signal and its application. This review aims to identify the current technology and usage of in-ear PPG and existing evidence on in-ear PPG in physiological monitoring. This review also analyzes in-ear (PPG) measurement configuration and principle, waveform characteristics, processing technology, and feature extraction characteristics. (2) Methods: We performed a comprehensive search to discover relevant in-ear PPG articles published until December 2022. The following electronic databases: Institute of Electrical and Electronics Engineers (IEEE), ScienceDirect, Scopus, Web of Science, and PubMed were utilized to conduct the studies addressing the evidence of in-ear PPG in physiological monitoring. (3) Results: Fourteen studies were identified but nine studies were finalized. Eight studies were on different principles and configurations of hearable PPG, and eight studies were on processing technology and feature extraction and its evidence in in-ear physiological monitoring. We also highlighted the limitations and challenges of using in-ear PPG in physiological monitoring. (4) Conclusions: The available evidence has revealed the future of in-ear PPG in physiological monitoring. We have also analyzed the potential limitation and challenges that in-ear PPG will face in processing the signal. Full article
(This article belongs to the Special Issue Advances in Light- and Sound-Based Techniques in Biomedicine)

14 pages, 8451 KB  
Article
An In-Ear PPG-Based Blood Glucose Monitor: A Proof-of-Concept Study
by Ghena Hammour and Danilo P. Mandic
Sensors 2023, 23(6), 3319; https://doi.org/10.3390/s23063319 - 21 Mar 2023
Cited by 31 | Viewed by 19597
Abstract
Monitoring diabetes saves lives. To this end, we introduce a novel, unobtrusive, and readily deployable in-ear device for the continuous and non-invasive measurement of blood glucose levels (BGLs). The device is equipped with a low-cost commercially available pulse oximeter whose infrared wavelength (880 nm) is used for the acquisition of photoplethysmography (PPG). For rigor, we considered a full range of diabetic conditions (non-diabetic, pre-diabetic, type I diabetic, and type II diabetic). Recordings spanned nine different days, starting in the morning while fasting, up to a minimum of a two-hour period after eating a carbohydrate-rich breakfast. The BGLs from PPG were estimated using a suite of regression-based machine learning models, which were trained on characteristic features of PPG cycles pertaining to high and low BGLs. The analysis shows that, as desired, an average of 82% of the BGLs estimated from PPG lie in region A of the Clarke error grid (CEG) plot, with 100% of the estimated BGLs in the clinically acceptable CEG regions A and B. These results demonstrate the potential of the ear canal as a site for non-invasive blood glucose monitoring. Full article
(This article belongs to the Section Wearables)
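The abstract names a suite of regression-based models without detailing them; a minimal sketch with one generic regressor on hypothetical per-cycle PPG features illustrates the shape of such a pipeline:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical per-cycle PPG features (e.g., rise time, pulse width,
# amplitude ratios) and reference blood glucose levels in mg/dL.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))
y = 100 + 30 * rng.normal(size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

model = GradientBoostingRegressor(n_estimators=300, max_depth=3)
model.fit(X_tr, y_tr)

bgl_pred = model.predict(X_te)
print("MAE (mg/dL):", round(mean_absolute_error(y_te, bgl_pred), 1))
```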

19 pages, 10523 KB  
Article
Enabling Real-Time On-Chip Audio Super Resolution for Bone-Conduction Microphones
by Yuang Li, Yuntao Wang, Xin Liu, Yuanchun Shi, Shwetak Patel and Shao-Fu Shih
Sensors 2023, 23(1), 35; https://doi.org/10.3390/s23010035 - 20 Dec 2022
Cited by 13 | Viewed by 6916
Abstract
Voice communication using an air-conduction microphone in noisy environments suffers from the degradation of speech audibility. Bone-conduction microphones (BCM) are robust against ambient noises but suffer from limited effective bandwidth due to their sensing mechanism. Although existing audio super-resolution algorithms can recover the high-frequency loss to achieve high-fidelity audio, they require considerably more computational resources than is available in low-power hearable devices. This paper proposes the first-ever real-time on-chip speech audio super-resolution system for BCM. To accomplish this, we built and compared a series of lightweight audio super-resolution deep-learning models. Among all these models, ATS-UNet was the most cost-efficient because the proposed novel Audio Temporal Shift Module (ATSM) reduces the network’s dimensionality while maintaining sufficient temporal features from speech audio. Then, we quantized and deployed the ATS-UNet to low-end ARM micro-controller units for a real-time embedded prototype. The evaluation results show that our system achieved real-time inference speed on Cortex-M7 and higher quality compared with the baseline audio super-resolution method. Finally, we conducted a user study with ten experts and ten amateur listeners to evaluate our method’s effectiveness to human ears. Both groups perceived a significantly higher speech quality with our method when compared to the solutions with the original BCM or air-conduction microphone with cutting-edge noise-reduction algorithms. Full article
(This article belongs to the Special Issue Artificial Intelligence and Deep Learning in Sensors and Applications)
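The internals of the proposed ATSM are not described in the abstract; the generic temporal-shift idea it adapts, moving a fraction of feature channels one frame backward or forward so a per-frame layer sees temporal context, can be sketched as:

```python
import numpy as np

def temporal_shift(x, shift_div=4):
    """Shift a fraction of channels along time (zero-padded at the edges).

    x: feature map of shape (channels, time_frames).
    shift_div: 1/shift_div of channels shift back, another 1/shift_div forward.
    """
    c = x.shape[0] // shift_div
    out = np.zeros_like(x)
    out[:c, 1:] = x[:c, :-1]            # delayed copy (past context)
    out[c:2 * c, :-1] = x[c:2 * c, 1:]  # advanced copy (future context)
    out[2 * c:] = x[2 * c:]             # remaining channels untouched
    return out

feat = np.arange(24, dtype=float).reshape(4, 6)
print(temporal_shift(feat, shift_div=4))
```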

22 pages, 3509 KB  
Article
RSSI Fingerprint Height Based Empirical Model Prediction for Smart Indoor Localization
by Wilford Arigye, Qiaolin Pu, Mu Zhou, Waqas Khalid and Muhammad Junaid Tahir
Sensors 2022, 22(23), 9054; https://doi.org/10.3390/s22239054 - 22 Nov 2022
Cited by 15 | Viewed by 3859
Abstract
Advances in smart indoor living over the recent decade, such as home indoor localization and positioning, have created a significant need for low-cost localization systems based on freely available resources such as the Received Signal Strength Indicator (RSSI) provided by the dense deployment of Wireless Local Area Networks (WLAN). Off-the-shelf user equipment (UE), available at an affordable price across the globe, is well equipped to scan the radio access network for hearable signal strengths; in complex indoor environments, multiple signals can be received at a particular reference point with no consideration of the height of the transmitter or its possible broadcasting coverage. The most effective fingerprinting solutions require specialized labor, time-consuming site surveys, data training, and big-data analysis; in most cases, additional hardware requirements increase energy consumption and cost, and changes in the indoor environment strongly affect the fingerprint due to interference. This paper experimentally evaluates and proposes a novel technique for RSSI distance prediction that leverages transceiver height and Fresnel ranging in a complex indoor environment to better model the RSSI path loss at a particular Reference Point (RP) and time, which in turn contributes greatly to indoor localization. Experiments in complex indoor environments, a corridor and an office lab, conducted during work hours to ascertain real-life and real-time feasibility, show that the technique's accuracy is greatly improved in both settings, achieving lower average prediction errors at lower cost than the comparison prediction algorithms. Compared with conventional prediction techniques, for example for Access Point 1 (AP1), the proposed Height-dependent Path-Loss (HEM) model attains a confidence probability of 10.98% at 0 dBm error, higher than the 2.65% of the distance-dependent New Empirical path-loss Model (NEM), the 4.2% of the Multi-Wall path-loss Model (MWM), and the 0% of the conventional one-slope path-loss model (OSM). In online localization among the hearable APs, the proposed HEM fingerprint localization based on the HEM prediction model attains confidence probabilities of 31% at 3 m, 55% at 6 m, and 78% at 9 m, outperforming the NEM with 26%, 43%, and 62%, and the MWM with 23%, 43%, and 66%, respectively. The robustness of the HEM fingerprint, evaluated using diverse test samples predicted by the NEM and MWM models, indicates localization about 13% better than the comparison fingerprints. Full article
(This article belongs to the Special Issue Feature Papers in Navigation and Positioning)
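The exact HEM formulation is not reproduced in the abstract; the underlying idea, inverting a log-distance path-loss model for range and correcting the slant distance for the AP's mounting height, can be sketched with placeholder constants:

```python
import numpy as np

def rssi_to_horizontal_distance(rssi, rssi_d0, n, ap_height, d0=1.0):
    """Predict horizontal distance to an AP from RSSI.

    rssi:      measured RSSI at the reference point (dBm)
    rssi_d0:   RSSI at reference distance d0 (dBm)
    n:         path-loss exponent (environment dependent)
    ap_height: height of the AP above the receiver (m)
    """
    # Log-distance path-loss model inverted for the slant (3D) distance
    slant = d0 * 10 ** ((rssi_d0 - rssi) / (10.0 * n))
    # Remove the vertical component contributed by the AP mounting height
    return float(np.sqrt(max(slant**2 - ap_height**2, 0.0)))

print(rssi_to_horizontal_distance(rssi=-62, rssi_d0=-40, n=2.4, ap_height=2.5))
```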

14 pages, 2789 KB  
Article
The Magnitude of the Frequency Jitter of Acoustic Waves Generated by Wind Instruments Is of Relevance for the Live Performance of Music
by Alexander M. Rehm
Acoustics 2021, 3(2), 411-424; https://doi.org/10.3390/acoustics3020027 - 12 Jun 2021
Cited by 2 | Viewed by 5562
Abstract
It is shown that a gold-plated device mounted on a tenor saxophone, forming a small bridge between the mouthpiece and the S-bow, can change two characteristics of the radiated sound: (1) the radiated acoustic energy of the harmonics with emission maxima around 1500–3000 Hz, which is slightly reduced for tones played in the lower register of the saxophone; (2) the frequency jitter of all tones in the regular and upper register of the saxophone show a two-fold increase. Through simulated phase-shifted superimpositions of the recorded waves, it is shown that the cancellation of acoustic energy due to antiphase superimposition is significantly reduced in recordings with the bridge. Simulations with artificially generated acoustic waves confirm that acoustic waves with a certain systematic jitter show less cancelling of the acoustic energy under a phase-shifted superimposition, compared to acoustic waves with no frequency jitter; thus, being beneficial for live performances in small halls with minimal acoustic optimization. The data further indicate that the occasionally hearable “rumble” of a wind instrument orchestra with instruments showing slight differences in the frequency of the harmonics might be reduced (or avoided), if the radiated acoustic waves have a systematic jitter of a certain magnitude. Full article
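The cancellation experiment can be reproduced in a few lines: superimpose two nominally identical tones in antiphase, with and without frequency jitter, and compare the residual energy. The jitter magnitude below is illustrative, not the measured saxophone value:

```python
import numpy as np

fs, dur, f0 = 44_100, 1.0, 440.0
t = np.arange(int(fs * dur)) / fs
n_segments = 20                       # jitter re-drawn every 50 ms

def tone(jitter_cents, seed):
    rng = np.random.default_rng(seed)
    cents = np.repeat(rng.normal(0.0, jitter_cents, n_segments),
                      len(t) // n_segments)
    f_inst = f0 * 2.0 ** (cents / 1200.0)        # jittered instantaneous frequency
    return np.sin(2 * np.pi * np.cumsum(f_inst) / fs)

for jitter in (0.0, 10.0):                       # jitter magnitude in cents
    a, b = tone(jitter, seed=1), tone(jitter, seed=2)
    residual = a - b                             # antiphase superimposition
    level_db = 10 * np.log10(np.mean(residual**2) / np.mean(a**2) + 1e-12)
    print(f"jitter {jitter:4.1f} cents -> residual energy {level_db:6.1f} dB")
```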

12 pages, 893 KB  
Article
In-Ear SpO2: A Tool for Wearable, Unobtrusive Monitoring of Core Blood Oxygen Saturation
by Harry J. Davies, Ian Williams, Nicholas S. Peters and Danilo P. Mandic
Sensors 2020, 20(17), 4879; https://doi.org/10.3390/s20174879 - 28 Aug 2020
Cited by 80 | Viewed by 15426
Abstract
The non-invasive estimation of blood oxygen saturation (SpO2) by pulse oximetry is of vital importance clinically, from the detection of sleep apnea to the recent ambulatory monitoring of hypoxemia in the delayed post-infective phase of COVID-19. In this proof of concept study, we set out to establish the feasibility of SpO2 measurement from the ear canal as a convenient site for long-term monitoring, and perform a comprehensive comparison with the right index finger—the conventional clinical measurement site. During resting blood oxygen saturation estimation, we found a root mean square difference of 1.47% between the two measurement sites, with a mean difference of 0.23% higher SpO2 in the right ear canal. Using breath holds, we observe the known phenomenon of time delay between central circulation and peripheral circulation, with a mean delay between the ear and finger of 12.4 s across all subjects. Furthermore, we document the lower photoplethysmogram amplitude from the ear canal and suggest ways to mitigate this issue. In conjunction with the well-known robustness to temperature-induced vasoconstriction, this provides conclusive evidence that in-ear SpO2 monitoring is both convenient and superior to conventional finger measurement for continuous non-intrusive monitoring in both clinical and everyday-life settings. Full article
(This article belongs to the Section Wearables)
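The site-comparison metrics reported above (root-mean-square difference, mean bias, and ear-to-finger delay) can be computed from paired SpO2 traces as follows; the traces here are synthetic placeholders, not the study's recordings:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300                                   # one SpO2 estimate per second (assumed)
finger = 97 + rng.normal(0, 0.5, size=n)
ear = np.roll(finger, -12) + 0.23 + rng.normal(0, 0.3, size=n)  # ear responds ~12 s earlier

diff = ear - finger
print(f"RMS difference: {np.sqrt(np.mean(diff**2)):.2f} %SpO2")
print(f"Mean bias (ear - finger): {np.mean(diff):.2f} %SpO2")

# Delay estimate: lag (in seconds) at which the two traces correlate best
lags = np.arange(0, 31)
corrs = [np.corrcoef(ear[:n - L], finger[L:])[0, 1] for L in lags]
print(f"Finger trails ear by ~{lags[int(np.argmax(corrs))]} s")
```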

16 pages, 1486 KB  
Review
A Survey on the Affordances of “Hearables”
by Joseph Plazak and Marta Kersten-Oertel
Inventions 2018, 3(3), 48; https://doi.org/10.3390/inventions3030048 - 14 Jul 2018
Cited by 33 | Viewed by 15775
Abstract
Recent developments pertaining to ear-mounted wearable computer interfaces (i.e., “hearables”) offer a number of distinct affordances over other wearable devices in ambient and ubiquitous computing systems. This paper provides a survey of hearables and the possibilities that they offer as computer interfaces. Thereafter, these affordances are examined with respect to other wearable interfaces. Finally, several historical trends are noted within this domain, and multiple paths for future development are offered. Full article
(This article belongs to the Special Issue Frontiers in Wearable Devices)

21 pages, 1714 KB  
Article
Personalizing the Fitting of Hearing Aids by Learning Contextual Preferences From Internet of Things Data
by Benjamin Johansen, Michael Kai Petersen, Maciej Jan Korzepa, Jan Larsen, Niels Henrik Pontoppidan and Jakob Eg Larsen
Computers 2018, 7(1), 1; https://doi.org/10.3390/computers7010001 - 23 Dec 2017
Cited by 20 | Viewed by 8327
Abstract
The lack of individualized fitting of hearing aids results in many patients never getting the intended benefits, in turn causing the devices to be left unused in a drawer. However, living with an untreated hearing loss has been found to be one of the leading lifestyle related causes of dementia and cognitive decline. Taking a radically different approach to personalize the fitting process of hearing aids, by learning contextual preferences from user-generated data, we in this paper outline the results obtained through a 9-month pilot study. Empowering the user to select between several settings using Internet of things (IoT) connected hearing aids allows for modeling individual preferences and thereby identifying distinct coping strategies. These behavioral patterns indicate that users prefer to switch between highly contrasting aspects of omnidirectionality and noise reduction dependent on the context, rather than relying on the medium “one size fits all” program frequently provided by default in hearing health care. We argue that an IoT approach facilitated by the usage of smartphones may constitute a paradigm shift, enabling continuous personalization of settings dependent on the changing context. Furthermore, making the user an active part of the fitting solution based on self-tracking may increase engagement and awareness and thus improve the quality of life for hearing impaired users. Full article
(This article belongs to the Special Issue Quantified Self and Personal Informatics)

15 pages, 650 KB  
Article
On the Choice of Access Point Selection Criterion and Other Position Estimation Characteristics for WLAN-Based Indoor Positioning
by Elina Laitinen and Elena Simona Lohan
Sensors 2016, 16(5), 737; https://doi.org/10.3390/s16050737 - 20 May 2016
Cited by 28 | Viewed by 5737
Abstract
The positioning based on Wireless Local Area Networks (WLAN) is one of the most promising technologies for indoor location-based services, generally using the information carried by Received Signal Strengths (RSS). One challenge, however, is the huge amount of data in the radiomap database due to the enormous number of hearable Access Points (AP) that could make the positioning system very complex. This paper concentrates on WLAN-based indoor location by comparing fingerprinting, path loss and weighted centroid based positioning approaches in terms of complexity and performance and studying the effects of grid size and AP reduction with several choices for appropriate selection criterion. All results are based on real field measurements in three multi-floor buildings. We validate our earlier findings concerning several different AP selection criteria and conclude that the best results are obtained with a maximum RSS-based criterion, which also proved to be the most consistent among the different investigated approaches. We show that the weighted centroid based low-complexity method is very sensitive to AP reduction, while the path loss-based method is also very robust to high percentage removals. Indeed, for fingerprinting, 50% of the APs can be removed safely with a properly chosen removal criterion without increasing the positioning error much. Full article
(This article belongs to the Special Issue Scalable Localization in Wireless Sensor Networks)
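The maximum-RSS selection criterion and the weighted-centroid estimator mentioned above can be sketched compactly; the AP coordinates and RSS values are placeholders, not measurements from the study's buildings:

```python
import numpy as np

def weighted_centroid(ap_xy, rss_dbm, keep=5):
    """Estimate position as an RSS-weighted centroid of the strongest APs.

    ap_xy:   (N, 2) known AP coordinates in metres
    rss_dbm: (N,) RSS observed for each AP
    keep:    number of strongest APs retained (max-RSS selection criterion)
    """
    idx = np.argsort(rss_dbm)[-keep:]            # keep the strongest APs
    w = 10 ** (rss_dbm[idx] / 10.0)              # dBm -> linear power weights
    return (w[:, None] * ap_xy[idx]).sum(axis=0) / w.sum()

aps = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 15.0]])
rss = np.array([-48.0, -63.0, -55.0, -70.0, -76.0])
print(weighted_centroid(aps, rss, keep=3))
```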
