Search Results (11)

Search Parameters:
Keywords = RF microphone

21 pages, 2794 KiB  
Article
Medical Data over Sound—CardiaWhisper Concept
by Radovan Stojanović, Jovan Đurković, Mihailo Vukmirović, Blagoje Babić, Vesna Miranović and Andrej Škraba
Sensors 2025, 25(15), 4573; https://doi.org/10.3390/s25154573 - 24 Jul 2025
Viewed by 313
Abstract
Data over sound (DoS) is an established technique that has experienced a resurgence in recent years, finding applications in areas such as contactless payments, device pairing, authentication, presence detection, toys, and offline data transfer. This study introduces CardiaWhisper, a system that extends the DoS concept to the medical domain by using a medical data-over-sound (MDoS) framework. CardiaWhisper integrates wearable biomedical sensors with home care systems, edge or IoT gateways, and telemedical networks or cloud platforms. Using a transmitter device, vital signs such as ECG (electrocardiogram) signals, PPG (photoplethysmogram) signals, RR (respiratory rate), and ACC (acceleration/movement) are sensed, conditioned, encoded, and acoustically transmitted to a nearby receiver—typically a smartphone, tablet, or other gadget—and can be further relayed to edge and cloud infrastructures. As a case study, this paper presents the real-time transmission and processing of ECG signals. The transmitter integrates an ECG sensing module, an encoder (either a PLL-based FM modulator chip or a microcontroller), and a sound emitter in the form of a standard piezoelectric speaker. The receiver, in the form of a mobile phone, tablet, or desktop computer, captures the acoustic signal via its built-in microphone and executes software routines to decode the data. It then enables a range of control and visualization functions for both local and remote users. Emphasis is placed on describing the system architecture and its key components, as well as the software methodologies used for signal decoding on the receiver side, where several algorithms are implemented using open-source, platform-independent technologies, such as JavaScript, HTML, and CSS. While the main focus is on the transmission of analog data, digital data transmission is also illustrated. The CardiaWhisper system is evaluated across several performance parameters, including functionality, complexity, speed, noise immunity, power consumption, range, and cost-efficiency. Quantitative measurements of the signal-to-noise ratio (SNR) were performed in various realistic indoor scenarios, including different distances, obstacles, and noise environments. Preliminary results are presented, along with a discussion of design challenges, limitations, and feasible applications. Our experience demonstrates that CardiaWhisper provides a low-power, eco-friendly alternative to traditional RF or Bluetooth-based medical wearables in various applications. Full article
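
The receiver-side decoding described above lends itself to a compact illustration. The sketch below is a minimal stand-in, not the authors' JavaScript implementation: it FM-modulates a synthetic baseband onto an acoustic carrier and recovers it from the instantaneous frequency of the analytic signal. The carrier frequency, deviation, and sample rate are assumptions.

```python
# Hypothetical sketch: recover a slow "vital-sign" waveform from an
# FM-modulated acoustic carrier, standing in for the receiver-side
# decoding described in the abstract (not the authors' code).
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 44100                      # assumed audio sample rate (Hz)
fc = 8000.0                     # assumed acoustic carrier (Hz)
dev = 500.0                     # assumed frequency deviation (Hz)
t = np.arange(0, 2.0, 1 / fs)

# Stand-in "ECG" baseband: a 1.2 Hz pulse train instead of a real ECG trace.
baseband = (np.sin(2 * np.pi * 1.2 * t) > 0.95).astype(float)

# FM modulation (transmitter side, kept here so the demo is self-contained).
phase = 2 * np.pi * fc * t + 2 * np.pi * dev * np.cumsum(baseband) / fs
rx = np.cos(phase) + 0.01 * np.random.randn(t.size)   # mild acoustic noise

# FM demodulation: instantaneous frequency of the analytic signal.
analytic = hilbert(rx)
inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
recovered = (inst_freq - fc) / dev

# Low-pass filter to keep only the physiological band (< 40 Hz assumed).
b, a = butter(4, 40, btype="low", fs=fs)
recovered = filtfilt(b, a, recovered)
print("correlation with transmitted baseband:",
      np.corrcoef(recovered, baseband[:-1])[0, 1].round(3))
```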

17 pages, 1688 KiB  
Article
Modelling and Control of Longitudinal Vibrations in a Radio Frequency Cavity
by Mahsa Keikha, Jalal Taheri Kahnamouei and Mehrdad Moallem
Vibration 2024, 7(1), 129-145; https://doi.org/10.3390/vibration7010007 - 31 Jan 2024
Viewed by 1730
Abstract
Radio frequency (RF) cavities hold a crucial role in Electron Linear Accelerators, serving to provide precisely controlled accelerating fields. However, the susceptibility of these cavities to microphonic interference necessitates the development of effective controllers to mitigate vibration due to interference and disturbances. This paper undertakes an investigation into the modeling of RF cavities, treating them as cylindrical beams. To this end, a pseudo-rigid body model is employed to represent the translational vibration of the beam under various boundary conditions. The model is systematically analyzed using ANSYS software (from Ansys, Inc., Canonsburg, PA, USA, 2022). The study further delves into the controllability and observability of the proposed model, laying the foundation for the subsequent design of an observer-based controller geared towards suppressing longitudinal vibrations. The paper presents the design considerations and methodology for the controller. The performance of the proposed controller is evaluated via comprehensive simulations, providing valuable insights into its effectiveness in mitigating microphonic interference and enhancing the stability of RF cavities in Electron Linear Accelerators. Full article
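
As a generic illustration of the observer-based control idea (not the paper's pseudo-rigid body model), the sketch below places state-feedback and Luenberger-observer poles for a single lightly damped vibration mode; the modal frequency, damping, and pole locations are assumed values.

```python
# Minimal sketch of an observer-based vibration controller for one
# second-order mode (assumed parameters, not the cavity model in the paper).
import numpy as np
from scipy.signal import place_poles

wn, zeta = 2 * np.pi * 30.0, 0.002       # assumed mode: 30 Hz, light damping
A = np.array([[0.0, 1.0], [-wn**2, -2 * zeta * wn]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])                # only displacement is measured

K = place_poles(A, B, [-0.3 * wn, -0.4 * wn]).gain_matrix        # state feedback
L = place_poles(A.T, C.T, [-1.5 * wn, -2.0 * wn]).gain_matrix.T  # observer gain

x = np.array([[1e-3], [0.0]])             # true state (initial displacement)
xhat = np.zeros((2, 1))                   # observer state
dt, disp = 1e-4, []
for _ in range(20000):                    # 2 s forward-Euler simulation
    u = -K @ xhat                         # control from the estimated state
    y = C @ x
    x = x + dt * (A @ x + B @ u)
    xhat = xhat + dt * (A @ xhat + B @ u + L @ (y - C @ xhat))
    disp.append(float(x[0, 0]))
print("residual displacement after 2 s:", abs(disp[-1]))
```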

19 pages, 552 KiB  
Article
SitPAA: Sitting Posture and Action Recognition Using Acoustic Sensing
by Yanxu Qu, Wei Gao and Chao Liu
Electronics 2024, 13(1), 40; https://doi.org/10.3390/electronics13010040 - 20 Dec 2023
Cited by 2 | Viewed by 1822
Abstract
The technologies associated with recognizing human sitting posture and actions primarily involve computer vision, sensors, and radio frequency (RF) methods. These approaches often involve handling substantial amounts of data, pose privacy concerns, and necessitate additional hardware deployment. With the emergence of acoustic perception in recent times, acoustic schemes have demonstrated applicability in diverse scenarios, including action recognition, object recognition, and target tracking. In this paper, we introduce SitPAA, a sitting posture and action recognition method based on acoustic waves. Notably, our method utilizes only a single speaker and microphone on a smart device for signal transmission and reception. We have implemented multiple rounds of denoising on the received signal and introduced a new feature extraction technique. These extracted features are fed into static and dynamic-oriented networks to achieve precise classification of five distinct poses and four different actions. Additionally, we employ cross-domain recognition to enhance the universality of the classification results. Through extensive experimental validation, our method has demonstrated notable performance, achieving an average accuracy of 92.08% for posture recognition and 95.1% for action recognition. This underscores the effectiveness of our approach in providing robust and accurate results in the challenging domains of posture and action recognition. Full article
(This article belongs to the Section Computer Science & Engineering)
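
A minimal sketch of such an acoustic-sensing pipeline (band-pass denoising, spectrogram features, off-the-shelf classifier) is shown below on synthetic frames; the probe band and the random-forest classifier are assumptions, not the SitPAA networks.

```python
# Generic acoustic-sensing sketch: band-pass denoising + spectrogram
# features + a stock classifier, standing in for the SitPAA feature
# extractor and networks. All parameters and labels are placeholders.
import numpy as np
from scipy.signal import butter, filtfilt, stft
from sklearn.ensemble import RandomForestClassifier

fs, n_frames, n_classes = 48000, 200, 5          # assumed setup: 5 postures

def features(frame):
    b, a = butter(4, [17000, 21000], btype="band", fs=fs)  # assumed probe band
    clean = filtfilt(b, a, frame)                           # crude denoising
    _, _, Z = stft(clean, fs=fs, nperseg=1024)
    return np.log1p(np.abs(Z)).mean(axis=1)                 # mean log-spectrum

rng = np.random.default_rng(0)
X = np.stack([features(rng.standard_normal(fs // 10)) for _ in range(n_frames)])
y = rng.integers(0, n_classes, n_frames)          # placeholder posture labels

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print("feature matrix shape:", X.shape,
      "| predicted class for frame 0:", clf.predict(X[:1])[0])
```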

31 pages, 20979 KiB  
Article
Give Me a Sign: Using Data Gloves for Static Hand-Shape Recognition
by Philipp Achenbach, Sebastian Laux, Dennis Purdack, Philipp Niklas Müller and Stefan Göbel
Sensors 2023, 23(24), 9847; https://doi.org/10.3390/s23249847 - 15 Dec 2023
Cited by 11 | Viewed by 2241
Abstract
Human-to-human communication via the computer is mainly carried out using a keyboard or microphone. In the field of virtual reality (VR), where the most immersive experience possible is desired, the use of a keyboard contradicts this goal, while the use of a microphone is not always desirable (e.g., silent commands during task-force training) or simply not possible (e.g., if the user has hearing loss). Data gloves help to increase immersion within VR, as they correspond to our natural interaction. At the same time, they offer the possibility of accurately capturing hand shapes, such as those used in non-verbal communication (e.g., thumbs up, okay gesture, …) and in sign language. In this paper, we present a hand-shape recognition system using Manus Prime X data gloves, including data acquisition, data preprocessing, and data classification to enable non-verbal communication within VR. We investigate the impact on accuracy and classification time of using outlier detection and feature selection approaches in our data preprocessing. To obtain a more generalized approach, we also study the impact of artificial data augmentation, i.e., we create new artificial data from the recorded and filtered data to augment the training data set. With our approach, 56 different hand shapes could be distinguished with an accuracy of up to 93.28%. With a reduced number of 27 hand shapes, an accuracy of up to 95.55% could be achieved. The voting meta-classifier (VL2) proved to be the most accurate, albeit slowest, classifier. A good alternative is random forest (RF), which was even able to achieve better accuracy values in a few cases and was generally somewhat faster. Outlier detection proved to be an effective approach, especially in improving the classification time. Overall, we have shown that our hand-shape recognition system using data gloves is suitable for communication within VR. Full article
(This article belongs to the Special Issue Sensing Technology in Virtual Reality)
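
The preprocessing steps described above (outlier detection, feature selection) followed by a random-forest classifier map naturally onto a standard scikit-learn workflow; the sketch below uses IsolationForest and SelectKBest as stand-ins on synthetic glove features, not the paper's exact configuration.

```python
# Hypothetical stand-in for the glove pipeline: outlier filtering,
# feature selection, then a random-forest classifier on synthetic data.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 40))        # assumed 40 glove features per sample
y = rng.integers(0, 27, 500)              # 27 hand shapes, placeholder labels

# Drop training outliers before fitting (rough analogue of the paper's step).
mask = IsolationForest(random_state=0).fit_predict(X) == 1
X, y = X[mask], y[mask]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
selector = SelectKBest(f_classif, k=20).fit(X_tr, y_tr)    # keep 20 features
clf = RandomForestClassifier(n_estimators=200).fit(selector.transform(X_tr), y_tr)
print("held-out accuracy on synthetic data:",
      clf.score(selector.transform(X_te), y_te))
```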

15 pages, 2383 KiB  
Article
Sphygmomanometer Dynamic Pressure Measurement Using a Condenser Microphone
by Žan Tomazini, Gregor Geršak and Samo Beguš
Sensors 2023, 23(19), 8340; https://doi.org/10.3390/s23198340 - 9 Oct 2023
Viewed by 3024
Abstract
There is a worldwide need to reduce blood pressure (BP) measurement error in order to correctly diagnose hypertension. Cardiovascular diseases cause 17.9 million deaths annually and are a substantial monetary strain on healthcare. The current measurement uncertainty of 3 mmHg should be improved upon. Dynamic pressure measurement standards are lacking or non-existent. In this study, we propose a novel method of measuring air pressure inside the sphygmomanometer tubing during BP measurement using a condenser microphone. We designed, built, and tested a system that uses a radiofrequency (RF) modulation method to convert changes in capacitance of a condenser microphone into pressure signals. We tested the RF microphone with a low-frequency (LF) sound source, a BP simulator, and a piezoresistive pressure sensor as a reference. Necessary tests were conducted to assess the uncertainty budget of the system. The RF microphone prototype has a working frequency range from 0.5 Hz to 280 Hz in the pressure range from 0 to 300 mmHg. The total expanded uncertainty (k = 2, p = 95.5%) of the RF microphone was 4.32 mmHg. The proposed method could establish traceability of BP measuring devices to acoustic standards described in IEC 61094-2 and could also be used in forming dynamic BP standards. Full article
(This article belongs to the Section Electronic Sensors)
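
A compact sketch of the calibration step implied above, fitting the demodulated microphone output to a reference pressure sensor and reporting an expanded uncertainty with k = 2, is shown below. The data and the linear model are synthetic assumptions; the paper's uncertainty budget contains more components than this.

```python
# Minimal calibration sketch: map the demodulated microphone output to
# pressure using a reference sensor, then report an expanded uncertainty
# (k = 2). Data here are synthetic; only the type-A component is shown.
import numpy as np

rng = np.random.default_rng(2)
p_ref = np.linspace(0, 300, 61)                        # reference pressure, mmHg
mic_out = 0.02 * p_ref + 0.1 + rng.normal(0, 0.04, p_ref.size)   # assumed volts

# Linear calibration: volts -> mmHg.
slope, offset = np.polyfit(mic_out, p_ref, 1)
p_est = slope * mic_out + offset

residual_std = np.std(p_est - p_ref, ddof=2)           # type-A component only
print(f"expanded uncertainty U (k=2), type A only: {2 * residual_std:.2f} mmHg")
```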

24 pages, 1733 KiB  
Article
Radio Frequency Cavity’s Analytical Model and Control Design
by Mahsa Keikha, Jalal Taheri Kahnamouei and Mehrdad Moallem
Vibration 2023, 6(2), 319-342; https://doi.org/10.3390/vibration6020020 - 25 Mar 2023
Cited by 1 | Viewed by 1870
Abstract
Reduction or suppression of microphonic interference in radio frequency (RF) cavities, such as those used in Electron Linear Accelerators, is necessary to precisely control accelerating fields. In this paper, we investigate modeling the cavity as a cylindrical shell and present its free vibration analysis along with an appropriate control scheme to suppress vibrations. To this end, we first obtain an analytical mechanical dynamic model of a nine-cell cavity using a modified Fourier-Ritz method that provides a unified solution for cylindrical shell systems with general boundary conditions. The model is then verified against ANSYS software by comparing eigenfrequencies, which prove to be identical to those of the proposed model. We also present an active observer-based vibration control scheme to suppress the dominant mechanical modes of the cavity. The control system performance is investigated using simulations. Full article
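
The verification step, comparing analytical eigenfrequencies against ANSYS values, reduces to a per-mode relative-error check, sketched below with placeholder numbers rather than the nine-cell cavity results.

```python
# Illustrative check in the spirit of the paper's verification step:
# compare analytical eigenfrequencies with FEM values and report the
# relative error per mode. The numbers are placeholders, not cavity data.
import numpy as np

f_analytical = np.array([151.2, 163.8, 189.5, 221.4])   # Hz, placeholder
f_fem        = np.array([151.0, 164.1, 189.9, 220.8])   # Hz, placeholder

rel_err = np.abs(f_analytical - f_fem) / f_fem
for k, (fa, ff, e) in enumerate(zip(f_analytical, f_fem, rel_err), start=1):
    print(f"mode {k}: analytical {fa:7.1f} Hz  FEM {ff:7.1f} Hz  error {100*e:.2f}%")
```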

16 pages, 6095 KiB  
Article
Small UAS Online Audio DOA Estimation and Real-Time Identification Using Machine Learning
by Alexandros Kyritsis, Rodoula Makri and Nikolaos Uzunoglu
Sensors 2022, 22(22), 8659; https://doi.org/10.3390/s22228659 - 9 Nov 2022
Cited by 7 | Viewed by 3142
Abstract
The wide range of unmanned aerial system (UAS) applications has led to a substantial increase in their numbers, giving rise to a whole new area of systems aiming at detecting and/or mitigating their potentially unauthorized activities. The majority of these proposed solutions for countering the aforementioned actions (C-UAS) include radar/RF/EO/IR/acoustic sensors, usually working in coordination. This work introduces a small UAS (sUAS) acoustic detection system based on an array of microphones, easily deployable and with moderate cost. It continuously collects audio data and enables (a) the direction of arrival (DOA) estimation of the most prominent incoming acoustic signal by implementing a straightforward algorithmic process similar to triangulation and (b) identification, i.e., confirmation that the incoming acoustic signal actually emanates from a UAS, by exploiting sound spectrograms using machine-learning (ML) techniques. Extensive outdoor experimental sessions have validated this system’s efficacy for reliable UAS detection at distances exceeding 70 m. Full article
(This article belongs to the Special Issue Machine Learning and Signal Processing Based Acoustic Sensors)
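
A common building block for this kind of microphone-array DOA estimation is a GCC-PHAT time-difference-of-arrival estimate converted to a bearing; the sketch below illustrates it for a two-microphone pair with an assumed 10 cm spacing, not the system's actual array or algorithm.

```python
# Sketch of a two-microphone time-difference-of-arrival (TDOA) bearing
# estimate via GCC-PHAT, a standard building block for acoustic DOA
# estimation (array geometry and source here are assumptions).
import numpy as np

def gcc_phat(x, y, fs):
    """Return the delay (s) of y relative to x using GCC-PHAT."""
    n = 2 * max(len(x), len(y))
    X, Y = np.fft.rfft(x, n), np.fft.rfft(y, n)
    R = np.conj(X) * Y
    R /= np.abs(R) + 1e-12                        # PHAT weighting
    cc = np.fft.irfft(R, n)
    cc = np.concatenate((cc[-n // 2:], cc[:n // 2]))
    return (np.argmax(np.abs(cc)) - n // 2) / fs

fs, d, c = 48000, 0.10, 343.0                     # assumed: 10 cm mic spacing
t = np.arange(0, 0.1, 1 / fs)
src = np.random.randn(t.size)                     # broadband source
delay_true = d * np.sin(np.deg2rad(30)) / c       # 30 degree bearing
mic1 = src
mic2 = np.roll(src, int(round(delay_true * fs)))  # delayed copy at mic 2

tau = gcc_phat(mic1, mic2, fs)
theta = np.degrees(np.arcsin(np.clip(tau * c / d, -1, 1)))
print(f"estimated bearing: {theta:.1f} degrees")
```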

21 pages, 4548 KiB  
Article
Acoustic- and Radio-Frequency-Based Human Activity Recognition
by Masoud Mohtadifar, Michael Cheffena and Alireza Pourafzal
Sensors 2022, 22(9), 3125; https://doi.org/10.3390/s22093125 - 19 Apr 2022
Cited by 10 | Viewed by 4553
Abstract
In this work, a hybrid radio frequency (RF)- and acoustic-based activity recognition system was developed to demonstrate the advantage of combining two non-invasive sensors in Human Activity Recognition (HAR) systems and smart assisted living. We used a hybrid approach, employing RF and acoustic signals to recognize falling, walking, sitting on a chair, and standing up from a chair. To our knowledge, this is the first work that attempts to use a mixture of RF and passive acoustic signals for Human Activity Recognition purposes. We conducted experiments in a lab environment using a Vector Network Analyzer measuring the 2.4 GHz frequency band and a microphone array. After recording data, we extracted the Mel-spectrogram feature of the audio data and the Doppler shift feature of the RF measurements. We fed these features to six classification algorithms. Our results show that the hybrid acoustic- and radio-based method increases recognition accuracy compared to using only one kind of sensory data and suggest that the approach can be extended to a variety of other activities. We demonstrate that with the hybrid method, the recognition accuracy increases for all classification algorithms. Five of these classifiers achieve over 98% recognition accuracy. Full article
(This article belongs to the Section Communications)
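
The fusion idea, concatenating an acoustic Mel-spectrogram summary with an RF Doppler summary before classification, can be sketched as below on synthetic features; the feature dimensions and the single SVM classifier are assumptions, not the paper's six algorithms.

```python
# Toy illustration of early fusion: concatenate an acoustic Mel-spectrogram
# summary with an RF Doppler-shift summary and feed the joint vector to a
# classifier. All data are synthetic; shapes are assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n, n_classes = 240, 4                                  # fall / walk / sit / stand
y = rng.integers(0, n_classes, n)

# Each synthetic modality carries a weak class signature of its own.
class_means_a = 0.5 * rng.standard_normal((n_classes, 64))
class_means_r = 0.5 * rng.standard_normal((n_classes, 32))
acoustic_feat = class_means_a[y] + rng.standard_normal((n, 64))    # Mel summary
rf_doppler_feat = class_means_r[y] + rng.standard_normal((n, 32))  # Doppler bins

X_hybrid = np.hstack([acoustic_feat, rf_doppler_feat])             # early fusion
acc_acoustic = cross_val_score(SVC(), acoustic_feat, y, cv=5).mean()
acc_hybrid = cross_val_score(SVC(), X_hybrid, y, cv=5).mean()
print(f"acoustic only: {acc_acoustic:.2f}   hybrid: {acc_hybrid:.2f}")
```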

16 pages, 1055 KiB  
Article
Acoustic Dual-Function Communication and Echo-Location in Inaudible Band
by Gabriele Allegro, Alessio Fascista and Angelo Coluccia
Sensors 2022, 22(3), 1284; https://doi.org/10.3390/s22031284 - 8 Feb 2022
Cited by 9 | Viewed by 3650
Abstract
Acoustic communications are experiencing renewed interest as alternative solutions to traditional RF communications, not only in RF-denied environments (such as underwater) but also in areas where the electromagnetic (EM) spectrum is heavily shared among several wireless systems. By introducing additional dedicated channels, independent from the EM ones, acoustic systems can be used to ensure the continuity of some critical services such as communication, localization, detection, and sensing. In this paper, we design and implement a novel acoustic system that uses only low-cost off-the-shelf hardware and the transmission of a single, suitably designed signal in the inaudible band (18–22 kHz) to perform integrated sensing (ranging) and communication. The experimental testbed consists of a common home speaker transmitting acoustic signals to a smartphone, which receives them through the integrated microphone, and of an additional receiver exploiting the same signals to estimate distance information from a physical obstacle in the environment. The performance of the proposed dual-function system in terms of noise, data rate, and accuracy in distance estimation is experimentally evaluated in a real operational environment. Full article
(This article belongs to the Special Issue Acoustic Sensing Systems and Their Applications in Smart Environments)
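
The ranging half of such a dual-function system can be illustrated with a chirp in the 18–22 kHz band and a cross-correlation against the received echo, as in the sketch below; the obstacle distance, sample rate, and echo model are assumptions, not the paper's signal design.

```python
# Sketch of acoustic ranging: emit an 18-22 kHz chirp, cross-correlate the
# echo with the transmitted copy, and convert the delay to distance.
import numpy as np
from scipy.signal import chirp, correlate

fs, c = 48000, 343.0
t = np.arange(0, 0.02, 1 / fs)                       # 20 ms probe
tx = chirp(t, f0=18000, f1=22000, t1=t[-1])          # inaudible-band sweep

dist_true = 1.50                                     # assumed obstacle at 1.5 m
delay = int(round(2 * dist_true / c * fs))           # round-trip delay, samples
rx = np.zeros(t.size + delay)
rx[delay:] += 0.3 * tx                               # attenuated echo
rx += 0.05 * np.random.randn(rx.size)                # ambient noise

xcorr = correlate(rx, tx, mode="full")
lag = np.argmax(xcorr) - (tx.size - 1)               # lag of the echo peak
print(f"estimated distance: {lag / fs * c / 2:.2f} m")
```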

19 pages, 1455 KiB  
Article
Identification of Mobile Phones Using the Built-In Magnetometers Stimulated by Motion Patterns
by Gianmarco Baldini, Franc Dimc, Roman Kamnik, Gary Steri, Raimondo Giuliani and Claudio Gentile
Sensors 2017, 17(4), 783; https://doi.org/10.3390/s17040783 - 6 Apr 2017
Cited by 20 | Viewed by 6216
Abstract
We investigate the identification of mobile phones through their built-in magnetometers. These electronic components have started to be widely deployed in mass-market phones in recent years, and they can be exploited to uniquely identify mobile phones due to their physical differences, which appear in the digital output they generate. This is similar to approaches reported in the literature for other components of the mobile phone, including the digital camera, the microphones or the RF transmission components. In this paper, the identification is performed through an inexpensive device made up of a platform that rotates the mobile phone under test and a fixed magnet positioned on the edge of the rotating platform. When the mobile phone passes in front of the fixed magnet, the built-in magnetometer is stimulated, and its digital output is recorded and analyzed. For each mobile phone, the experiment is repeated over six different days to ensure consistency in the results. A total of 10 phones of different brands and models or of the same model were used in our experiment. The digital output from the magnetometers is synchronized and correlated, and statistical features are extracted to generate a fingerprint of the built-in magnetometer and, consequently, of the mobile phone. An SVM machine learning algorithm is used to classify the mobile phones on the basis of the extracted statistical features. Our results show that inter-model classification (i.e., classification across different models and brands) is possible with great accuracy, but intra-model classification (i.e., phones of the same model with different serial numbers) is more challenging, with the resulting accuracy being just slightly above random choice. Full article
(This article belongs to the Special Issue Magnetoelectric Heterostructures and Sensors)
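
The fingerprinting step, reducing each stimulated magnetometer trace to statistical features and classifying with an SVM, is sketched below on synthetic traces with per-device gain and bias quirks; the feature list and data are assumptions, not the paper's feature set.

```python
# Toy version of the fingerprinting step: reduce each magnetometer trace
# to a few statistical features and classify the phone with an SVM.
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def stat_features(trace):
    return [trace.mean(), trace.std(), skew(trace), kurtosis(trace),
            np.ptp(trace), np.percentile(trace, 90)]

rng = np.random.default_rng(4)
n_phones, reps = 10, 30
X, y = [], []
for phone in range(n_phones):
    bias, gain = rng.normal(0, 0.2), 1 + rng.normal(0, 0.05)  # per-device quirks
    for _ in range(reps):
        trace = gain * np.sin(np.linspace(0, 6 * np.pi, 500)) + bias \
                + 0.05 * rng.standard_normal(500)
        X.append(stat_features(trace))
        y.append(phone)

clf = make_pipeline(StandardScaler(), SVC())
print("cross-validated phone ID accuracy (synthetic):",
      cross_val_score(clf, np.array(X), np.array(y), cv=5).mean().round(2))
```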

12 pages, 6251 KiB  
Article
Automatic Taxonomic Classification of Fish Based on Their Acoustic Signals
by Juan J. Noda, Carlos M. Travieso and David Sánchez-Rodríguez
Appl. Sci. 2016, 6(12), 443; https://doi.org/10.3390/app6120443 - 17 Dec 2016
Cited by 34 | Viewed by 9481
Abstract
Fish as well as birds, mammals, insects and other animals are capable of emitting sounds for diverse purposes, which can be recorded through microphone sensors. Although fish vocalizations have been known for a long time, they have been poorly studied and applied in their taxonomic classification. This work presents a novel approach for automatic remote acoustic identification of fish through their acoustic signals by applying pattern recognition techniques. The sound signals are preprocessed and automatically segmented to extract each call from the background noise. Then, the calls are parameterized using Linear and Mel Frequency Cepstral Coefficients (LFCC and MFCC), Shannon Entropy (SE) and Syllable Length (SL), yielding useful information for the classification phase. In our experiments, 102 different fish species have been successfully identified with three widely used machine learning algorithms: K-Nearest Neighbors (KNN), Random Forest (RF) and Support Vector Machine (SVM). Experimental results show an average classification accuracy of 95.24%, 93.56% and 95.58%, respectively. Full article
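
The classification stage described above (cepstral coefficients plus Shannon entropy and syllable length, fed to KNN, RF, and SVM) can be outlined as below; the audio is synthetic noise standing in for real fish calls, and the exact feature settings are assumptions.

```python
# Rough outline of the classification stage: parameterize each segmented
# call with MFCCs plus Shannon entropy and syllable length, then train
# KNN / RF / SVM. Synthetic noise stands in for real recordings.
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def call_features(y, sr):
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    p = np.abs(np.fft.rfft(y)) ** 2
    p /= p.sum()
    entropy = -np.sum(p * np.log2(p + 1e-12))              # Shannon entropy
    return np.concatenate([mfcc, [entropy, len(y) / sr]])  # + syllable length

rng = np.random.default_rng(5)
sr, n_species, calls = 22050, 5, 40                        # small synthetic set
X = np.stack([call_features(rng.standard_normal(sr // 2), sr)
              for _ in range(n_species * calls)])
y = np.repeat(np.arange(n_species), calls)                 # placeholder labels

for name, clf in [("KNN", KNeighborsClassifier()),
                  ("RF", RandomForestClassifier(n_estimators=100)),
                  ("SVM", SVC())]:
    print(name, cross_val_score(clf, X, y, cv=5).mean().round(2))
```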
