Search Results (161)

Search Parameters:
Keywords = Wi-Fi channel state information

25 pages, 16941 KiB  
Article
KAN-Sense: Keypad Input Recognition via CSI Feature Clustering and KAN-Based Classifier
by Minseok Koo and Jaesung Park
Electronics 2025, 14(15), 2965; https://doi.org/10.3390/electronics14152965 - 24 Jul 2025
Viewed by 279
Abstract
Wi-Fi sensing leverages variations in CSI (channel state information) to infer human activities in a contactless and low-cost manner, with growing applications in smart homes, healthcare, and security. While deep learning has advanced macro-motion sensing tasks, micro-motion sensing such as keypad stroke recognition remains underexplored due to subtle inter-class CSI variations and significant intra-class variance. These challenges make it difficult for existing deep learning models, which typically rely on fully connected MLPs, to accurately recognize keypad inputs. To address this issue, we propose a novel approach that combines a discriminative feature extractor with a Kolmogorov–Arnold Network (KAN)-based classifier. The combined model is trained to reduce intra-class variability by clustering features around class-specific centers. The KAN classifier learns nonlinear spline functions to efficiently delineate the complex decision boundaries between different keypad inputs with fewer parameters. To validate our method, we collect a CSI dataset with low-cost Wi-Fi devices (ESP8266 and Raspberry Pi 4) in a real-world keypad sensing environment. Experimental results verify the effectiveness and practicality of our method for keypad input sensing: it outperforms existing approaches in sensing accuracy while requiring fewer parameters.
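The abstract's key mechanism, pulling features toward class-specific centers before classification, can be sketched with a simple center loss. This is an illustrative sketch, not the authors' code; the plain linear head stands in for the KAN spline classifier, and the names and sizes (CenterLoss, feat_dim, n_keys) are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CenterLoss(nn.Module):
    """Penalizes the distance between each feature and its learnable class center."""
    def __init__(self, n_keys: int, feat_dim: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_keys, feat_dim))

    def forward(self, feats, labels):
        return ((feats - self.centers[labels]) ** 2).sum(dim=1).mean()

n_keys, feat_dim = 12, 64                 # hypothetical 12-key keypad
extractor = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, feat_dim))
classifier = nn.Linear(feat_dim, n_keys)  # stand-in for the KAN (spline) classifier
center_loss = CenterLoss(n_keys, feat_dim)

csi = torch.randn(32, 256)                # flattened CSI amplitude features (batch of 32)
labels = torch.randint(0, n_keys, (32,))
feats = extractor(csi)
loss = F.cross_entropy(classifier(feats), labels) + 0.1 * center_loss(feats, labels)
loss.backward()
```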

18 pages, 9571 KiB  
Article
TCN-MAML: A TCN-Based Model with Model-Agnostic Meta-Learning for Cross-Subject Human Activity Recognition
by Chih-Yang Lin, Chia-Yu Lin, Yu-Tso Liu, Yi-Wei Chen, Hui-Fuang Ng and Timothy K. Shih
Sensors 2025, 25(13), 4216; https://doi.org/10.3390/s25134216 - 6 Jul 2025
Viewed by 338
Abstract
Human activity recognition (HAR) using Wi-Fi-based sensing has emerged as a powerful, non-intrusive solution for monitoring human behavior in smart environments. Unlike wearable sensor systems that require user compliance, Wi-Fi channel state information (CSI) enables device-free recognition by capturing variations in signal propagation caused by human motion. This makes Wi-Fi sensing highly attractive for ambient healthcare, security, and elderly care applications. However, real-world deployment faces two major challenges: (1) significant cross-subject signal variability due to physical and behavioral differences among individuals, and (2) limited labeled data, which restricts model generalization. To address these sensor-related challenges, we propose TCN-MAML, a novel framework that integrates temporal convolutional networks (TCN) with model-agnostic meta-learning (MAML) for efficient cross-subject adaptation in data-scarce conditions. We evaluate our approach on a public Wi-Fi CSI dataset using a strict cross-subject protocol, where training and testing subjects do not overlap. The proposed TCN-MAML achieves 99.6% accuracy, demonstrating superior generalization and efficiency over baseline methods. Experimental results confirm the framework’s suitability for low-power, real-time HAR systems embedded in IoT sensor networks.
(This article belongs to the Special Issue Sensors and Sensing Technologies for Object Detection and Recognition)
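As a rough illustration of the TCN half of TCN-MAML, the sketch below implements one dilated causal convolution block over a CSI time series; the MAML part would wrap such a network in an inner-loop/outer-loop adaptation procedure. Channel counts, the dilation, and the 90-subcarrier input are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TCNBlock(nn.Module):
    """One residual block of dilated causal 1-D convolutions."""
    def __init__(self, c_in, c_out, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation       # left padding keeps the block causal
        self.conv1 = nn.Conv1d(c_in, c_out, kernel_size, dilation=dilation)
        self.conv2 = nn.Conv1d(c_out, c_out, kernel_size, dilation=dilation)
        self.down = nn.Conv1d(c_in, c_out, 1) if c_in != c_out else nn.Identity()

    def forward(self, x):                             # x: (batch, channels, time)
        y = F.relu(self.conv1(F.pad(x, (self.pad, 0))))
        y = F.relu(self.conv2(F.pad(y, (self.pad, 0))))
        return F.relu(y + self.down(x))

csi = torch.randn(8, 90, 500)                         # 90 subcarriers, 500 time steps
print(TCNBlock(90, 64, dilation=2)(csi).shape)        # torch.Size([8, 64, 500])
```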

21 pages, 1204 KiB  
Article
Multi-Task Learning for Joint Indoor Localization and Blind Channel Estimation in OFDM Systems
by Maria Camila Molina, Iness Ahriz, Lounis Zerioul and Michel Terré
Sensors 2025, 25(13), 4095; https://doi.org/10.3390/s25134095 - 30 Jun 2025
Viewed by 393
Abstract
In contemporary wireless communication systems, achieving precise localization of communicating devices and accurate channel estimation is crucial for enhancing operational efficiency and reliability. This study introduces a novel approach that integrates the localization task and channel estimation into a single framework. We present a multi-task neural network architecture capable of simultaneously estimating channels from multiple base stations in a blind manner while estimating user terminal coordinates in given indoor environments. This approach exploits the relationship between channel characteristics and spatial information, using the same channel state information (CSI) data to perform both tasks with a single model. We evaluate the proposed solution, assessing its effectiveness across differing antenna spacing configurations and indoor test environments using both WiFi and 5G orthogonal frequency-division multiplexing (OFDM) systems. The results show clear benefits: channel estimation comparable to other studies is achieved while a localization estimate is provided simultaneously, reducing model overhead by leveraging spatial context. The presented system demonstrates potential to improve the efficiency of communication systems in real-world applications, aligning with the goals of emerging integrated sensing and communication (ISAC) systems. Results based on experimental data using the proposed solution show a 50th percentile localization error of 1.62 m for 3-tap channels and 0.89 m for 10-tap channels.
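A shared encoder with two output heads is the simplest reading of the joint design described above; the sketch below regresses channel taps and 2-D coordinates from the same received OFDM symbols. Layer sizes, the 64-subcarrier input, and the 10-tap channel are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskNet(nn.Module):
    def __init__(self, n_subcarriers=64, n_taps=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(2 * n_subcarriers, 256), nn.ReLU(),   # real + imaginary parts
            nn.Linear(256, 128), nn.ReLU())
        self.channel_head = nn.Linear(128, 2 * n_taps)      # complex taps as real/imag pairs
        self.position_head = nn.Linear(128, 2)              # (x, y) coordinates

    def forward(self, rx):
        z = self.encoder(rx)
        return self.channel_head(z), self.position_head(z)

net = MultiTaskNet()
h_hat, xy_hat = net(torch.randn(16, 128))
loss = F.mse_loss(h_hat, torch.randn(16, 20)) + F.mse_loss(xy_hat, torch.randn(16, 2))
loss.backward()                                             # joint multi-task update
```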

27 pages, 8848 KiB  
Article
Empirical Investigation on Practical Robustness of Keystroke Recognition Using WiFi Sensing for Future IoT Applications
by Haoming Wang, Aryan Sharma, Deepak Mishra, Aruna Seneviratne and Eliathamby Ambikairajah
Future Internet 2025, 17(7), 288; https://doi.org/10.3390/fi17070288 - 27 Jun 2025
Viewed by 252
Abstract
The widespread use of WiFi Internet-of-Things (IoT) devices has rendered them valuable tools for detecting information about the physical environment. Recent studies have demonstrated that WiFi Channel State Information (CSI) can detect physical events like movement, occupancy increases, and gestures. This paper empirically investigates the conditions under which WiFi sensing technology remains effective for keystroke detection. To achieve this timely goal of assessing whether it can raise any privacy concerns, experiments are conducted using commodity hardware to predict the accuracy of WiFi CSI in detecting keys pressed on a keyboard. Our novel results show that, in an ideal setting with a robotic arm, the position of a specific key can be predicted with 99% accuracy using a simple machine learning classifier. Furthermore, human finger localisation over a key and actual key-press recognition are also achieved, albeit with reduced accuracies of 94% and 89%, respectively. Moreover, our detailed investigation reveals that to ensure high accuracy, the gap distance between each test object must be substantial, while the size of the test group should be limited. Finally, we show that WiFi sensing technology has limitations in small-scale gesture recognition for generic settings where proper device positioning is crucial. Specifically, detecting keyed words achieves an overall accuracy of 94% for the forefinger and 87% for multiple fingers when only the right hand is used. Accuracy drops to 56% when using both hands. We conclude that WiFi sensing is effective in controlled indoor environments, but it has limitations due to the device location and the limited granularity of sensing objects.
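The "simple machine learning classifier" mentioned above can be approximated by summary statistics over CSI amplitude windows fed to an off-the-shelf classifier. This is only a hedged sketch of that kind of pipeline: the SVM, the chosen statistics, and the random placeholder data are assumptions, not the paper's actual features or measurements.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def window_features(csi_amp):                  # csi_amp: (windows, time, subcarriers)
    """Simple per-window statistics of CSI amplitude."""
    return np.concatenate([csi_amp.mean(axis=1),
                           csi_amp.std(axis=1),
                           csi_amp.max(axis=1) - csi_amp.min(axis=1)], axis=1)

X = window_features(np.abs(np.random.randn(600, 200, 52)))   # placeholder CSI windows
y = np.random.randint(0, 10, 600)                             # one label per key position
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```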

25 pages, 2711 KiB  
Article
Enhancing Multi-User Activity Recognition in an Indoor Environment with Augmented Wi-Fi Channel State Information and Transformer Architectures
by MD Irteeja Kobir, Pedro Machado, Ahmad Lotfi, Daniyal Haider and Isibor Kennedy Ihianle
Sensors 2025, 25(13), 3955; https://doi.org/10.3390/s25133955 - 25 Jun 2025
Viewed by 386
Abstract
Human Activity Recognition (HAR) is crucial for understanding human behaviour through sensor data, with applications in healthcare, smart environments, and surveillance. While traditional HAR often relies on ambient sensors, wearable devices or vision-based systems, these approaches can face limitations in dynamic settings and raise privacy concerns. Device-free HAR systems, which exploit the sensitivity of Wi-Fi Channel State Information (CSI) to human movements, have emerged as a promising privacy-preserving alternative for next-generation health activity monitoring and smart environments, particularly for multi-user scenarios. However, current research faces challenges such as the need for substantial annotated training data, class imbalance, and poor generalisability in complex, multi-user environments where labelled data is often scarce. This paper addresses these gaps by proposing a hybrid deep learning approach which integrates signal preprocessing, targeted data augmentation, and a customised integration of CNN and Transformer models, designed to address the challenges of multi-user recognition and data scarcity. A random transformation technique is employed to augment real CSI data, followed by hybrid feature extraction involving statistical, spectral, and entropy-based measures to derive suitable representations from temporal sensory input. Experimental results show that the proposed model outperforms several baselines in single-user and multi-user contexts. Our findings demonstrate that combining real and augmented data significantly improves model generalisation in scenarios with limited labelled data.
(This article belongs to the Special Issue Sensors and Data Analysis for Biomechanics and Physical Activity)
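The random-transformation augmentation can be pictured as small perturbations of real CSI windows; a minimal sketch is shown below, with jitter, amplitude scaling, and circular time shifting as assumed transformations and all parameter values invented for illustration.

```python
import numpy as np

def augment_csi(window, rng):                         # window: (time, subcarriers)
    """Return one randomly transformed copy of a CSI amplitude window."""
    w = window + rng.normal(0.0, 0.02, window.shape)  # additive noise (jitter)
    w = w * rng.uniform(0.9, 1.1)                     # global amplitude scaling
    w = np.roll(w, rng.integers(-10, 11), axis=0)     # small circular time shift
    return w

rng = np.random.default_rng(0)
real = np.abs(np.random.randn(500, 114))                # one real CSI amplitude window
synthetic = [augment_csi(real, rng) for _ in range(5)]  # five augmented variants
```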

29 pages, 2186 KiB  
Article
WiPIHT: A WiFi-Based Position-Independent Passive Indoor Human Tracking System
by Xu Xu, Xilong Che, Xianqiu Meng, Long Li, Ziqi Liu and Shuai Shao
Sensors 2025, 25(13), 3936; https://doi.org/10.3390/s25133936 - 24 Jun 2025
Viewed by 437
Abstract
Beyond traditional vision-based camera tracking, indoor human localization and activity trajectory recognition also employ methods such as infrared tracking, acoustic localization, and locators, but these methods have significant environmental limitations or depend on specialized equipment. WiFi-based human sensing is a novel and important method for human activity recognition. However, most WiFi-based activity recognition methods, such as those that identify human activities from WiFi fingerprints, have limitations: they either require extensive sample collection and training, are constrained by a fixed environmental layout, or rely on the precise positioning of transmitters (TXs) and receivers (RXs) within the space. If the positions are uncertain or change, the sensing performance becomes unstable. To address the dependency of current WiFi indoor human activity trajectory reconstruction on the TX-RX positions, we propose WiPIHT, a stable system for tracking indoor human activity trajectories using a small number of commercial WiFi devices. This system does not require additional hardware to be carried or locators to be attached, enabling passive, real-time, and accurate tracking and trajectory reconstruction of indoor human activities. WiPIHT is based on an innovative CSI channel analysis method, analyzing the channel's autocorrelation function to extract location-independent, real-time movement speed features of the human body. It also incorporates Fresnel zone analysis and motion velocity direction decomposition to extract movement direction change patterns independent of the relative position between the TX-RX pair and the human body. By combining real-time speed and direction curve features, the system derives the shape of the human movement trajectory. Experiments demonstrate that, compared to existing methods, our system can accurately reconstruct activity trajectory shapes even without knowing the initial positions of the TX or the human body. Additionally, our system shows significant advantages in tracking accuracy, real-time performance, equipment requirements, and cost.
(This article belongs to the Special Issue Recent Advances in Smart Mobile Sensing Technology)
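The position-independent speed feature comes from how quickly the CSI decorrelates over time; a minimal sketch of an amplitude autocorrelation function is given below. The sampling rate, window length, and the "first dip" heuristic are assumptions, not WiPIHT's exact estimator.

```python
import numpy as np

def csi_acf(amplitude, max_lag):
    """Normalized autocorrelation of a zero-mean CSI amplitude time series."""
    x = amplitude - amplitude.mean()
    full = np.correlate(x, x, mode="full")[len(x) - 1:]
    return full[:max_lag] / full[0]

fs = 200                                    # assumed CSI sampling rate (packets/s)
stream = np.abs(np.random.randn(2 * fs))    # 2 s of one subcarrier's amplitude
acf = csi_acf(stream, max_lag=fs // 2)
lag = int(np.argmin(acf))                   # faster motion -> earlier decorrelation
print("decorrelation lag:", lag / fs, "s")
```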

22 pages, 3671 KiB  
Article
SE-WiGR: A WiFi Gesture Recognition Approach Incorporating the Squeeze–Excitation Mechanism and VGG16
by Fenfang Li, Chujie Weng and Yongguang Liang
Appl. Sci. 2025, 15(11), 6346; https://doi.org/10.3390/app15116346 - 5 Jun 2025
Viewed by 406
Abstract
With advancements in IoT and smart home tech, WiFi-driven gesture recognition is attracting more focus due to its non-contact nature and user-friendly design. However, WiFi signals are affected by multipath effects, attenuation, and interference, resulting in complex and variable signal patterns that pose challenges for accurately modeling gesture characteristics. This study proposes SE-WiGR, an innovative WiFi gesture recognition method to address these challenges. First, channel state information (CSI) related to gesture actions is collected using commercial WiFi devices. Next, the data is preprocessed, and Doppler-shift image data is extracted as input for the network model. Finally, the method integrates the squeeze-and-excitation (SE) mechanism with the VGG16 network to classify gestures. The method achieves a recognition accuracy of 94.12% across multiple scenarios, outperforming the standalone VGG16 network by 4.13%. This improvement confirms that the SE module effectively enhances gesture feature extraction while suppressing background noise.
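The squeeze-and-excitation idea is a channel-reweighting block; the sketch below attaches one SE block to torchvision's VGG16 feature extractor. Where the paper places its SE modules, the reduction ratio, and the six-class head are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                        # x: (batch, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))          # squeeze: global average pooling
        return x * w[:, :, None, None]           # excite: per-channel reweighting

backbone = vgg16(weights=None)                   # Doppler-shift "images" as 3-channel input
model = nn.Sequential(backbone.features, SEBlock(512),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                      nn.Linear(512, 6))         # assumed 6 gesture classes
print(model(torch.randn(2, 3, 224, 224)).shape)  # torch.Size([2, 6])
```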

26 pages, 7006 KiB  
Article
Cross-Environment Device-Free Human Action Recognition via Wi-Fi Signals
by Sai Zhang, Yi Zhong, Haoge Jia, Xue Ding and Ting Jiang
Electronics 2025, 14(11), 2299; https://doi.org/10.3390/electronics14112299 - 5 Jun 2025
Viewed by 406
Abstract
Human action recognition (HAR) based on Wi-Fi signals has become a research hotspot due to its advantages of privacy protection, a comfortable experience, and a reliable recognition effect. However, the performance of existing Wi-Fi-based HAR systems is vulnerable to changes in environments and shows poor system generalization capabilities. In this paper, we propose a cross-environment HAR system (CHARS) based on the channel state information (CSI) of Wi-Fi signals for the recognition of human activities in different indoor environments. To achieve good performance for cross-environment HAR, a two-stage action recognition method is proposed. In the first stage, an HAR adversarial network is designed to extract robust action features independent of environments. Through the maximum–minimum learning scheme, the aim is to narrow the distribution gap between action features extracted from the source and the target (i.e., new) environments without using any label information from the target environment, which is beneficial for the generalization of the cross-environment HAR system. In the second stage, a self-training strategy is introduced to further extract action recognition information from the target environment and perform secondary optimization, enhancing the overall performance of the cross-environment HAR system. The results of experiments show that the proposed system achieves more reliable performance in target environments, demonstrating the generalization ability of the proposed CHARS to environmental changes.
(This article belongs to the Special Issue Advances in Wireless Communication for IoT)
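Adversarial feature alignment of the kind described in the first stage is often built from a gradient-reversal layer; the sketch below shows that generic construction, not CHARS itself. Layer sizes, class counts, and the single reversal coefficient are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None          # reversed gradient flows into the encoder

encoder = nn.Sequential(nn.Linear(270, 128), nn.ReLU())
action_head = nn.Linear(128, 6)                   # activity classes
domain_head = nn.Linear(128, 2)                   # source vs. target environment

x = torch.randn(32, 270)                          # a batch of CSI feature vectors
y_act = torch.randint(0, 6, (32,))
y_dom = torch.randint(0, 2, (32,))
z = encoder(x)
loss = F.cross_entropy(action_head(z), y_act) \
     + F.cross_entropy(domain_head(GradReverse.apply(z, 1.0)), y_dom)
loss.backward()
```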

32 pages, 4040 KiB  
Article
Self-Supervised WiFi-Based Identity Recognition in Multi-User Smart Environments
by Hamada Rizk and Ahmed Elmogy
Sensors 2025, 25(10), 3108; https://doi.org/10.3390/s25103108 - 14 May 2025
Cited by 1 | Viewed by 711
Abstract
The deployment of autonomous AI agents in smart environments has accelerated the need for accurate and privacy-preserving human identification. Traditional vision-based solutions, while effective in capturing spatial and contextual information, often face challenges related to high deployment costs, privacy concerns, and susceptibility to environmental variations. To address these limitations, we propose IdentiFi, a novel AI-driven human identification system that leverages WiFi-based wireless sensing and contrastive learning techniques. IdentiFi utilizes self-supervised and semi-supervised learning to extract robust, identity-specific representations from Channel State Information (CSI) data, effectively distinguishing between individuals even in dynamic, multi-occupant settings. The system’s temporal and contextual contrasting modules enhance its ability to model human motion and reduce multi-user interference, while class-aware contrastive learning minimizes the need for extensive labeled datasets. Extensive evaluations demonstrate that IdentiFi outperforms existing methods in terms of scalability, adaptability, and privacy preservation, making it highly suitable for AI agents in smart homes, healthcare facilities, security systems, and personalized services.
(This article belongs to the Special Issue Multi-Agent Sensors Systems and Their Applications)
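Contrastive self-supervision of the kind IdentiFi builds on can be sketched with a standard NT-Xent loss over two augmented views of the same CSI window; the encoder, augmentation, and temperature below are placeholders rather than the paper's modules.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.2):
    """SimCLR-style contrastive loss: each view's positive is its paired view."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)            # (2N, d)
    sim = z @ z.t() / tau
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

encoder = nn.Sequential(nn.Flatten(), nn.Linear(500 * 52, 128))
views = torch.randn(16, 500, 52)                           # 16 CSI windows
loss = nt_xent(encoder(views + 0.05 * torch.randn_like(views)),
               encoder(views + 0.05 * torch.randn_like(views)))
loss.backward()
```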

32 pages, 8687 KiB  
Article
Hybrid Deep Learning Methods for Human Activity Recognition and Localization in Outdoor Environments
by Yirga Yayeh Munaye, Metadel Addis, Yenework Belayneh, Atinkut Molla and Wasyihun Admass
Algorithms 2025, 18(4), 235; https://doi.org/10.3390/a18040235 - 18 Apr 2025
Viewed by 843
Abstract
Activity recognition and localization in outdoor environments involve identifying and tracking human movements using sensor data, computer vision, or deep learning techniques. This process is crucial for applications such as smart surveillance, autonomous systems, healthcare monitoring, and human–computer interaction. However, several challenges arise in outdoor settings, including varying lighting conditions, occlusions caused by obstacles, environmental noise, and the complexity of differentiating between similar activities. This study presents a hybrid deep learning approach that integrates human activity recognition and localization in outdoor environments using Wi-Fi signal data. The study focuses on applying the hybrid long short-term memory–bi-gated recurrent unit (LSTM-BIGRU) architecture, designed to enhance the accuracy of activity recognition and location estimation. Moreover, experiments were conducted using a real-world dataset collected with the PicoScene Wi-Fi sensing device, which captures both magnitude and phase information. The results demonstrated a significant improvement in accuracy for both activity recognition and localization tasks. To mitigate data scarcity, this study utilized the conditional tabular generative adversarial network (CTGAN) to generate synthetic channel state information (CSI) data. Additionally, carrier frequency offset (CFO) and cyclic shift delay (CSD) preprocessing techniques were implemented to mitigate phase fluctuations. The experiments were conducted in a line-of-sight (LoS) outdoor environment, where CSI data were collected using the PicoScene Wi-Fi sensor platform across four different activities at outdoor locations. Finally, a comparative analysis of the experimental results highlights the superior performance of the proposed hybrid LSTM-BIGRU model, achieving 99.81% and 98.93% accuracy for activity recognition and location prediction, respectively.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
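A minimal reading of the hybrid recurrent architecture is an LSTM layer feeding a bidirectional GRU with separate heads for the activity class and the 2-D location; the sketch below follows that reading, with all sizes assumed.

```python
import torch
import torch.nn as nn

class LSTMBiGRU(nn.Module):
    def __init__(self, n_sub=64, n_acts=4):
        super().__init__()
        self.lstm = nn.LSTM(n_sub, 128, batch_first=True)
        self.bigru = nn.GRU(128, 64, batch_first=True, bidirectional=True)
        self.act_head = nn.Linear(128, n_acts)     # activity recognition
        self.loc_head = nn.Linear(128, 2)          # location prediction (x, y)

    def forward(self, csi):                        # csi: (batch, time, subcarriers)
        h, _ = self.lstm(csi)
        h, _ = self.bigru(h)
        last = h[:, -1]                            # final step, both GRU directions
        return self.act_head(last), self.loc_head(last)

act_logits, loc = LSTMBiGRU()(torch.randn(8, 300, 64))
print(act_logits.shape, loc.shape)                 # torch.Size([8, 4]) torch.Size([8, 2])
```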

23 pages, 1297 KiB  
Article
Multi-Granularity and Multi-Modal Feature Fusion for Indoor Positioning
by Lijuan Ye, Yi Wang, Shenglei Pei, Yu Wang, Hong Zhao and Shi Dong
Symmetry 2025, 17(4), 597; https://doi.org/10.3390/sym17040597 - 15 Apr 2025
Viewed by 473
Abstract
Despite the widespread adoption of indoor positioning technology, the existing solutions still face significant challenges. On one hand, Wi-Fi-based positioning struggles to balance accuracy and efficiency in complex indoor environments and architectural layouts formed by pre-existing access points (APs). On the other hand, vision-based methods, while offering high-precision potential, are hindered by the prohibitive costs associated with the binocular camera systems required for depth image acquisition, limiting their large-scale deployment. Additionally, channel state information (CSI), containing multi-subcarrier data, maintains amplitude symmetry in ideal free-space conditions but becomes susceptible to periodic positioning errors in real environments due to multipath interference. Meanwhile, image-based positioning often suffers from spatial ambiguity in texture-repeated areas. To address these challenges, we propose a novel hybrid indoor positioning method that integrates multi-granularity and multi-modal features. By fusing CSI data with visual information, the system leverages spatial consistency constraints from images to mitigate CSI error fluctuations while utilizing CSI’s global stability to correct local ambiguities in image-based positioning. In the initial coarse-grained positioning phase, a neural network model is trained using image data to roughly localize indoor scenes. This model adeptly captures the geometric relationships within images, providing a foundation for more precise localization in subsequent stages. In the fine-grained positioning stage, CSI features from Wi-Fi signals and Scale-Invariant Feature Transform (SIFT) features from image data are fused, creating a rich feature-fusion fingerprint library that enables high-precision positioning. The experimental results show that our proposed method synergistically combines the strengths of Wi-Fi fingerprints and visual positioning, resulting in a substantial enhancement in positioning accuracy. Specifically, our approach achieves a positioning error within 0.4 m for 45% of test points and within 0.8 m for 67% of points. Overall, this approach charts a promising path forward for advancing indoor positioning technology.
(This article belongs to the Section Mathematics)
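The fine-grained stage amounts to matching a fused fingerprint (CSI features concatenated with image features) against a surveyed database; a toy nearest-neighbour version is sketched below, with random placeholder vectors standing in for real CSI and SIFT descriptors and k = 3 as an assumed choice.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def fused_fingerprint(csi_feat, img_feat):
    """Concatenate per-modality normalized features into one fingerprint vector."""
    csi = (csi_feat - csi_feat.mean()) / (csi_feat.std() + 1e-9)
    img = (img_feat - img_feat.mean()) / (img_feat.std() + 1e-9)
    return np.concatenate([csi, img])

# Placeholder survey database: 200 reference points with known (x, y) positions.
db = np.stack([fused_fingerprint(np.random.rand(52), np.random.rand(128))
               for _ in range(200)])
positions = np.random.rand(200, 2) * 10.0
locator = KNeighborsRegressor(n_neighbors=3).fit(db, positions)
query = fused_fingerprint(np.random.rand(52), np.random.rand(128))
print("estimated position:", locator.predict(query[None]))
```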

17 pages, 7673 KiB  
Article
Motion Pattern Recognition via CNN-LSTM-Attention Model Using Array-Based Wi-Fi CSI Sensors in GNSS-Denied Areas
by Ming Xia, Shengmao Que, Nanzhu Liu, Qu Wang and Tuan Li
Electronics 2025, 14(8), 1594; https://doi.org/10.3390/electronics14081594 - 15 Apr 2025
Cited by 1 | Viewed by 996
Abstract
Human activity recognition (HAR) is vital for applications in fields such as smart homes, health monitoring, and navigation, particularly in GNSS-denied environments where satellite signals are obstructed. Wi-Fi channel state information (CSI) has emerged as a key technology for HAR due to its wide coverage, low cost, and non-reliance on wearable devices. However, existing methods face challenges including significant data fluctuations, limited feature extraction capabilities, and difficulties in recognizing complex movements. This study presents a novel solution by integrating a multi-sensor array of Wi-Fi CSI with deep learning techniques to overcome these challenges. We propose a 2 × 2 array of Wi-Fi CSI sensors, which collects synchronized data from all channels within the CSI receivable range, improving data stability and providing reliable positioning in GNSS-denied environments. Using the CNN-LSTM-attention (C-L-A) framework, this method combines short- and long-term motion features, enhancing recognition accuracy. Experimental results show 98.2% accuracy, demonstrating superior recognition performance compared to single Wi-Fi receivers and traditional deep learning models. Our multi-sensor Wi-Fi CSI and deep learning approach significantly improves HAR accuracy, generalization, and adaptability, making it an ideal solution for GNSS-denied environments in applications such as autonomous navigation and smart cities.
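The C-L-A pipeline can be read as a 1-D CNN over the sensor-array channels, an LSTM over time, and attention pooling before the classifier; the sketch below follows that reading, flattening the 2 × 2 array into the channel dimension, with all sizes assumed.

```python
import torch
import torch.nn as nn

class CLA(nn.Module):
    def __init__(self, n_ch=4 * 52, n_classes=6):   # 4 sensors x 52 subcarriers (assumed)
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv1d(n_ch, 64, 5, padding=2), nn.ReLU(),
                                 nn.Conv1d(64, 64, 5, padding=2), nn.ReLU())
        self.lstm = nn.LSTM(64, 64, batch_first=True)
        self.attn = nn.Linear(64, 1)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                                # x: (batch, channels, time)
        h, _ = self.lstm(self.cnn(x).transpose(1, 2))    # (batch, time, 64)
        w = torch.softmax(self.attn(h), dim=1)           # attention over time steps
        return self.head((w * h).sum(dim=1))

print(CLA()(torch.randn(2, 4 * 52, 400)).shape)          # torch.Size([2, 6])
```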

57 pages, 8107 KiB  
Review
Machine Learning for Human Activity Recognition: State-of-the-Art Techniques and Emerging Trends
by Md Amran Hossen and Pg Emeroylariffion Abas
J. Imaging 2025, 11(3), 91; https://doi.org/10.3390/jimaging11030091 - 20 Mar 2025
Cited by 2 | Viewed by 4240
Abstract
Human activity recognition (HAR) has emerged as a transformative field with widespread applications, leveraging diverse sensor modalities to accurately identify and classify human activities. This paper provides a comprehensive review of HAR techniques, focusing on the integration of sensor-based, vision-based, and hybrid methodologies. It explores the strengths and limitations of commonly used modalities, such as RGB images/videos, depth sensors, motion capture systems, wearable devices, and emerging technologies like radar and Wi-Fi channel state information. The review also discusses traditional machine learning approaches, including supervised and unsupervised learning, alongside cutting-edge advancements in deep learning, such as convolutional and recurrent neural networks, attention mechanisms, and reinforcement learning frameworks. Despite significant progress, HAR still faces critical challenges, including handling environmental variability, ensuring model interpretability, and achieving high recognition accuracy in complex, real-world scenarios. Future research directions emphasise the need for improved multimodal sensor fusion, adaptive and personalised models, and the integration of edge computing for real-time analysis. Additionally, addressing ethical considerations, such as privacy and algorithmic fairness, remains a priority as HAR systems become more pervasive. This study highlights the evolving landscape of HAR and outlines strategies for future advancements that can enhance the reliability and applicability of HAR technologies in diverse domains.

32 pages, 2840 KiB  
Article
Multi-Feature Unsupervised Domain Adaptation (M-FUDA) Applied to Cross Unaligned Domain-Specific Distributions in Device-Free Human Activity Classification
by Muhammad Hassan and Tom Kelsey
Sensors 2025, 25(6), 1876; https://doi.org/10.3390/s25061876 - 18 Mar 2025
Viewed by 881
Abstract
Human–computer interaction (HCI) drives innovation by bridging humans and technology, with human activity recognition (HAR) playing a key role. Traditional HAR systems require user cooperation and infrastructure, raising privacy concerns. In recent years, Wi-Fi devices have leveraged channel state information (CSI) to decode human movements without additional infrastructure, preserving privacy. However, these systems struggle with unseen users, new environments, and scalability, thereby limiting real-world applications. Recent research has also demonstrated that the impact of surroundings causes dissimilar variations in the channel state information at different times of the day. In this paper, we propose an unsupervised multi-source domain adaptation technique that addresses these challenges. By aligning diverse data distributions with target domain variations (e.g., new users, environments, or atmospheric conditions), the method enhances system adaptability by leveraging public datasets with varying domain samples. Experiments on three public CSI datasets, using a preprocessing module to convert CSI into image-like formats, demonstrate significant improvements over baseline methods, with an average micro-F1 score of 81% for cross-user, 76% for cross-user and cross-environment, and 73% for cross-atmospheric tasks. The approach proves effective for scalable, device-free sensing in realistic cross-domain HAR scenarios.
(This article belongs to the Special Issue Advances in Wireless Sensor and Mobile Networks)
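The "preprocessing module to convert CSI into image-like formats" can be pictured as scaling a CSI window and resizing it to a fixed spatial size; the sketch below is one such conversion, with the 224 × 224 target and 3-channel replication as assumptions.

```python
import numpy as np
import torch
import torch.nn.functional as F

def csi_to_image(csi_amp, size=224):                 # csi_amp: (time, subcarriers)
    """Min-max scale a CSI amplitude window and resize it like an image."""
    x = torch.tensor(csi_amp, dtype=torch.float32)
    x = (x - x.min()) / (x.max() - x.min() + 1e-9)
    x = F.interpolate(x[None, None], size=(size, size),
                      mode="bilinear", align_corners=False)
    return x[0].repeat(3, 1, 1)                      # replicate into 3 "RGB" channels

img = csi_to_image(np.abs(np.random.randn(500, 90)))
print(img.shape)                                     # torch.Size([3, 224, 224])
```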

24 pages, 3166 KiB  
Article
Human Activity Recognition Through Augmented WiFi CSI Signals by Lightweight Attention-GRU
by Hari Kang, Donghyun Kim and Kar-Ann Toh
Sensors 2025, 25(5), 1547; https://doi.org/10.3390/s25051547 - 2 Mar 2025
Cited by 1 | Viewed by 2115
Abstract
In this study, we investigate human activity recognition (HAR) using WiFi channel state information (CSI) signals, employing a single-layer gated recurrent unit (GRU) with an attention module. To overcome the limitations of existing state-of-the-art (SOTA) models, which, despite their good performance, have substantial model sizes, we propose a lightweight model that incorporates data augmentation and pruning techniques. Our primary goal is to maintain high performance while significantly reducing model complexity. The proposed method demonstrates promising results across four different datasets, in particular achieving an accuracy of about 98.92%, outperforming an SOTA model on the ARIL dataset while reducing the model size from 252.10 M to 0.0578 M parameters. Additionally, our method achieves a reduction in computational cost from 18.06 GFLOPs to 0.01 GFLOPs for the same dataset, making it highly suitable for practical HAR applications.
(This article belongs to the Special Issue Smart Sensing Technology for Human Activity Recognition)
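A single-layer GRU with attention pooling, plus magnitude pruning of one layer via torch.nn.utils.prune, captures the lightweight design described above in miniature; the sizes and the 30% pruning ratio are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class AttnGRU(nn.Module):
    def __init__(self, n_sub=52, hidden=64, n_classes=6):
        super().__init__()
        self.gru = nn.GRU(n_sub, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, csi):                          # csi: (batch, time, subcarriers)
        h, _ = self.gru(csi)
        w = torch.softmax(self.attn(h), dim=1)       # attention weights over time
        return self.head((w * h).sum(dim=1))

model = AttnGRU()
prune.l1_unstructured(model.head, name="weight", amount=0.3)   # sparsify the classifier head
print(model(torch.randn(4, 300, 52)).shape)                    # torch.Size([4, 6])
```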
