Article

IQ-Data-Based WiFi Signal Classification Algorithm Using the Choi-Williams and Margenau-Hill-Spectrogram Features: A Case in Human Activity Recognition

Yier Lin and Fan Yang
1 Post-Doctoral Station, CRRC Corporation Limited, Beijing 100070, China
2 School of Electronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
3 College of Physics and Electronic Information, Inner Mongolia Normal University, Hohhot 010028, China
4 Shenzhen Research Institute, University of Electronic Science and Technology of China, Shenzhen 518063, China
* Authors to whom correspondence should be addressed.
Electronics 2021, 10(19), 2368; https://doi.org/10.3390/electronics10192368
Submission received: 2 September 2021 / Revised: 22 September 2021 / Accepted: 25 September 2021 / Published: 28 September 2021
(This article belongs to the Special Issue Human Activity Recognition and Machine Learning)

Abstract

This paper presents a novel approach that applies WiFi-based IQ data and time–frequency images to classify human activities automatically and accurately. The proposed strategy first uses the Choi–Williams distribution transform and the Margenau–Hill spectrogram transform to obtain the time–frequency images, followed by offset and principal component analysis (PCA) feature extraction. The offset features were extracted from the IQ data and from several spectra with maximum energy values in the time domain, and the PCA features were extracted from the whole images and from several image slices with rich unit information. Finally, a traditional supervised learning classifier was used to label the various activities. The proposed method was validated on twelve thousand experimental samples from four categories of WiFi signals. The results showed that our method was more robust to varying numbers of image slices or PCA values over the measured dataset. Our method with the random forest (RF) classifier surpassed the variants with alternative classifiers on classification performance and finally obtained a 91.78% average sensitivity, 91.74% average precision, 91.73% average F1-score, 97.26% average specificity, and 95.89% average accuracy.

1. Introduction

With the development of life rescue technology, especially detection technology for earthquake survivors and outdoor sports victims, recognizing human activities behind obstacles such as walls and debris has become a critical direction in life detection [1]. An essential characteristic of microwaves is their weak diffraction ability and almost linear propagation. The carrier frequency determines the ability of a microwave signal to pass through a wall. Microwaves with a 2–4 GHz carrier frequency penetrate concrete walls well [2]. In this frequency range, the power is low and will not harm the human body. According to the IEEE 802.11 standard, WiFi signals use a 2.4–2.4835 GHz carrier frequency [3], so WiFi signals can pass through walls [4]. Wireless behavior recognition based on WiFi is realized by detecting the characteristics of the WiFi signals reflected by the human body [5]. Using WiFi signals to carry out nonvisual behavior recognition therefore has substantial research and application value in life rescue.
Some specific problems limit the application of WiFi to nonvisual behavior recognition, such as cochannel interference, anisotropic wireless propagation, and data traffic jams in WiFi networks. Reference [6] showed that WiFi network interference can degrade radar performance by increasing the probability of false alarms. Besides cochannel interference, anisotropic wireless propagation also has adverse effects on the performance: a WiFi device with a link-centric architecture creates an anisotropic wireless propagation environment even if the underlying devices are all equipped with omnidirectional antennas [7]. Moreover, due to data traffic jams in WiFi networks, the beacon signal interval is difficult to manipulate [8]. The important information in the reflected WiFi signals received by the spectrum/signal analyzer is therefore easily lost, and classification based on such weak-information data is likely to perform poorly.

1.1. Article Contribution

To improve the classification performance based on weak information, in this article, we propose a novel classification algorithm based on time–frequency features of WiFi signals. Our method applies the Choi–Williams distribution [9] and the Margenau–Hill spectrogram distribution [10] time–frequency analyses to obtain images of the signals. The classification features include offset parameters and principal component analysis (PCA) values. Our approach uses the frame energy to select the central time frames of the spectra and image slices. The offset parameters are calculated from the IQ data and from several spectra with maximum energy values in the time domain, and the PCA values are calculated from the whole images and from several image slices with rich unit information. This strategy is likely to avoid the weak unit information of the whole time–frequency image, because the unit information of the image slices is rich compared with that of the entire image. Hence, our method, which combines features from the entire signal with features from the slices, is likely to boost the classification performance.

1.2. Symbols and Article Organization

In this paper, scalars are denoted by lowercase letters, e.g., x, whereas vectors are denoted by bold lowercase letters, x. Matrices are denoted by bold uppercase letters, X. Furthermore, = denotes the equal operator, and (·)* and E(·) denote the conjugate operator and the expectation operator, respectively.
The remainder of this paper is structured as follows. Section 2 introduces the related works. Section 3 describes the details of our method. Section 4 describes the experimental environment and the recording process of the measurement data. Our algorithm performances are illustrated with numerical results from the human activity classification in Section 5. Finally, conclusions are drawn in Section 6.

2. Related Work

In the past decade, many works in the literature have been devoted to researching the characteristics of WiFi signals. These characteristics include reconstructed images [4,11,12], channel state information (CSI) [7,13,14,15,16,17,18,19,20,21,22,23], and the amplitude and phase information on the I-axis and Q-axis of the Cartesian coordinate system (also known as IQ data) [6,24,25].
In [4], the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) proposed a transparent wall technology with a low bandwidth, low power consumption, a compact structure, and accessibility by nonmilitary entities. This technology uses the 2.4 GHz WiFi signal in the Industrial, Scientific, and Medical band and eliminates reflections from static objects, including walls. The CSAIL used a radio frequency capture device to capture the wireless signal behind a wall or occlusion and reconstructed the human image by analyzing the reflected wireless signal [11]. To realize static target positioning by sensing the micromotion caused by the target’s breathing, Reference [12] proposed a multiperson positioning system based on the wireless signal in a complex environment.
Reference [7] designed a scheme with a ubiquitously deployed WiFi infrastructure and evaluated it in typical multipath-rich indoor scenarios using CSI data. In [13,14], Y. Zeng et al. applied the WiFi CSI to classify shopper status and recognize the gait of people 2–3 m away. Wang et al. used the CSI to identify human gait [15] and proposed a human activity recognition and monitoring system to quantify the relationship between human movement speed and human activity [16]. M. I. Khan et al. used the CSI to track vital signs and remove any outliers from the gathered data [17]. Yuan et al. applied the CSI to extract features for device-free human activity recognition in b5G wireless communication [18]. Sharma et al. used the CSI to train convolutional neural networks and classify human activities in different places [19]. The device-free system in [20] investigated drivers’ activities as a multiclass classification problem leveraging the CSI of the WiFi signals for better discrimination of in-vehicle activities. Reference [21] developed a human activity recognition system via the CSI, and Reference [22] extracted various time and frequency domain features via the system. Furthermore, Reference [23] recognized human activities based on environment-independent fingerprints extracted from the CSI.
Reference [24] used 2.4 GHz bistatic passive radar to detect a moving human target behind a wall and obtained the range and Doppler information. In [6], A. D. Singh et al. studied the cochannel interference between WiFi and through-wall micro-Doppler radar based on the features of indoor walking at a frequency of 2.462 GHz. Our previous work [25] classified human activity via the PCA features of the short-time Fourier transform time–frequency image of samples.

3. Methodology

This section includes three subsections. The first one introduces the time–frequency methods used in this article. The second one displays the classification features extracted from the image. The last one presents the framework of our approach.

3.1. Time-Frequency Methods

Because the information carried by the reflected WiFi signal is too weak to obtain good classification performance on its own, we applied the Choi–Williams distribution [9] image and the Margenau–Hill spectrogram distribution [10] image together for the feature extraction in this article.
(1)
The Choi-Williams distribution has the following expression:
\[
\mathrm{CW}_x(t,f) \;=\; 2\iint \frac{\sqrt{\sigma}}{4\sqrt{\pi}\,|\tau|}\; e^{-\frac{v^{2}\sigma}{16\tau^{2}}}\; x\!\left(t+v+\tfrac{\tau}{2}\right) x^{*}\!\left(t+v-\tfrac{\tau}{2}\right) e^{-i2\pi f\tau}\,\mathrm{d}v\,\mathrm{d}\tau ,
\]
where x denotes the input signal, t and f denote the vectors of the time instants and normalized frequencies, respectively, v and τ denote the auxiliary time and lag variables of integration, and σ denotes the kernel width parameter, which controls the trade-off between cross-term suppression and time–frequency resolution (a discrete sketch of this expression is given below the list).
(2)
The Margenau-Hill-Spectrogram distribution has the following expression:
\[
\mathrm{MHS}_x(t,f) \;=\; \Re\!\left\{ K_{gh}^{-1}\, F_x(t,f;g)\, F_x^{*}(t,f;h) \right\} ,
\]
\[
K_{gh} \;=\; \int h(u)\, g^{*}(u)\,\mathrm{d}u ,
\]
where F_x(t,f;g) denotes the short-time Fourier transform of x computed with the analysis window g (and analogously with the window h), and ℜ{·} denotes the real part (a corresponding sketch is given at the end of this subsection).
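To make the Choi–Williams expression above concrete, the following is a minimal NumPy sketch of a naive discretization. It is only illustrative: the images in this article were produced with tfrcw.m from the tftb toolbox [26] (see below), which additionally applies time- and lag-smoothing windows, so the function name, the default parameters, and the boundary handling here are our assumptions. As is common in discrete implementations, the half-integer lags τ/2 are replaced by integer lags, which rescales the frequency axis, and the kernel is normalized to unit sum per lag, which absorbs the constant prefactor.

```python
import numpy as np

def choi_williams(x, sigma=1.0, max_lag=None):
    """Naive discretization of the Choi-Williams distribution above.

    x       : complex 1D array of IQ samples
    sigma   : kernel width parameter (assumed value)
    max_lag : largest lag considered (defaults to len(x) // 4)

    Returns a real array of shape (len(x), len(x)); column n holds the
    spectrum of the kernel-smoothed instantaneous autocorrelation at time n.
    """
    x = np.asarray(x, dtype=complex)
    n = len(x)
    max_lag = n // 4 if max_lag is None else max_lag
    acf = np.zeros((n, n), dtype=complex)            # rows: lag, columns: time
    for t in range(n):
        acf[0, t] = x[t] * np.conj(x[t])             # lag tau = 0
        for tau in range(1, max_lag):
            vmax = min(t - tau, n - 1 - t - tau)     # keep all indices in range
            if vmax < 0:
                continue
            v = np.arange(-vmax, vmax + 1)
            kernel = np.exp(-sigma * v**2 / (16.0 * tau**2))
            kernel /= kernel.sum()                   # normalized CW kernel in v
            r = np.sum(kernel * x[t + v + tau] * np.conj(x[t + v - tau]))
            acf[tau, t] = r                          # positive lag
            acf[-tau, t] = np.conj(r)                # negative lag (Hermitian)
    return np.real(np.fft.fft(acf, axis=0))          # DFT over the lag variable
```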
Figure 1 and Figure 2 show the time–frequency images of a WiFi signal based on these two time–frequency analyses. The WiFi signal was collected from the environment described in the next section. The time–frequency images were produced by applying the Choi–Williams transform and the Margenau–Hill spectrogram transform to the IQ data accumulated over time, with 256 frequency bins. Both transforms were executed via the functions tfrcw.m and tfrmhs.m of the tftb toolbox [26] with a Hamming window over the length of the signal; the Hamming window length was 57.
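Similarly, a minimal sketch of the Margenau–Hill spectrogram above is the real part of the normalized cross-spectrogram of two short-time Fourier transforms. This is not the tfrmhs.m implementation mentioned above; the manual hop-1 STFT, the equal-length window assumption, and the 256-bin FFT length are illustrative choices.

```python
import numpy as np

def stft(x, window, n_fft=256):
    """Hop-1 short-time Fourier transform: one length-n_fft spectrum per sample."""
    half = len(window) // 2
    xp = np.pad(np.asarray(x, dtype=complex), (half, half))
    frames = np.array([xp[t:t + len(window)] * window for t in range(len(x))])
    return np.fft.fft(frames, n=n_fft, axis=1).T       # shape (n_fft, len(x))

def margenau_hill_spectrogram(x, g, h, n_fft=256):
    """Real part of the normalized cross-spectrogram of the STFTs computed with
    the analysis windows g and h (assumed to have the same odd length)."""
    K_gh = np.sum(h * np.conj(g))                       # normalization constant
    return np.real(stft(x, g, n_fft) * np.conj(stft(x, h, n_fft)) / K_gh)

# Example call with two Hamming windows of length 57 (illustrative; with
# identical windows the result reduces to an ordinary spectrogram):
# image = margenau_hill_spectrogram(iq, np.hamming(57), np.hamming(57))
```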

3.2. Classification Features

In this article, we applied offset parameters and PCA parameters as the classification features.
The offset parameters include the mean [27], standard deviation [28], variance [29], skewness [30], kurtosis [31], and central moment [32]. The mean measures the central tendency of the signal probability distribution. The standard deviation measures the amount of variation or dispersion of the input signal. The variance, which measures how far the signal spreads out from its average value, is the expectation of the squared deviation. The kurtosis measures the “tailedness” of the signal probability distribution. The skewness measures the asymmetry of the probability distribution of the signal about its mean, and the central moment is the expectation of the deviation from the mean raised to a given order. For convenience, the formulas for the skewness, kurtosis, and central moment can be written as follows:
\[
\mathrm{skewness} \;=\; \frac{E\!\left[(x-\mu)^{3}\right]}{\sigma^{3}} ,
\]
\[
\mathrm{kurtosis} \;=\; \frac{E\!\left[(x-\mu)^{4}\right]}{\sigma^{4}} ,
\]
\[
\mathrm{moment} \;=\; E\!\left[(x-\mu)^{k}\right] ,
\]
where μ and σ denote the mean and the standard deviation of x , respectively, and k denotes the order of the central moments.
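As an illustration, the six offset parameters can be computed for a real-valued vector (e.g., the magnitude of the IQ samples or of one spectrum; the paper does not state how the complex IQ data are reduced to real values) with NumPy and SciPy. The function name and the default moment order k are our assumptions.

```python
import numpy as np
from scipy import stats

def offset_features(x, k=3):
    """Mean, standard deviation, variance, skewness, kurtosis and k-th central
    moment of a real-valued vector x, matching the formulas of Section 3.2."""
    return np.array([
        np.mean(x),
        np.std(x),
        np.var(x),
        stats.skew(x),                      # E[(x - mu)^3] / sigma^3
        stats.kurtosis(x, fisher=False),    # E[(x - mu)^4] / sigma^4 (Pearson)
        stats.moment(x, moment=k),          # E[(x - mu)^k]
    ])
```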
Orthogonal linear PCA, which transforms the input signal into a new coordinate system, is often used to process spectral information by extracting its main features and reducing the computational complexity [33,34]. The direction of greatest variance of the signal, under scalar projection, lies along the first coordinate, i.e., the first principal component; the second greatest variance lies along the second coordinate, and so on. Figure 3 shows the first 60 PCA values of all the subfigures in Figure 1 and Figure 2. In this article, the principal components were calculated by the singular-value decomposition of the whole image or of the image slices.
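The paper does not spell out exactly which quantities are kept as the "PCA values"; the following sketch assumes they are the leading singular values of the mean-centered time–frequency image (or image slice), which is one common way of summarizing an image with an SVD-based PCA.

```python
import numpy as np

def pca_values(image, n_components=10):
    """Leading singular values of the column-centered image, used here as the
    PCA features of a time-frequency image or image slice (our interpretation)."""
    centered = image - image.mean(axis=0, keepdims=True)
    # np.linalg.svd returns the singular values in descending order.
    s = np.linalg.svd(centered, compute_uv=False)
    return s[:n_components]
```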

3.3. Method Framework

Due to the data traffic jams in WiFi networks and the anisotropic wireless propagation of WiFi devices, the reflected WiFi signals recorded by the spectrum/signal analyzer are likely to lose important information, resulting in weak classification performance. To solve this problem, we propose an approach to human activity classification based on the Choi–Williams distribution and Margenau–Hill spectrogram distribution time–frequency images together with the offset features of the whole IQ signal, as shown in Figure 4.
In our method, the first group of offset parameters is calculated directly from the transformed IQ data. The second group is calculated from the spectrum of the time frame with the maximum energy in both the Choi–Williams distribution and Margenau–Hill spectrogram distribution time–frequency images, and the third group comes from the time frame spectrum with the second maximum energy of the images. Each unit image is a small, fixed-size subset of a spectrogram image. As shown in Figure 1 and Figure 2, the unit information is rich in several image slices rather than across the whole image. Hence, we performed PCA not only on the entire time–frequency image but also on the image slices with rich unit information. The central time frame of each image slice was selected in the same way as the time frames of the spectra used for the offset parameters' calculation.
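The following sketch illustrates how the pieces above could be assembled into one feature vector per sample, reusing the offset_features and pca_values sketches from Section 3.2. The grouping, the slice width in frames (200 here merely echoes the 200 ms slice length quoted in Section 5; the frame-to-millisecond mapping is not stated), and the energy-based ranking of slice centers are our assumptions about Figure 4, not the authors' exact implementation.

```python
import numpy as np

def frame_energies(tfr):
    """Energy of every time frame (column) of a time-frequency image."""
    return np.sum(np.abs(tfr) ** 2, axis=0)

def extract_features(iq, tfr_cw, tfr_mhs, n_slices=10, slice_width=200,
                     n_pca=10, k=3):
    """Assemble one feature vector from the IQ data and the two images."""
    feats = [offset_features(np.abs(iq), k)]           # group 1: whole IQ signal
    for tfr in (tfr_cw, tfr_mhs):
        energy = frame_energies(tfr)
        top = np.argsort(energy)[::-1][:n_slices]      # frames ranked by energy
        for t in top[:2]:                              # groups 2 and 3: spectra at
            feats.append(offset_features(np.abs(tfr[:, t]), k))   # the top-2 frames
        feats.append(pca_values(tfr, n_pca))           # PCA of the whole image
        for t in top:                                  # PCA of the image slices
            lo = min(max(0, t - slice_width // 2),     # clamp slice inside image
                     max(0, tfr.shape[1] - slice_width))
            feats.append(pca_values(tfr[:, lo:lo + slice_width], n_pca))
    return np.concatenate(feats)
```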

4. Experimental Environment

In the experiment, the transmitter was an ASUS ROG GT-AX11000 tri-band WiFi gaming router, and the receiver was a Tektronix RSA 306B spectrum/signal analyzer. The data recorder was a ThinkPad X1 laptop running the Tektronix SignalVu-PC software. The measurement data were collected in the corridor on the 10th floor of the Science and Engineering Building A at Inner Mongolia Normal University, as shown in Figure 5.
The distance between the router and receiver antennas was 10 m, with the target at the center. The heights of the router and receiver antennas were approximately 1 m, matching the centroid height of the subject. The router operated at 2.412 GHz with an instantaneous bandwidth of 20 MHz, which lies within the receiver antenna's range (1.5–3.5 GHz). The experimental data were collected from four categories of signals: idle and three different activities (the marching-in-place exercise, rope skipping, and arms rotating). There were 3000 samples in every signal category, 12,000 samples in total. The training and testing samples were selected randomly with a holdout partition.

5. Results and Discussion

In this section, we apply six statistics (sensitivity, precision, F1-score, specificity, accuracy, and classification rate) to measure the classification performance. The first five statistics measure the result for each activity, and the classification rate summarizes the overall performance. Assume that P and N denote the number of positive and negative samples, respectively, and that TP and FP denote the number of true positives and false positives, respectively. A true positive means an activity was labeled correctly, while a false positive means a false alarm, i.e., another activity was labeled as the activity under test. Furthermore, TN and FN denote the number of true negatives and false negatives, respectively. A true negative is also known as a correct rejection, while a false negative is a missed detection. These measures can be expressed as:
\[
\mathrm{Sensitivity} = \frac{TP}{TP+FN} ,
\]
\[
\mathrm{Precision} = \frac{TP}{P} ,
\]
\[
F1 = \frac{2\,TP}{2\,TP+FP+FN} ,
\]
\[
\mathrm{Specificity} = \frac{TN}{TN+FP} ,
\]
\[
\mathrm{Accuracy} = \frac{TP+TN}{TP+FP+FN+TN} ,
\]
\[
\mathrm{Classification\ Rate} = E\!\left(\frac{TP}{TP+FN}\right) ,
\]
which yield
\[
P = TP + FP , \qquad N = TN + FN .
\]
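For reference, the following NumPy sketch computes these statistics from a multiclass confusion matrix laid out with rows as true classes and columns as predicted classes (the convention assumed for Table 1); the one-vs-rest reduction is a standard choice and the function name is ours.

```python
import numpy as np

def per_class_metrics(confusion):
    """Per-class sensitivity, precision, F1, specificity and accuracy, plus the
    overall classification rate, from a square confusion matrix."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    tp = np.diag(confusion)
    fn = confusion.sum(axis=1) - tp        # missed detections per true class
    fp = confusion.sum(axis=0) - tp        # false alarms per predicted class
    tn = total - tp - fn - fp              # correct rejections
    sensitivity = tp / (tp + fn)
    precision   = tp / (tp + fp)
    f1          = 2 * tp / (2 * tp + fp + fn)
    specificity = tn / (tn + fp)
    accuracy    = (tp + tn) / total
    classification_rate = sensitivity.mean()   # E(TP / (TP + FN))
    return sensitivity, precision, f1, specificity, accuracy, classification_rate
```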
First, we assessed the performance of recognizing the different activities (the marching-in-place exercise, rope skipping, and arms rotating) and the idle condition using the measured WiFi signals. Every category included 3000 samples; thus, the total number of samples was 12,000, with the scatter plot given in Figure 6. As shown in Figure 4, the extracted features in our method came from the time–frequency images, whose generation function and parameters were the same as those described above. Ten groups of image slices with ten PCA values each were used for the calculation. The time length of every image slice was 200 ms. Six kinds of machine-learning-based classifiers, including two K-nearest-neighbor classifiers (K = 3 or 5) [21,35], bagging [36], boosting [36,37], random forest (RF) [38,39], and a support vector machine (SVM) [21,22,40], were applied for the classification. The ensemble type of the boosting classifier was AdaBoostM2. In addition, the kernel of the SVM was a second-order polynomial function with the automatic kernel scale, and the box constraint was set to one with standardization enabled. The holdout cross-validation partition (p = 0.3) was used, selecting 70% (8400 samples) of the data for learning the features and the remaining 30% (3600 samples) for testing.
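The paper does not name the software used for the classifiers. As an illustration, the listed configurations can be approximated with scikit-learn as below; AdaBoostClassifier and gamma="scale" are stand-ins for AdaBoostM2 and the automatic kernel scale, and all unspecified hyperparameters are left at library defaults.

```python
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import (BaggingClassifier, AdaBoostClassifier,
                              RandomForestClassifier)
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: (12000, n_features) feature matrix built as in Figure 4; y: activity labels.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

classifiers = {
    "NN3":      KNeighborsClassifier(n_neighbors=3),
    "NN5":      KNeighborsClassifier(n_neighbors=5),
    "bagging":  BaggingClassifier(),
    "boosting": AdaBoostClassifier(),            # stand-in for AdaBoostM2
    "RF":       RandomForestClassifier(),
    # degree-2 polynomial kernel, C = 1 (box constraint), standardized inputs
    "SVM":      make_pipeline(StandardScaler(),
                              SVC(kernel="poly", degree=2, C=1.0, gamma="scale")),
}

for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    # single-run test accuracy; the paper averages 100 such holdout repetitions
    print(name, clf.score(X_test, y_test))
```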
In Figure 7, the reported accuracies are the average over 100 repetitions of the holdout partition. The average accuracy of NN3, NN5, bagging, boosting, SVM, and RF was 68.95%, 69.43%, 89.82%, 80.54%, 94.12%, and 95.89%, respectively.
The sensitivity, precision, F1-score, specificity, and accuracy of the classifications are shown in Figure 8. Compared with the other classifiers, the RF classifier was likely to obtain the best performance in this scenario with our method. The confusion matrix of the classification via the RF classifier is given in Table 1, with the corresponding sensitivity, precision, F1-score, specificity, and accuracy in Table 2.
To evaluate the effect of the number of image slices on the classification performance, we repeated the classification with different numbers of image slices. The feature extraction method and the classifier parameter settings were the same as in the previous evaluation. As Figure 9 shows, the classification rate improved as the slice number increased. When the image slice number was equal to zero, i.e., all the features came from the whole signal or image without the high-quality features from the image slices with rich unit information, the performance decreased. In this figure, the classification rates were 25.86% (NN3), 26.29% (NN5), 46.26% (boosting), 74.72% (bagging), 84.21% (SVM), and 85.77% (RF) without the image slice features. When the image slice number was equal to 10, the classification rates were 37.90% (NN3), 38.86% (NN5), 61.08% (boosting), 79.64% (bagging), 88.25% (SVM), and 91.78% (RF), respectively. The classification rate of the boosting classifier was thus boosted by 14.82%, and that of the RF classifier improved by 6.01%. Because the classification rate of the RF classifier was higher than that of the other classifiers, the details of its classification performance are analyzed in Figure 10. In this figure, the average precision, average F1-score, average specificity, and average accuracy of the four categories of WiFi signals in our method reached 91.74%, 91.73%, 97.26%, and 95.89%, respectively.
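The slice-number sweep can be reproduced with a loop such as the one below, which reuses the per_class_metrics sketch above; build_feature_matrix, samples, and y are hypothetical placeholders standing in for the feature extraction of Figure 4 applied with a given slice count and for the recorded dataset.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rates = []
for n_slices in range(11):                        # 0 ... 10 image slices
    # build_feature_matrix is hypothetical: it applies the extraction of
    # Figure 4 with n_slices slices to every recorded sample.
    X = build_feature_matrix(samples, n_slices=n_slices)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3)
    clf = RandomForestClassifier().fit(X_tr, y_tr)
    cm = confusion_matrix(y_te, clf.predict(X_te))
    rates.append(per_class_metrics(cm)[-1])       # classification rate
```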
To evaluate the effect of the number of PCA values per image or image slice on the classification performance, we repeated the classification with different numbers of PCA values. The image slice number was set to 10, and the other parameter settings were the same as in the previous evaluation. The results are shown in Figure 11. In this figure, increasing the number of PCA values degraded the classification performance of the boosting classifier while improving that of the other classifiers. Moreover, the classification performance of the RF classifier was higher than that of the other classifiers. With the RF classifier, the classification rate improved by 2.77% between no PCA features (89.01%) and 10 PCA features per image or image slice.

6. Conclusions

In this article, a novel approach that applies features from the IQ data and from the time–frequency images to classify human activities automatically and accurately was proposed. The two images come from the Choi–Williams distribution and the Margenau–Hill spectrogram distribution time–frequency transforms. There are two categories of features in the presented strategy, i.e., the offset parameters and the PCA values. The offset parameters, comprising the mean, standard deviation, variance, skewness, kurtosis, and central moments, were calculated from the IQ data and from several spectra with maximum energy values in the time domain. The PCA values were calculated from the whole images and from several image slices with rich unit information.
The proposed algorithm was validated on the experimental data. Our method was shown to be more robust to varying numbers of image slices or PCA values over the measured dataset, which included three activities (the marching-in-place exercise, rope skipping, and arms rotating) and the idle signal. Experimentally, our method with the RF classifier surpassed the variants with alternative classifiers on classification performance and finally obtained a 91.78% average sensitivity, 91.74% average precision, 91.73% average F1-score, 97.26% average specificity, and 95.89% average accuracy. Moreover, the classification results showed that as the slice number and the PCA number increased, the classification rate of our method with the RF classifier improved by 6.01% and 2.77%, respectively.
In future work, we can consider denoising the WiFi signal and extracting physical features. In a complex environment, various WiFi signals interfere with each other when using similar channels, and thus, the received signals are accompanied by noise. Suppressing the noise to obtain a denoised signal is likely to affect the classification performance positively. Moreover, the features of our method are offset parameters and PCA values, which are not directly tied to the physical characteristics of the activities. If the features need to be mapped to specific activities, the problem of physical feature extraction must be considered.

Author Contributions

Conceptualization, Y.L.; data curation, Y.L.; formal analysis, Y.L.; funding acquisition, Y.L. and F.Y.; investigation, Y.L.; methodology, Y.L.; resources, Y.L. and F.Y.; software, Y.L.; validation, Y.L.; writing—original draft, Y.L.; writing—review and editing, Y.L. and F.Y. Both authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by CRRC Corporation Limited–Research on Modular Technology of Rail Transit Intelligent Detection System (CIJS20-JS042-R) and the central government guiding local scientific and technological development funds–Realization of Multi-Input Multi-Output Forward Scattering Physiological Information Detection Radar (2021SZVUP023).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CSAIL	Computer Science and Artificial Intelligence Laboratory
CSI	Channel state information
PCA	Principal component analysis
RF	Random forest
SVM	Support vector machine

References

  1. Narayanan, R.M. Earthquake survivor detection using life signals from radar micro-Doppler. In Proceedings of the 1st International Conference on Wireless Technologies for Humanitarian Relief, Kerala, India, 18–21 December 2011; pp. 259–264. [Google Scholar]
  2. Yang, Y.; Zhang, C.; Fathy, A.E. Development and implementation of ultra-wide band see-through-wall imaging system based on sampling oscilloscope. IEEE Antennas Wirel. Propag. Lett. 2008, 7, 465–468. [Google Scholar] [CrossRef]
  3. O’Hara, B.; Petrick, A. IEEE 802.11g Higher Data Rates in 2.4 GHz Frequency Band; Wiley-IEEE Standards Association: Hoboken, NJ, USA, 2005. [Google Scholar]
  4. Adib, F.; Katabi, D. See through walls with WiFi! ACM Sigcomm Comput. Commun. Rev. 2013, 43, 75–86. [Google Scholar] [CrossRef]
  5. Lu, Y.; Lv, S.; Wang, X.; Zhou, X. A survey on WiFi based human behavior analysis technology. Chin. J. Comput. 2019, 42, 3–23. [Google Scholar]
  6. Vishwakarma, S.; Ram, S.S. Detection of multiple movers based on single channel source separation of their micro-Dopplers. IEEE Trans. Aerosp. Electron. Syst. 2017, 99, 159–169. [Google Scholar] [CrossRef]
  7. Zhou, Z.; Yang, Z.; Wu, C.; Shangguan, L.; Liu, Y. Towards omnidirectional passive human detection. In Proceedings of the 2013 Proceedings IEEE INFOCOM, Turin, Italy, 14–19 April 2013; pp. 3057–3065. [Google Scholar]
  8. Shi, F.; Chetty, K.; Julier, S. Passive activity classification using just WiFi probe response signals. In Proceedings of the IEEE Radar Conference (RadarConf), Boston, MA, USA, 22–26 April 2019. [Google Scholar]
  9. Choi, H.; Williams, W. Improved time–frequency representation of multicomponent signals using exponential kernels. IEEE Trans. Acoust. Speech Signal Process. 1989, 37, 862–871. [Google Scholar] [CrossRef]
  10. Hippenstiel, R.; Oliviera, P.D. Time-varying spectral estimation using the instantaneous power spectrum (IPS). IEEE Trans. Acoust. Speech Signal Process. 1990, 38, 1752–1759. [Google Scholar] [CrossRef]
  11. Adib, F.; Hsu, C.; Mao, H.; Katabi, D.; Durand, F. Capturing the human figure through a wall. ACM Trans. Graph. 2015, 34, 1–13. [Google Scholar] [CrossRef]
  12. Adib, F.; Kabelac, Z.; Katabi, D. Multi-person localization via RF body reflections. In Proceedings of the 12th USENIX Symposium on Networked Systems Design and Implementation (NSDI ’15), Oakland, CA, USA, 4–6 May 2015. [Google Scholar]
  13. Zeng, Y.; Pathak, P.H.; Mohapatra, P. Analyzing shopper’s behavior through WiFi signals. In Proceedings of the 2nd Workshop on Workshop on Physical Analytics, Florence, Italy, 22 May 2015. [Google Scholar]
  14. Zeng, Y.; Pathak, P.H.; Mohapatra, P. WiWho: WiFi-based person identification in smart spaces. In Proceedings of the ACM/IEEE International Conference on Information Processing in Sensor Networks, Vienna, Austria, 11–14 April 2016; pp. 1–12. [Google Scholar]
  15. Wang, W.; Liu, A.X.; Shahzad, M. Gait recognition using WiFi signals. In Proceedings of the ACM Ubiquitous Computing Conference, Heidelberg, Germany, 12–16 September 2016; pp. 363–373. [Google Scholar]
  16. Wang, W.; Liu, A.X.; Shahzad, M.; Ling, K.; Lu, S. Device-free human activity recognition using commercial WiFi devices. IEEE J. Sel. Areas Commun. 2017, 5, 1118–1131. [Google Scholar] [CrossRef]
  17. Khan, M.I.; Jan, M.A.; Muhammad, Y.; Do, D.T.; Rehman, A.U.; Mavromoustakis, C.X.; Pallis, E. Tracking vital signs of a patient using channel state information and machine learning for a smart healthcare system. In Neural Computing and Applications; Springer: Berlin/Heidelberg, Germany, 2021. [Google Scholar]
  18. Yuan, H.; Yang, X.; He, A.; Li, Z.; Tian, Z. Features extraction and analysis for device-free human activity recognition based on channel statement information in b5G wireless communications. EURASIP J. Wirel. Commun. Netw. 2020, 2020, 1–10. [Google Scholar] [CrossRef]
  19. Sharma, L.; Chao, C.; Wu, S.L.; Li, M.C. High accuracy WiFi-based human activity classification system with time–frequency diagram CNN method for different places. Sensors 2021, 21, 3797. [Google Scholar] [CrossRef] [PubMed]
  20. Akhtar, Z.U.A.; Wang, H. WiFi-based driver’s activity recognition using multi-layer classification. Neurocomputing 2020, 405, 12–25. [Google Scholar] [CrossRef]
  21. Chelli, A.; Muaaz, M.; Abdelgawwad, A.A.; Pätzold, M. Human activity recognition using Wi-Fi and machine learning. In Innovative and Intelligent Technology-Based Services for Smart Environments–Smart Sensing and Artificial Intelligence; CRC Press: Boca Raton, FL, USA, 2021; pp. 75–80. [Google Scholar]
  22. Muaaz, M.; Chelli, A.; Abdelgawwad, A.A.; Mallofré, A.C.; Pätzold, M. WiWeHAR: Multimodal human activity recognition using Wi-Fi and wearable sensing modalities. IEEE Access 2020, 8, 164453–164470. [Google Scholar] [CrossRef]
  23. Muaaz, M.; Chelli, A.; Gerdes, M.; Pätzold, M. Wi-Sense: A passive human activity recognition system using Wi-Fi and convolutional neural network and its integration in health information systems. In Annals of Telecommunications; Springer: Berlin/Heidelberg, Germany, 2021; pp. 1–13. [Google Scholar]
  24. Chetty, K.; Graeme, E.S.; Woodbridge, K. Through-the-wall sensing of personnel using passive bistatic WiFi radar at standoff distances. IEEE Trans. Geosci. Remote Sens. 2012, 50, 1218–1226. [Google Scholar] [CrossRef]
  25. Lin, Y. The Short-Time Fourier Transform based WiFi Human Activity Classification Algorithm. In Proceedings of the International Conference on Computational Intelligence and Security (CIS), Chengdu, China, 19–22 November 2021. [Google Scholar]
  26. Tftb-Info. Available online: http://tftb.nongnu.org/ (accessed on 16 September 2021).
  27. Dobrinić, D.; Gašparović, M.; Medak, D. Sentinel-1 and 2 time-series for vegetation mapping using random forest classification: A case study of Northern Croatia. Remote Sens. 2021, 13, 2321. [Google Scholar] [CrossRef]
  28. Ghaffar, M.; Khan, U.S.; Iqbal, J.; Rashid, N.; Izhar, U. Improving classification performance of four class fnirs-bci using mel frequency cepstral coefficients. Infrared Phys. Technol. 2020, 112, 103589. [Google Scholar] [CrossRef]
  29. Asfour, M.; Menon, C.; Jiang, X. A machine learning processing pipeline for reliable hand gesture classification of FMG signals with stochastic variance. Sensors 2021, 21, 1504. [Google Scholar] [CrossRef] [PubMed]
  30. Lin, Y.; Kernec, J.L.; Yang, S.; Fioranelli, F.; Romain, O.; Zhao, Z. Human activity classification with radar: Optimization and noise robustness with iterative convolutional neural networks followed with random forests. IEEE Sens. J. 2018, 18, 9669–9681. [Google Scholar] [CrossRef] [Green Version]
  31. Permanasasi, Y.; Falah, A.N.; Marethi, I.; Ruchjana, B.N. PCA and projection pursuits on high dimensional data reduction. J. Phys. Conf. Ser. 2021, 1722, 012087. [Google Scholar] [CrossRef]
  32. Clarenz, U.; Rumpf, M.; Telea, A. Robust feature detection and local classification for surfaces based on moment analysis. IEEE Trans. Vis. Comput. Graph. 2004, 10, 516–524. [Google Scholar] [CrossRef] [Green Version]
  33. Qing, Y.; Liu, W. Hyperspectral image classification based on multi-scale residual network with attention mechanism. Remote Sens. 2021, 13, 335. [Google Scholar] [CrossRef]
  34. Gao, L.; Hong, D.; Yao, J.; Zhang, B.; Gamba, P.; Chanussot, J. Spectral superresolution of multispectral imagery with joint sparse and low-rank learning. IEEE Trans. Geosci. Remote Sens. 2020, 59, 2269–2280. [Google Scholar] [CrossRef]
  35. Bettini, C.; Civitarese, G.; Presotto, R. Personalized semi-supervised federated learning for human activity recognition. arXiv 2021, arXiv:2104.08094. [Google Scholar]
  36. Subasi, A.; Fllatah, A.; Alzobidi, K.; Brahimi, T.; Sarirete, A. Smartphone-based human activity recognition using bagging and boosting. Procedia Comput. Sci. 2019, 163, 54–61. [Google Scholar] [CrossRef]
  37. Tarafdara, P.; Bose, I. Recognition of human activities for wellness management using a smartphone and a smartwatch: A boosting approach. Decis. Support Syst. 2021, 140, 113426. [Google Scholar] [CrossRef]
  38. Gajowniczek, K.; Grzegorczyk, I.; Ząbkowski, T.; Bajaj, C. Weighted random forests to improve arrhythmia classification. Electronics 2020, 9, 99. [Google Scholar] [CrossRef] [Green Version]
  39. Mostafa, M.; Chamaani, S. Unobtrusive Human Activity Classification Based on Combined Time-Range and Time-Frequency Domain Signatures Using Ultrawideband Radar. IET Signal Process. 2021, 15, 543–561. [Google Scholar] [CrossRef]
  40. Gonçalves, P.J.; Lourenço, B.; Santos, S.; Barlogis, R.; Misson, A. Computer vision intelligent approaches to extract human pose and its activity from image sequences. Electronics 2020, 9, 159. [Google Scholar] [CrossRef] [Green Version]
Figure 1. This figure displays the Choi-Williams distribution time-frequency images of the measured WiFi signal regarding (a) the marching-in-place exercise, (b) rope skipping, (c) arms rotating, and (d) idle.
Figure 2. This figure displays the Margenau-Hill-Spectrogram distribution time-frequency images of the measured WiFi signal regarding (a) the marching-in-place exercise, (b) rope skipping, (c) arms rotating, and (d) idle.
Figure 3. This figure displays the first 60 PCA values of the subfigures in Figure 1 and Figure 2.
Figure 4. This figure shows the structure of our method.
Figure 5. This figure shows the experimental environment and the equipment in the data collection.
Figure 6. This figure shows the scatter plot of the four categories of samples.
Figure 7. This figure shows the average accuracy of the four categories of WiFi signals.
Figure 8. These figures display the sensitivity, precision, F1-score, specificity, and accuracy of (a) the marching-in-place exercise, (b) rope skipping, (c) arms rotating, and (d) idle WiFi signals.
Figure 9. This figure demonstrates the classification rate of the four categories of WiFi signals based on the various image slice numbers via different classifiers.
Figure 10. These figures illustrate the comparison of the classification (a) sensitivity, (b) precision, (c) F1-score, (d) specificity, and (e) accuracy of the four categories of WiFi signals by the RF classifier with or without the slice features.
Figure 11. This figure demonstrates the function of the PCA number per image or image slice and the classification rate of the four categories of WiFi signals under the condition of various classifiers.
Table 1. The confusion matrix of the classification via the RF.
(Rows: true class; columns: predicted class.)
                       Marching-in-Place   Rope Skipping   Arms Rotating   Idle
Marching-in-place      834                 55              2               8
Rope skipping          93                  738             58              11
Arms rotating          2                   40              857             2
Idle                   6                   18              1               875
Table 2. The sensitivity, precision, F1-score, specificity and accuracy of the classification via the RF.
              Marching-in-Place   Rope Skipping   Arms Rotating   Idle
Sensitivity   92.65%              82.00%          95.22%          97.25%
Precision     89.24%              86.71%          93.40%          97.61%
F1-score      90.91%              84.28%          94.30%          97.42%
Specificity   96.27%              95.81%          97.76%          99.20%
Accuracy      95.37%              92.35%          97.12%          98.71%

