Article

Exploring LoRa and Deep Learning-Based Wireless Activity Recognition

Yang Xiao, Yunfan Chen, Mingxing Nie, Tao Zhu, Zhenyu Liu and Chao Liu

1 School of Computer Science, University of South China, Hengyang 421200, China
2 Wanxiang Technology, Hengyang 421001, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(3), 629; https://doi.org/10.3390/electronics12030629
Submission received: 2 January 2023 / Revised: 24 January 2023 / Accepted: 26 January 2023 / Published: 27 January 2023
(This article belongs to the Special Issue Artificial Intelligence for Wireless Networks)

Abstract

Today's wireless activity recognition is still far from practical, mainly because current approaches based on Wi-Fi, RFID (Radio Frequency Identification), and similar signals suffer from a limited sensing range and a weak through-wall effect. Although some recent research has demonstrated that LoRa can be used for long-range and wide-range wireless sensing, no pertinent studies have been conducted on LoRa-based wireless activity recognition. This paper proposes applying long-range LoRa wireless communication technology to contactless wide-range wireless activity recognition. We combine LoRa and deep learning for contactless indoor activity recognition for the first time and propose a more lightweight, improved TPN (Transformation Prediction Network) backbone network. Using only two features of the LoRa signal, amplitude and phase, as the input of the model, the experimental results demonstrate that the effect is better than using the original signal directly. The recognition accuracy reaches 97%, which demonstrates that LoRa wireless communication technology can be used for wide-range activity recognition and that the recognition accuracy can meet the needs of engineering applications.

1. Introduction

With the large-scale application of IoT technology, numerous wireless signals have been explored for contactless sensing in recent years, and wireless recognition has attracted much attention. Wireless activity recognition does not depend on specialized hardware: existing radio-frequency signals can be reused to achieve contactless user activity recognition, which facilitates large-scale deployment and effectively avoids personal privacy leakage.
There has been extensive research on wireless recognition. Some researchers have studied sleep sound recognition and breath detection based on RFID (Radio Frequency Identification) [1,2,3]. Wi-Fi is a widely used wireless LAN communication technology with an extensive user base; most current research based on Wi-Fi wireless signals uses CSI (Channel State Information) data [4], with leading applications in breath detection, multi-person tracking, and finger trajectory tracking [5,6,7,8,9,10,11,12]. FMCW (frequency-modulated continuous-wave) radar has the advantages of easy implementation, simple structure, and low cost; it has been widely used in both civil and military fields, and some researchers have proposed breath detection and motion tracking based on FMCW radar [13,14]. However, RFID requires tags to be deployed in the environment and is therefore not well suited to activity recognition in the home. Wireless sensing based on Wi-Fi signals suffers from poor wall penetration and occupies communication resources. At the same time, the high hardware cost of FMCW radar makes it unsuitable for activity recognition in the home environment. More importantly, all of the above wireless sensing technologies have a limited sensing distance (as shown in Table 1).
LoRa is a long-range communication technology based on CSS (Chirp Spread Spectrum) modulation, which extends the range of traditional wireless RF communication by 3–5 times at the same power consumption (as shown in Table 1). Furthermore, existing wireless sensing solutions require at least one of the transceivers to be close to the target, which may not be feasible in certain scenarios.
In this paper, we propose to solve the problems of short sensing distance and poor wall penetration in current wireless activity recognition by exploiting LoRa's long sensing distance and good wall-penetration ability, thereby increasing the ubiquity of wireless activity recognition. We analyzed the LoRa signal reflected from the target, as shown in Figure 1 for the amplitude of both walking and picking activities: the closer the target is to the signal transceiver, the larger the amplitude of the received signal, and the faster the activity, the higher the frequency of the amplitude variation. These distinct waveforms allow us to extract activity characteristics from the signal at the receiving end. This work builds a LoRa-based wireless activity recognition test environment and establishes data samples for six activities: standing, walking, jogging, squatting, picking, and empty (no person present). Following previous work [18], two receiver antennas are set up in this experiment to eliminate the effect of the baseband signal by taking the ratio of the data received by the two antennas.
Neural networks are widely used in activity recognition research [19,20,21]; however, most of the networks used are deep, complex, and computationally intensive, and their large model sizes and complex structures make them challenging to deploy on devices with limited hardware resources. Furthermore, most temporal classification approaches use LSTM (Long Short-Term Memory) networks [22], which parallelize poorly and are therefore time-consuming. For wireless activity recognition, there is thus a need for highly accurate models with a lightweight architecture and reasonable computational cost. Moreover, LoRa nodes are edge devices with the associated resource constraints, so LoRa activity recognition requires an especially lightweight model. In this paper, we propose to use the TPN [23] backbone network for activity recognition. The TPN backbone network has a simple structure and good scalability, which makes it well suited to activity recognition. We use only two features, amplitude and phase, as inputs to the model. Given the long sequence length and high spatio-temporal correlation of the LoRa activity data, we introduce the ECA-Net (Efficient Channel Attention) [24] attention module to improve the TPN backbone network. The attention module avoids degradation and allows cross-channel interaction, while its lightweight structure preserves the original efficiency of the TPN backbone network and improves the overall network effectiveness.
To verify the effectiveness of the proposed improved TPN backbone network, we train and validate it against deep learning models such as GRU (Gated Recurrent Unit) and LSTM, as well as several standard machine learning models (KNN, SVM, and Decision Tree). The experimental results demonstrate that the test accuracy of the proposed improved TPN backbone network reaches 97%. Furthermore, the comparison experiments verify that the proposed method outperforms these deep learning and traditional machine learning models. The results indicate that LoRa wireless communication technology can be used for wide-range activity recognition and that the recognition accuracy can meet the requirements of engineering applications.
We summarize the main contributions of this paper as follows:
  • This is the first time that LoRa and deep learning have been used to achieve contactless indoor activity recognition, and a more lightweight, improved version of the TPN backbone network is proposed;
  • We propose to use two features of the LoRa signal, amplitude and phase, as the inputs of the model, and experimentally find that this works better than using the original signal directly, with the recognition accuracy reaching 97%.

2. Related Work

The related work in wireless activity recognition can be divided into two main categories: Radar-based activity recognition, and RF signal-based activity recognition.
Radar-based activity recognition: Ultra-wideband (UWB) pulse radar works by first sending a train of pulses toward the target, after which the received signal is observed in the frequency domain. Ref. [25] investigates the use of UWB Doppler radar to identify daily-life activities in smart homes. However, the enormous bandwidth of UWB raises hardware requirements and system complexity. Frequency-modulated continuous-wave (FMCW) radar radiates continuous transmission power while linearly sweeping the operating frequency of the transmitted signal across a wide bandwidth during the measurement. By comparing the frequency of the signal bounced off the human body with that of the transmitted signal, FMCW radar can directly measure the distance of the reflecting body from the device. Ref. [26] detected fall events using micro-Doppler signatures with FMCW radar. However, FMCW hardware costs are typically much higher, making these solutions less practical for everyday home use.
RF signal-based activity recognition: Wi-Fi has a large user base, easy deployment, and cheap hardware. Ref. [6] develops a device-free fitness assistant system for home/office scenarios by utilizing the existing Wi-Fi infrastructure without active user participation; the system can differentiate individuals to enable personalized fitness assistance with comprehensive workout analysis and competent workout assessment. Ref. [7] develops a CSI-ratio model that establishes the relationship between human movement and CSI ratio changes, laying the foundation for fine-grained sensing. Ref. [8] proposes a precise sensing boundary determination method called WiBorder, which takes advantage of the walls common in our daily lives, and Ref. [9] achieves multi-person breath perception. Most current Wi-Fi studies mainly use CSI data [5,6,7,8,9] as the input to the model. However, Wi-Fi signal penetration is poor, and long-distance wireless activity recognition is easily affected by obstacle occlusion. RFID technology is characterized by openness, easy recognition, and good scalability. Ref. [1] presents a non-intrusive automatic user identification and authentication system based on RFID, using human motions captured from daily activities. Ref. [2] proposes a two-layer RFID-based sensing concept that uses respiration sensing as the basic first-layer information and builds on it to obtain richer second-layer information, including snoring, coughing, and somniloquy. LungTrack [3] uses multiple RFID tags to solve the possible 'dead zone' problem of Wi-Fi-based breath sensing and can monitor two people simultaneously. However, deploying electronic tags is inconvenient, and the sensing range is limited.
Effective sensing range, deployment difficulty, and through-wall signal penetration are essential criteria for evaluating wireless sensing systems. LoRa offers an outstanding sensing range and wall penetration, easy deployment, and inexpensive hardware, and it is widely used in long-range IoT communication. For example, Michele Luvisotto et al. applied LoRa to indoor industrial monitoring [27], setting up multiple indoor nodes to form a LoRa-based communication network for monitoring indoor scenes. Ref. [28] successfully demonstrates the advantages of LoRa for pipeline monitoring applications. Ref. [29] presents an innovative, power-efficient, and highly scalable IoT agricultural system based on the LoRaWAN network for long-range and low-power data transmission from sensor nodes to cloud services. Ref. [30] deals with LoRa-based application performance in outdoor scenarios, implementing a module to study the performance of a LoRa-based IoT network in a typical urban setting; the simulation results demonstrate that a LoRa network scales well, achieving packet success rates above 95% with on the order of 10^4 end devices.
Moreover, Ref. [31] studies the usability of the Long Range (LoRa) Wide Area Network (LoRaWAN) protocol in the context of vehicular networks; the results demonstrate the robustness of LoRaWAN for transmissions taking place in motion, with limited signal degradation even at the highest speeds. LoRa thus enables low-power, long-range, wide-area sensing without the need for tags. Some experimental work has already demonstrated that LoRa can be applied to wireless sensing [18,32]. In summary, we propose wireless activity recognition based on LoRa signals.

3. Related Theories

LoRa is a low-power wide-area wireless communication standard developed by Semtech. It adopts spread spectrum technology and relies on spreading to obtain a processing gain. The LoRa signal reaches the receiver via multipath propagation from the transmitter. An active target affects the signal during propagation, causing corresponding changes in amplitude, phase, and frequency, which provides the physical basis for LoRa wireless activity recognition. In the LoRa wireless activity recognition experimental scenario, the variation of the LoRa signal amplitude for both walking and picking activities is shown in Figure 1: the signal of the walking activity shows a larger amplitude variation than that of the picking activity.
LoRa uses chirp spread spectrum (CSS) modulation, also called linear frequency modulation (LFM). Within one cycle, the frequency of the signal increases linearly with time and can be expressed as follows:

$$f = f_c + kt \tag{1}$$

where $f_c$ is the carrier frequency and $k = B/T$ is the chirp rate, with $B$ the sweep bandwidth and $T$ the duration of one sweep cycle. The complex expression of the LFM signal is as follows:

$$S(t) = \exp\!\left(j 2\pi f_c t + j\pi k t^2\right) \tag{2}$$
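To make Equations (1) and (2) concrete, the following minimal Python sketch generates one LFM sweep; the bandwidth, spreading factor, and simulation sampling rate are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

# Minimal sketch: generate one LFM chirp S(t) = exp(j*2*pi*f_c*t + j*pi*k*t^2).
B = 125e3          # sweep bandwidth (Hz), LoRa-like value
T = (2 ** 7) / B   # sweep period for SF = 7: 2^SF / B
fs = 1e6           # simulation sampling rate (Hz), chosen for illustration only
f_c = 0.0          # complex baseband representation, so the carrier sits at 0 Hz

k = B / T                              # chirp rate k = B / T
t = np.arange(0, T, 1.0 / fs)          # time axis for one sweep cycle
chirp = np.exp(1j * (2 * np.pi * f_c * t + np.pi * k * t ** 2))

# Instantaneous frequency f = f_c + k*t rises linearly from f_c to f_c + B.
inst_freq = f_c + k * t
```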
The wireless signal travels from the transmitter to the receiver through $N$ different paths (the direct path plus multiple paths reflected from surrounding objects). Assuming that the propagation delay of the $n$th path is $\tau_n(t)$, the signal received by a single antenna can be expressed as follows [15]:

$$R_x(t) = e^{j(\pi k t^2 + \theta_c + \theta_s)} \sum_{n=1}^{N} a_n(t)\, e^{j 2\pi f_c \tau_n(t)} \tag{3}$$
where $f_c$ is the center frequency, $a_n(t)$ is the attenuation coefficient of the $n$th path, $\tau_n(t)$ is the time delay of the $n$th path, $\theta_c$ is the carrier frequency offset of the signal, $\theta_s$ is the sampling frequency offset of the signal, and $e^{j 2\pi f_c \tau_n(t)}$ is the phase change of the $n$th path. According to the above analysis, the term $e^{j(\pi k t^2 + \theta_c + \theta_s)}$ in Equation (3) represents the signal variation caused by the baseband signal, and the term $\sum_{n=1}^{N} a_n(t)\, e^{j 2\pi f_c \tau_n(t)}$ represents the signal variation caused by multipath. The multipath signal can therefore be further divided into a static and a dynamic component, with $H_s = \sum_{i \in P_s} a_i\, e^{j 2\pi f_c \tau_i(t)}$ denoting the static component; the dynamic component consists of reflections caused by moving objects and is denoted by $H_d = a(t)\, e^{j 2\pi f_c \tau(t)}$.
The dynamic signal component allows us to quantitatively analyze the effect of human activity on the wireless RF signal. In this paper, following the method proposed in [32], the baseband signal is eliminated by taking the ratio of the signals from the two receiving antennas. The expression of the signal ratio is as follows:
$$SR(t) = \frac{R_{x1}(t)}{R_{x2}(t)} = \frac{H_{s1} + a_1(t)\, e^{j \frac{2\pi d(t)}{\lambda}}}{H_{s2} + a_2(t)\, e^{j \frac{2\pi (d(t) + \Delta s)}{\lambda}}} \tag{4}$$
where $R_{x1}(t)$ and $R_{x2}(t)$ are the signals at the two receiving antennas, $H_{s1}$ and $H_{s2}$ are the static components of the two received signals, $a_1(t)$ and $a_2(t)$ are the attenuation coefficients of the dynamic components of the two antenna signals, and $\Delta s$ is the distance between the two receiving antennas, which is much smaller than the signal path length $d(t)$.
The effect of the dynamic path change on the phase of the signal's dynamic component can be described as $\eta = e^{j \frac{2\pi d(t)}{\lambda}}$. Let $a = a_1(t)$ and $b = a_2(t)\, e^{j \frac{2\pi \Delta s}{\lambda}}$; then, the signal at the receiving end can be expressed as follows:
$$SR(\eta) = \frac{H_{s1} + a\eta}{H_{s2} + b\eta} \tag{5}$$
From Equation (5), it can be observed that the signal ratio is a fractional linear transformation of the original signal with respect to $\eta$; the data obtained through the signal ratio are thus free of the baseband effect.
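As an illustration of Equations (4) and (5), a minimal NumPy sketch of the antenna-ratio operation is given below; the input arrays are placeholders standing in for the complex I/Q streams recorded by the two receiving antennas, not real measurements.

```python
import numpy as np

def signal_ratio(rx1: np.ndarray, rx2: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Compute the complex ratio of the two receiving antennas (Equation (4));
    the common baseband term cancels in the division."""
    return rx1 / (rx2 + eps)   # eps guards against division by zero

# Usage sketch: rx1 and rx2 would be the complex samples saved by the two
# File Sink blocks; random data is used here purely as a placeholder.
rx1 = np.random.randn(1000) + 1j * np.random.randn(1000)
rx2 = np.random.randn(1000) + 1j * np.random.randn(1000)
sr = signal_ratio(rx1, rx2)
amplitude, phase = np.abs(sr), np.angle(sr)
```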

4. Methods

The proposed LoRa and deep learning-based activity recognition framework is shown in Figure 2: it contains a data acquisition module, a data processing module, a feature extraction module, and an activity classification module. The received data first pass through the data processing module to remove noise, the feature extraction module then extracts the amplitude and phase features, and finally the classification module identifies the activity to obtain the final result.

4.1. Data Pre-Processing and Feature Extraction

As an example, the signal amplitudes received by the two receiving antennas are shown in Figure 3. The two receiving antennas and the transmitting antenna are placed in parallel, facing the same direction. One antenna receives a stronger signal than the other: the amplitude received by antenna A in the figure is significantly stronger than that of antenna B.
The effect of the background noise is reduced by the signal-ratio method described in Section 3. After taking the ratio of the two received antenna signals shown in Figure 3, the result is shown in Figure 4, where it can be observed that most of the background noise has been eliminated and the activity waveform is clearly visible.
For the improved TPN backbone network proposed in this paper, we want to use as few input features as possible while achieving high model accuracy. We compared several common signal features and statistics (raw data, amplitude, phase, mean absolute deviation, variance, and first-order difference) as inputs to the improved TPN backbone network. As shown in Table 2, the highest precision is achieved when the features are only amplitude and phase; the precision column reports the precision of the improved TPN backbone network for each feature combination.
Unlike most end-to-end deep learning methods that process the raw data directly, this experiment extracts the amplitude and phase of the LoRa signal as the input to the model. The amplitude and phase features extracted from the signal in Figure 4 are shown in Figure 5a.
To further remove noise from the data, a Savitzky–Golay (S-G) filter is applied to smooth the data after the background effects have been removed. The filtered data are shown in Figure 5b; compared with Figure 5a, the signal in Figure 5b is clearly more apparent. According to the Nyquist sampling theorem, a signal can be restored without distortion when the sampling frequency is at least twice the highest frequency of the baseband signal. Since human motion frequencies are only around 0.1 Hz to 0.33 Hz, far below the signal frequency, a relatively low sampling frequency suffices to capture the effect of human activity on the wireless signal while reducing the amount of data. Based on previous work [18,32] and considering the amount of data to be processed, we set the sampling rate to 900 kHz. Each recorded file is configured with a data length of $5 \times 10^6$ points, giving $2 \times 5 \times 10^6$ data points per sample across the amplitude and phase dimensions. To make the sample waveform smoother and improve training efficiency, the data are compressed into a $2500 \times 2$ form by averaging every 2000 data points; the resulting data format is therefore the number of samples × data points × the number of features.
Figure 6 is a plot of the data waveform after averaging every 2000 points. After a series of processing, the active signal data is much smoother, removing most of the noise while retaining the characteristic waveform of the activity.
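A minimal sketch of this pre-processing pipeline is given below, assuming SciPy's savgol_filter for the S-G smoothing; the filter window, polynomial order, and the phase-unwrapping step are assumptions, since the paper does not state them.

```python
import numpy as np
from scipy.signal import savgol_filter

def preprocess(rx1: np.ndarray, rx2: np.ndarray,
               window: int = 501, polyorder: int = 3,
               block: int = 2000) -> np.ndarray:
    """Turn the raw complex streams of the two antennas into one 2500 x 2 sample
    of [amplitude, phase], following the pipeline described above."""
    sr = rx1 / rx2                                           # signal ratio removes the baseband term
    amp = savgol_filter(np.abs(sr), window, polyorder)       # S-G smoothing of amplitude
    pha = savgol_filter(np.unwrap(np.angle(sr)), window, polyorder)  # and of (unwrapped) phase
    # Average every `block` points: 5e6 points -> 2500 points per feature.
    n = (len(amp) // block) * block
    amp = amp[:n].reshape(-1, block).mean(axis=1)
    pha = pha[:n].reshape(-1, block).mean(axis=1)
    return np.stack([amp, pha], axis=1)                      # shape: (2500, 2)
```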

4.2. The Proposed Classification Deep Learning Module

Most temporal classification tasks use LSTM models [33]; however, even after data smoothing and filtering, our sequences are still 2500 points long, which makes LSTM ill-suited to the task. LSTM also parallelizes poorly, so computation becomes time-consuming when the network is deep. Moreover, LoRa nodes are edge devices with limited CPU, memory, and other resources; a high-complexity model is therefore unsuitable for LoRa-based activity recognition, which calls for a more lightweight model.
We propose to use the TPN backbone network as the activity classification model. The TPN backbone network has a lightweight structure and has achieved good results in many sensor-based activity recognition studies [22,23,34]. As illustrated in Figure 7, the TPN backbone network starts with three identical blocks, each containing a 1D convolutional layer, a ReLU, and a dropout layer, followed by a global max pooling layer and a fully connected layer for classification.
To better exploit the feature information of LoRa signals at spatial and temporal scales while preserving the characteristics of the TPN backbone network, we introduce the ECA-Net (Efficient Channel Attention) [24] attention module. Given the aggregated features obtained by global average pooling, ECA generates channel weights by performing a fast 1D convolution of size k, where k is adaptively determined via a mapping of the channel dimension C, and a sigmoid activation function follows the one-dimensional convolutional layer. We insert ECA-Net after the last convolutional block of the TPN backbone network to improve its stability. ECA-Net avoids data degradation, enables cross-channel interaction, and has a lightweight structure, preserving the original efficiency of the TPN backbone network while improving the overall network effectiveness.
As shown in Figure 8, the improved TPN backbone network contains three 1D convolutional layers consisting of 16, 32, and 64 feature maps with kernel sizes of 48, 32, and 16, respectively, and has a stride of 1. Dropout is used after each of the layers with a rate of 0.1. ECA-Net is inserted after three 1D convolutional layers, with global average pooling followed by a fast 1D convolution layer consisting of one feature map with a kernel size of k, where k is adaptively determined via a mapping of channel dimension C. Moreover, a sigmoid activation function is used after the one-dimensional convolutional layer. Global max pooling is used after the ECA-Net to aggregate high-level discriminative features. Moreover, the output layer comprises a fully-connected layer of 64 hidden units followed by normalization with an std of 0.01 for classification. We use ReLU as non-linearity in all the convolutional layers (except the output and ECA-Net) and train a network with the Adam optimizer [23] for a maximum of 500 epochs, with a learning rate of 0.0003 unless stated otherwise. All model parameters were based on the TPN backbone network and obtained by experimental tuning.
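The following PyTorch sketch illustrates an improved TPN backbone of this shape, with ECA implemented as global average pooling, a fast 1D convolution of adaptive kernel size, and a sigmoid. Hyper-parameters quoted above (filters 16/32/64, kernels 48/32/16, dropout 0.1, six classes, Adam with a learning rate of 0.0003) are used directly; everything else, such as the ECA mapping constants and the exact output head, is an assumption rather than the authors' exact implementation.

```python
import math
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention [24]: global average pooling, a fast 1D
    convolution of adaptive kernel size k, and a sigmoid re-weighting."""
    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        t = int(abs((math.log2(channels) + b) / gamma))   # adaptive kernel size
        k = t if t % 2 else t + 1                         # force k to be odd
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                 # x: (batch, channels, length)
        w = x.mean(dim=-1, keepdim=True)  # global average pooling -> (B, C, 1)
        w = self.conv(w.transpose(-1, -2)).transpose(-1, -2)  # conv across channels
        return x * self.sigmoid(w)        # channel re-weighting

class ImprovedTPN(nn.Module):
    """Sketch of the improved TPN backbone (TPN-ECA) described above."""
    def __init__(self, in_channels: int = 2, num_classes: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=48, stride=1), nn.ReLU(), nn.Dropout(0.1),
            nn.Conv1d(16, 32, kernel_size=32, stride=1), nn.ReLU(), nn.Dropout(0.1),
            nn.Conv1d(32, 64, kernel_size=16, stride=1), nn.ReLU(), nn.Dropout(0.1),
            ECA(64),                       # attention inserted after the last conv block
        )
        self.classifier = nn.Linear(64, num_classes)   # fully connected output layer

    def forward(self, x):                  # x: (batch, 2500, 2)
        x = self.features(x.transpose(1, 2))
        x = x.max(dim=-1).values           # global max pooling over the time axis
        return self.classifier(x)

model = ImprovedTPN()
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)   # Adam, lr = 0.0003
```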

5. Experiment

This section first introduces the software and hardware environments of the experiment, as well as the data acquisition process and data set, and then conducts the experiments and evaluates the results under different conditions.

5.1. Experimental Environment

The main equipment of the LoRa wireless signal transceiver experimental platform is shown in Figure 9; it comprises a signal transmitter, a signal receiver, and a back-end processing computer. The signal transmitter consists of an Arduino Uno R3 [35], an SX1276 LoRa node, and one directional transmitting antenna; the signal receiver consists of a USRP (Universal Software Radio Peripheral) B210 [36] and two directional receiving antennas. The signal receiver is connected to the back-end processing computer, which runs GNU Radio [37] to receive data from the USRP.
LoRa modulation uses different types of physical-layer packets, of different durations, parametrized by the so-called Spreading Factor (SF), which can take values from 7 to 12; the higher the SF, the longer the packet lasts and the more reliable its reception. The coding rate (CR) is the ratio of useful information bits to the total number of transmitted bits after coding. The LoRa device in this experiment uses the defaults of SF = 7 and CR = 4/5. According to the LoRaWAN protocol, the 915 MHz band with a 125 kHz bandwidth provides 64 channels, and because legal requirements in China limit the duty cycle of this band to below 1%, we use a duty cycle of 1%. Due to its operating characteristics, LoRa is suited to applications that do not require continuous packet transmission and is less suited to harsh environments or to covering fast-moving transmitters. Since our experiments do not depend on the packet content, each packet carries the default 'hello' string, and the LoRa signal transmitter sends packets at an interval of 1 s.
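As a quick sanity check on these settings, the standard LoRa relations give the symbol duration and raw bit rate implied by SF = 7, a 125 kHz bandwidth, and CR = 4/5; the short sketch below simply evaluates those formulas.

```python
# Standard LoRa relations: T_sym = 2^SF / BW and R_b = SF * (BW / 2^SF) * CR.
SF = 7            # spreading factor
BW = 125e3        # bandwidth in Hz
CR = 4 / 5        # coding rate

symbol_duration = (2 ** SF) / BW          # ~1.024 ms per symbol
bit_rate = SF * (BW / 2 ** SF) * CR       # ~5.47 kbit/s raw data rate

print(f"T_sym = {symbol_duration * 1e3:.3f} ms, R_b = {bit_rate / 1e3:.2f} kbit/s")
```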
The transmitter and receiver antennas are placed as shown in Figure 10b, 80 cm apart and side by side facing the same direction, and two File Sink modules are specified in GNU Radio to save the data received by the two receiving antennas. The signal transmitter and receiver are configured as follows: the LoRa signal center frequency is 915 MHz, the bandwidth is 125 kHz, the antennas cover 850–960 MHz, and the antenna gain is 6 dBi.
The back-end processing computer receives the USRP data through GNU Radio, in which the receiving flow graph is configured with parameters such as the sampling frequency, center frequency, and signal bandwidth. Figure 11 shows the configured receiving flow graph.
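A minimal sketch of such a two-channel receiving flow graph, written against GNU Radio's Python API, might look as follows; the file names and gain value are placeholders, and this is an assumed reconstruction rather than the authors' actual flow graph.

```python
from gnuradio import gr, blocks, uhd

class LoRaReceiver(gr.top_block):
    def __init__(self):
        gr.top_block.__init__(self, "LoRa dual-antenna receiver")
        # Two-channel USRP B210 source delivering complex float samples.
        self.usrp = uhd.usrp_source(
            ",".join(("", "")),
            uhd.stream_args(cpu_format="fc32", args="", channels=[0, 1]),
        )
        self.usrp.set_samp_rate(900e3)              # sampling rate used in the paper
        for ch in (0, 1):
            self.usrp.set_center_freq(915e6, ch)    # LoRa center frequency
            self.usrp.set_gain(30, ch)              # receiver gain, placeholder value
        # One File Sink per receiving antenna, as in the paper's flow graph.
        self.sink0 = blocks.file_sink(gr.sizeof_gr_complex, "antenna_a.dat", False)
        self.sink1 = blocks.file_sink(gr.sizeof_gr_complex, "antenna_b.dat", False)
        self.connect((self.usrp, 0), self.sink0)
        self.connect((self.usrp, 1), self.sink1)

if __name__ == "__main__":
    tb = LoRaReceiver()
    tb.start()
    input("Recording; press Enter to stop...")
    tb.stop()
    tb.wait()
```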
The hardware configuration for model training, validation, and testing is: an Intel Core i9-10900KF CPU with 64 GB RAM; an NVIDIA GeForce RTX 3090 GPU; Ubuntu 20.04.3 LTS; and a code environment of Python 3.8.8 + CUDA 11.2 + PyTorch 1.8.2.

5.2. Data Set

The volunteers for this experiment were eight people aged between 22 and 26 years: six men and two women. We set up six activity scenarios: standing, walking, jogging, squatting, picking up, and empty. All activities were collected in the same laboratory; the laboratory layout and experimental equipment deployment are shown in Figure 10. The test rules for the six activities are as follows: (1) the walking and jogging activities were performed by volunteers along the path shown by the dotted line between the two circles in Figure 10a; (2) for squatting, picking, and standing, volunteers stood at the triangular position (4 m from the transmitting antenna) in Figure 10a; (3) the empty activity serves as reference data collected with no tester present. Figure 12 shows the waveforms of the wireless signal amplitude and phase data for the different activity scenarios.
A total of 2392 data samples were collected, each containing one of the six specified activities. Table 3 lists the number and percentage of samples for each activity in the data set.
The data set is split into a training set and a test set in a ratio of 8:2.
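A minimal sketch of this split, using scikit-learn's train_test_split on placeholder arrays shaped like the collected data set, is shown below; the stratification, batch size, and random data are assumptions used only for illustration.

```python
import numpy as np
import torch
from sklearn.model_selection import train_test_split

# Placeholder arrays standing in for the collected data set:
# 2392 samples of shape (2500, 2) with integer labels for the six activities.
X = np.random.randn(2392, 2500, 2).astype(np.float32)
y = np.random.randint(0, 6, size=2392)

# 8:2 split as described above (stratified so the class balance is preserved).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

train_ds = torch.utils.data.TensorDataset(
    torch.from_numpy(X_train), torch.from_numpy(y_train).long()
)
train_loader = torch.utils.data.DataLoader(train_ds, batch_size=64, shuffle=True)
```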

5.3. Comparison of Training Effect of Improved TPN Backbone Network with Raw Data and Feature Extraction Data

We compare training under two data feature methods: the first uses the I/Q values of the original signal as data features, and the second extracts the signal amplitude and phase from the original signal as data features. Figure 13 shows the training results of the improved TPN backbone network under these two feature methods, where the Atest metrics correspond to the amplitude and phase features and the Btest metrics correspond to the I/Q values of the original data.
It can be observed from Figure 13 that the model training results with amplitude/phase features are better than those with I/Q features. The accuracy, recall, F1-score, and precision on the test set are shown in Table 4 and Figure 14.
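The per-class metrics of the kind reported in Table 4 can be obtained from the model predictions with scikit-learn's classification_report; the arrays below are placeholders standing in for the true and predicted labels of the 479 test samples.

```python
import numpy as np
from sklearn.metrics import classification_report

activities = ["Jogging", "Walking", "Pickup", "Squat", "Stand", "Empty"]

# Placeholder labels standing in for the 479 test samples and the model outputs.
y_true = np.random.randint(0, 6, size=479)
y_pred = np.random.randint(0, 6, size=479)

# Reports precision, recall, F1-score, and support per activity, plus accuracy.
print(classification_report(y_true, y_pred, target_names=activities))
```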

5.4. Comparison of the Training Effect of the Improved TPN Backbone Network with Other Models

To further validate the effectiveness of the proposed improved TPN backbone network, deep learning models such as GRU, LSTM, BiLSTM, and DeepConvLSTM, as well as several common machine learning models such as KNN, SVM, and decision trees, were also trained and tested. The accuracy of these deep learning and machine learning models on the test set is shown in Table 5, and the primary metrics of the comparison models are shown in Figure 15. The improved TPN backbone network achieved the best activity recognition results. Both the improved TPN backbone network and the original TPN backbone network exceeded 0.95 accuracy, indicating good classification performance for all six activities. Furthermore, the improved TPN backbone network reaches an accuracy of 0.97, higher than that of the TPN backbone network, and shows less volatility across multiple experiments, demonstrating better classification performance.
Relative to the comparison models in this experiment, the improved TPN backbone network adds an ECA attention module to the TPN backbone network, improving activity recognition accuracy while preserving the model's lightweight characteristics. The accuracies of the above nine models on the test set are shown in Table 5; among them, the improved TPN backbone network gives the best result, achieving 97% test accuracy.

6. Conclusions

In this paper, we have explored LoRa-based wireless activity recognition. We built a LoRa wireless activity test environment and proposed a lightweight, improved TPN backbone network suitable for LoRa devices. Unlike most approaches that feed raw data directly into the model, we propose using the amplitude and phase as the model's input to improve the recognition effect without increasing the data complexity. A total of 2392 data samples collected for six activities were used for training and validation. The test accuracy reached 97%, which is significantly better than that of traditional machine learning models and deep learning models such as GRU. In the future, we will extend LoRa wireless activity recognition to scenarios where multiple people coexist and study the recognition of different people's activities.

Author Contributions

Conceptualization, Y.X. and M.N.; methodology, Y.X.; software, Y.X.; validation, Y.X. and M.N.; formal analysis, Y.X. and M.N.; investigation, Y.X. and M.N.; resources, M.N., T.Z. and Z.L.; data curation, Y.X.; writing—original draft preparation, Y.X.; writing—review and editing, T.Z., M.N. and Y.X.; visualization, Y.X.; supervision, M.N., T.Z. and Z.L.; funding acquisition, M.N., T.Z., Y.C. and C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work is partly supported by the National Natural Science Foundation of China under Grant No. 62006110, the Natural Science Foundation of Hunan Province under Grant No. 2021JJ30574, the Research Foundation of Education Bureau of Hunan Province under Grant No. 21B0424 and is partly supported by the Hengyang Science and Technology Major Project under Grant No. 202250015428.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data contained in this article are also available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Huang, A.; Wang, D.; Zhao, R.; Zhang, Q. Au-id: Automatic user identification and authentication through the motions captured from sequential human activities using rfid. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2019, 3, 1–26. [Google Scholar] [CrossRef]
  2. Liu, C.; Xiong, J.; Cai, L.; Feng, L.; Chen, X.; Fang, D. Beyond respiration: Contactless sleep sound-activity recognition using RF signals. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2019, 3, 1–22. [Google Scholar] [CrossRef]
  3. Chen, L.; Xiong, J.; Chen, X.; Lee, S.I.; Zhang, D.; Yan, T.; Fang, D. LungTrack: Towards contactless and zero dead-zone respiration monitoring with commodity RFIDs. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2019, 3, 1–22. [Google Scholar] [CrossRef] [Green Version]
  4. Cao, R.; Yang, X.; Zhou, M.; Xie, L. Device-Free Human Activity Recognition Based on Channel Statement Information. In Proceedings of the International Conference in Communications, Signal Processing, and Systems, Changbaishan, China, 22–23 October 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 835–838. [Google Scholar]
  5. Zhang, F.; Niu, K.; Xiong, J.; Jin, B.; Gu, T.; Jiang, Y.; Zhang, D. Towards a diffraction-based sensing approach on human activity recognition. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2019, 3, 1–25. [Google Scholar] [CrossRef]
  6. Guo, X.; Liu, J.; Shi, C.; Liu, H.; Chen, Y.; Chuah, M.C. Device-free personalized fitness assistant using WiFi. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2018, 2, 1–23. [Google Scholar] [CrossRef]
  7. Zeng, Y.; Wu, D.; Xiong, J.; Yi, E.; Gao, R.; Zhang, D. FarSense: Pushing the range limit of WiFi-based respiration sensing with CSI ratio of two antennas. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2019, 3, 1–26. [Google Scholar] [CrossRef] [Green Version]
  8. Li, S.; Liu, Z.; Zhang, Y.; Lv, Q.; Niu, X.; Wang, L.; Zhang, D. WiBorder: Precise Wi-Fi based boundary sensing via through-wall discrimination. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2020, 4, 1–30. [Google Scholar] [CrossRef]
  9. Zeng, Y.; Wu, D.; Xiong, J.; Liu, J.; Liu, Z.; Zhang, D. MultiSense: Enabling multi-person respiration sensing with commodity wifi. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2020, 4, 1–29. [Google Scholar] [CrossRef]
  10. Venkatnarayan, R.H.; Shahzad, M.; Yun, S.; Vlachou, C.; Kim, K.H. Leveraging Polarization of WiFi Signals to Simultaneously Track Multiple People. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2020, 4, 1–24. [Google Scholar] [CrossRef]
  11. Wu, D.; Gao, R.; Zeng, Y.; Liu, J.; Wang, L.; Gu, T.; Zhang, D. FingerDraw: Sub-wavelength level finger motion tracking with WiFi signals. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2020, 4, 1–27. [Google Scholar] [CrossRef] [Green Version]
  12. Li, X.; Zhang, D.; Xiong, J.; Zhang, Y.; Li, S.; Wang, Y.; Mei, H. Training-free human vitality monitoring using commodity Wi-Fi devices. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2018, 2, 1–25. [Google Scholar] [CrossRef]
  13. Wang, T.; Zhang, D.; Zheng, Y.; Gu, T.; Zhou, X.; Dorizzi, B. C-FMCW based contactless respiration detection using acoustic signal. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2018, 1, 1–20. [Google Scholar] [CrossRef]
  14. Zhang, F.; Wang, Z.; Jin, B.; Xiong, J.; Zhang, D. Your smart speaker can “hear” your heartbeat! Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2020, 4, 1–24. [Google Scholar] [CrossRef]
  15. Hou, Y.; Wang, Y.; Zheng, Y. TagBreathe: Monitor breathing with commodity RFID systems. In Proceedings of the 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS), Atlanta, GA, USA, 5–8 June 2017; pp. 404–413. [Google Scholar]
  16. Zeng, Y.; Wu, D.; Gao, R.; Gu, T.; Zhang, D. FullBreathe: Full human respiration detection exploiting complementarity of CSI phase and amplitude of WiFi signals. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2018, 2, 1–19. [Google Scholar] [CrossRef]
  17. Adib, F.; Mao, H.; Kabelac, Z.; Katabi, D.; Miller, R.C. Smart homes that monitor breathing and heart rate. In Proceedings of the 33rd annual ACM Conference on Human Factors in Computing Systems, Seoul, Republic of Korea, 18–23 April 2015; pp. 837–846. [Google Scholar]
  18. Zhang, F.; Chang, Z.; Xiong, J.; Zheng, R.; Ma, J.; Niu, K.; Jin, B.; Zhang, D. Unlocking the beamforming potential of lora for long-range multi-target respiration sensing. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2021, 5, 1–25. [Google Scholar] [CrossRef]
  19. Khan, I.U.; Afzal, S.; Lee, J.W. Human activity recognition via hybrid deep learning based model. Sensors 2022, 22, 323. [Google Scholar] [CrossRef]
  20. Andrade-Ambriz, Y.A.; Ledesma, S.; Ibarra-Manzano, M.A.; Oros-Flores, M.I.; Almanza-Ojeda, D.L. Human activity recognition using temporal convolutional neural network architecture. Expert Syst. Appl. 2022, 191, 116287. [Google Scholar] [CrossRef]
  21. Mohottala, S.; Samarasinghe, P.; Kasthurirathna, D.; Abhayaratne, C. Graph neural network based child activity recognition. arXiv 2022, arXiv:2212.09013. [Google Scholar]
  22. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
  23. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  24. Wang, Q.; Wu, B.; Zhu, P.; Li, P.; Zuo, W.; Hu, Q. ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
  25. Bouchard, K.; Maitre, J.; Bertuglia, C.; Gaboury, S. Activity recognition in smart homes using UWB radars. Procedia Comput. Sci. 2020, 170, 10–17. [Google Scholar] [CrossRef]
  26. Shah, S.A.; Fioranelli, F. Human activity recognition: Preliminary results for dataset portability using FMCW radar. In Proceedings of the 2019 International Radar Conference (RADAR), Toulon, France, 23–27 September 2019; pp. 1–4. [Google Scholar]
  27. Luvisotto, M.; Tramarin, F.; Vangelista, L.; Vitturi, S. On the use of LoRaWAN for indoor industrial IoT applications. Wirel. Commun. Mob. Comput. 2018, 2018, 3982646. [Google Scholar] [CrossRef] [Green Version]
  28. Lin, K.; Hao, T. Experimental link quality analysis for LoRa-based wireless underground sensor networks. IEEE Internet Things J. 2020, 8, 6565–6577. [Google Scholar] [CrossRef]
  29. Davcev, D.; Mitreski, K.; Trajkovic, S.; Nikolovski, V.; Koteli, N. IoT agriculture system based on LoRaWAN. In Proceedings of the 2018 14th IEEE International Workshop on Factory Communication Systems (WFCS), Imperia, Italy, 13–15 June 2018; pp. 1–4. [Google Scholar]
  30. Magrin, D.; Centenaro, M.; Vangelista, L. Performance evaluation of LoRa networks in a smart city scenario. In Proceedings of the 2017 IEEE International Conference on communications (ICC), Paris, France, 21–25 May 2017; pp. 1–7. [Google Scholar]
  31. Di Renzone, G.; Parrino, S.; Peruzzi, G.; Pozzebon, A. LoRaWAN in motion: Preliminary tests for real time low power data gathering from vehicles. In Proceedings of the 2021 IEEE International Workshop on Metrology for Automotive (MetroAutomotive), Bologna, Italy, 1–2 July 2021; pp. 232–236. [Google Scholar]
  32. Zhang, F.; Chang, Z.; Niu, K.; Xiong, J.; Jin, B.; Lv, Q.; Zhang, D. Exploring lora for long-range through-wall sensing. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2020, 4, 1–27. [Google Scholar] [CrossRef]
  33. Wang, L.; Liu, R. Human activity recognition based on wearable sensor using hierarchical deep LSTM networks. Circuits Syst. Signal Process. 2020, 39, 837–856. [Google Scholar] [CrossRef]
  34. Tang, C.I.; Perez-Pozuelo, I.; Spathis, D.; Mascolo, C. Exploring contrastive learning in human activity recognition for healthcare. arXiv 2020, arXiv:2011.11542. [Google Scholar]
  35. Arduino. 2015. Available online: https://store-usa.arduino.cc/products/arduino-uno-rev3/?selectedStore=us (accessed on 1 January 2023).
  36. USRP B210. 2013. Available online: https://www.ettus.com/all-products/UB210-KIT/ (accessed on 1 January 2023).
  37. GNURadio. 2006. Available online: https://www.gnuradio.org/ (accessed on 1 January 2023).
Figure 1. Amplitude of walk and pickup activity.
Figure 2. The proposed LoRa and deep learning-based activity recognition framework.
Figure 3. The signal amplitude received by the two receiving antennas.
Figure 4. The signal ratio of the two receiving antennas.
Figure 5. Waveforms of data before and after filtering of amplitude and phase.
Figure 6. The data waveform after averaging the filtered signals of amplitude and phase.
Figure 7. TPN backbone network structure.
Figure 8. The improved TPN backbone network structure.
Figure 9. Experiment apparatus.
Figure 10. Experimental site layout diagram.
Figure 11. Flow chart of the signal received by GNU Radio.
Figure 12. Waveforms of amplitude and phase data for different activities.
Figure 13. Training process of raw data and amplitude phase.
Figure 14. The main metrics of the improved TPN backbone network based on amplitude and phase.
Figure 15. Main metrics of the experimental model.
Table 1. Several common wireless protocol parameters.

Wireless Protocol | Sensing Range | Deployment | Effect of Through-Wall
RFID [15]         | 4 m           | General    | General
Wi-Fi [16]        | 3.7 m         | Easy       | Bad
FMCW radar [17]   | 8 m           | Easy       | Bad
LoRa [18]         | 25 m          | Easy       | Great
Table 2. Recognition effects of different features and statistics.

Candidate input features: Raw Data, Amplitude, Phase, Mean Absolute Deviation, Variance, First-Order Difference. Precision of the improved TPN backbone network for the tested feature combinations: 0.92, 0.89, 0.92, 0.55, 0.51, 0.89, 0.94, 0.86, 0.92, 0.93, 0.91, 0.97, 0.90, 0.91, 0.91, 0.94, 0.91, 0.91, 0.64, 0.85, 0.88 (the best value, 0.97, corresponds to the amplitude + phase combination).
Table 3. Data distribution.

Activity Categories | Number of Samples | Percentage
Stand   | 399 | 16.6%
Walking | 400 | 16.7%
Jogging | 400 | 16.7%
Squat   | 394 | 16.5%
Pickup  | 399 | 16.6%
Empty   | 400 | 16.7%
Table 4. The main metrics of the improved TPN backbone network based on amplitude and phase.

Activity Categories | Precision | Recall Rate | F1-Score | Support
Jogging  | 1.00 | 0.91 | 0.95 | 80
Walking  | 0.92 | 1.00 | 0.96 | 80
Pickup   | 0.99 | 0.91 | 0.95 | 78
Squat    | 0.94 | 0.99 | 0.96 | 83
Stand    | 0.97 | 0.99 | 0.98 | 78
Empty    | 0.99 | 1.00 | 0.99 | 80
Accuracy |      |      | 0.97 | 479
Table 5. Activity classification precision of all experimental models.

Models | Precision
KNN | 0.70
SVM | 0.61
Decision Tree | 0.63
LSTM | 0.41
BiLSTM | 0.54
GRU | 0.94
DeepConvLSTM | 0.60
TPN Backbone Network | 0.95
TPN-ECA | 0.97