Recent Progress in Sensing and Computing Techniques for Human Activity Recognition and Motion Analysis

Abstract: The recent scientific and technical advances in Internet of Things (IoT) based pervasive sensing and computing have created opportunities for the continuous monitoring of human activities for different purposes. The topic of human activity recognition (HAR) and motion analysis, due to its potential in human-machine interaction (HMI), medical care, sports analysis, physical rehabilitation, assisted daily living (ADL), and children and elderly care, has recently gained increasing attention. The emergence of novel sensing devices featuring miniature size, light weight, and wireless data transmission, the availability of wireless communication infrastructure, the progress of machine learning and deep learning algorithms, and the widespread IoT applications have promised new opportunities for significant progress in this particular field. Motivated by the great demand for HAR-related applications and the lack of a timely report of the recent contributions to knowledge in this area, this investigation aims to provide a comprehensive survey and in-depth analysis of the recent advances in the diverse techniques and methods of human activity recognition and motion analysis. The focus of this investigation falls on the fundamental theories, the innovative applications with their underlying sensing techniques, data fusion and processing, and human activity classification methods. Based on the state of the art, the technical challenges are identified, and future perspectives on the sensing-rich, intelligent IoT world of the future are given in order to provide a reference for research and practice in the related fields.


Introduction
In the future information systems featuring rich sensing, wireless inter-connection, and data analytics, which are usually denoted as Internet of Things (IoT) and cyber-physical systems (CPS) [1,2], human motions and activities can be recognized to perform intelligent actions and recommendations. The monitoring, recognition, and in-depth analysis of human gesture, posture, gait, motion, and other daily activities have received growing attention in a number of application domains, and significant progress has been made, owing to the latest technology development and application demands [3]. As a result, new sensing techniques and mathematical methods are dedicated to the numerical analysis of the posture and activities of human body parts, and plenty of related studies are found in the literature. These sensing techniques and mathematical methods play an important role in guaranteeing the efficiency and accuracy of the recognition and analysis of human activities.
Admittedly, due to the potential in the different application fields, there has been a great demand for accurate human activity recognition techniques that can lead to the convenience, efficiency, and intelligence of information systems, new opportunities for medical treatment, more convenient daily living assistance, etc. However, a timely report on recent technical advances with an in-depth analysis of the underlying technical challenges and future perspectives is lacking. Motivated by both the great demand for a more efficient interaction between humans and information systems and the lack of investigations into the new contributions to knowledge in the field, this paper aims to provide a comprehensive survey and in-depth analysis of the recent advances in the diverse techniques and methods for human activity recognition and motion analysis.
The rest of this paper is organized as follows: Section 2 summarizes the innovative applications regarding human activity recognition and motion analysis; Section 3 illustrates the fundamentals, including the common methodology, the modeling of human body parts, and identifiable human activities. Sections 4 and 5 present the novel sensing techniques and mathematical methods, respectively. Then, Section 6 gives the underlying technical challenges and future perspectives, followed by Section 7, which concludes the work.

Latest Progress in HAR-Related Applications
The past few decades have witnessed unprecedented prosperity of electronics techniques and information systems, which have resulted in a revolutionary development in almost all aspects of technological domains, including aeronautics and astronautics, automotive industry, manufacturing and logistics, consumer electronics and entertainment, etc. Human activity recognition and motion analysis, due to its potential in wide areas of applications, has attracted much research interest and made remarkable progress in recent years. This section gives an overview of the latest technical progress and a summary of the application domains of human activity recognition and motion analysis.

Overview of the Latest Technical Progress
An overview of the latest technical progress of human activity recognition is given in Figure 1. The technical progress of HAR is found to focus on the following aspects: new sensing devices and methods, innovative mathematical methods, novel networking and computing paradigms, emerging consumer electronics, and convergence with different subject areas.

1.
New sensing devices and methods: The acquisition of raw data is the first step for accurate and effective activity recognition. The cost reduction of electronic devices has accelerated the adoption of pervasive sensing and computing systems, such as location and velocity tracking [32]. On account of the sensing techniques, many techniques that were previously impractical due to their cost, size, or technical readiness are now introduced for human activity related studies in addition to the traditional video cameras (including depth cameras): FMCW radar, CW-Doppler radar, WiFi, ultrasound, radio frequency identification (RFID), and wearable IMU and electromyography (EMG) sensors, etc. [15-18]. Among the above candidates, FMCW radar, CW-Doppler radar, WiFi, ultrasound, and RFID are NLOS and contactless. Wearable IMU is NLOS and body-worn, so its applications are not limited to specific areas. Micro-electro-mechanical system (MEMS) IMUs, due to their advantages of low power, low cost, and miniature size, as well as their rich sensing output, have become a dominant technical approach for HAR studies [33,34].

2.
Innovative mathematical methods: To take advantage of the sensor data, appropriate modeling of human body part activities, the pre-processing of raw data, feature extraction, classification, and application-oriented analysis are pivotal enablers for the success of HAR-related functions. With respect to pre-processing, a number of lightweight, time-domain filtering algorithms, which are appropriate for resource-limited computing, including the Kalman filter (KF), extended Kalman filter (EKF), unscented Kalman filter (UKF), and Mahony complementary filter, are common alternatives for wearable electronics [35,36]. In terms of feature extraction, the time-domain and frequency-domain features, including the mean, deviation, zero crossing rate, and statistical characteristics of amplitude and frequency, are the most fundamental parameters [37]. The fast Fourier transform (FFT), as a classical algorithm, is the main solution for frequency-domain feature analysis. With respect to classification, much effort in classical machine learning, including DT, BN, PCA, SVM, KNN, and ANN, and in deep learning methods, including CNN and RNN, has been devoted to this particular area in recent years [25,38].

3.
Novel networking and computing paradigms: The sensing and computing platforms are crucial factors for effective human activity recognition. For visual-based solutions, high-performance graphics processing units (GPUs) have paved the way for the intensive computations in HAR [39]. For wearable sensing solutions, many new pervasive and mobile networking and computing techniques for battery-powered electronics have been investigated. Various wireless networking and communication protocols, including Bluetooth, Bluetooth low energy (BLE), Zigbee, and WiFi, were introduced for building a body area network (BAN) for HAR [40]. New computation paradigms, such as mobile computing and wearable computing, were proposed to handle the location-free and resource-limited computing scenarios, and sometimes the computation is conducted on an 8-bit or 16-bit micro-controller unit (MCU) [41]. The novel networking and computing paradigms customized for HAR are more efficient and flexible for the related investigation and practices.

4.
Emerging consumer electronics for HAR: Further evidence of the progress in HAR is the usage of emerging HAR consumer electronics, including Kinect, Fibaro, the Mi band, and Fitbit. The Kinect-based somatosensory game is a typical use case of recognizing human motion as an input for human-computer interactions [42]. Fibaro detects human motion and changes in location for smart home applications [43]. Some wearable consumer electronics, such as the Mi band and Fitbit, can provide users with health and motion parameters, including heartbeats, intensity of exercises, walking or running steps, sleeping quality evaluation, etc. [44,45]. In addition, there are electronic devices such as the MAX25205 (Maxim, San Jose, USA) used for gesture sensing in automotive applications using an IR-based sensor, where hand swipe gestures, finger movements, and hand rotation can be recognized [46]. These devices have stepped into people's daily lives to help people better understand their health and physical activities, or to perform automatic control and intelligent recommendation via HAR.

5.
Convergence with different subject areas: The HAR techniques were also found to be converging with many other subject areas, thus continually allowing new applications to be created. Typically, HAR is merging with medical care, and this has resulted in medical treatment and physical rehabilitation methods for diseases such as stroke and Parkinson's disease [10,12]. HAR has also been introduced for sports analysis for the purpose of enhancing athletes' performance [7,8]. HAR-assisted daily living is another field of application, where power consumption management, home appliance control, and intelligent recommendations can be implemented to customize the living environment to people's preferences [2,47].
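The lightweight time-domain filtering mentioned in item 2 can be illustrated with a single-axis complementary filter, which blends the integrated gyroscope rate with an accelerometer-derived tilt angle. This is a minimal sketch: the blending gain, sensor values, and update rate are illustrative assumptions, not parameters from the cited works.

```python
import math

def complementary_filter(angle, gyro_rate, acc_angle, dt, alpha=0.98):
    """One update step of a complementary filter for a single tilt angle.

    angle:     previous angle estimate (rad)
    gyro_rate: angular velocity from the gyroscope (rad/s)
    acc_angle: tilt angle computed from the accelerometer (rad)
    alpha:     blending gain (illustrative value)
    """
    # High-pass the integrated gyro rate, low-pass the accelerometer angle.
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * acc_angle

def accel_tilt(ax, az):
    """Tilt angle about one axis from the gravity components (m/s^2)."""
    return math.atan2(ax, az)

# Example: stationary device, gravity along z, small constant gyro bias.
# The accelerometer keeps the estimate from drifting with the biased gyro.
angle = 0.0
for _ in range(100):
    angle = complementary_filter(angle, gyro_rate=0.01,
                                 acc_angle=accel_tilt(0.0, 9.81), dt=0.01)
```

With a pure gyro integration, the 0.01 rad/s bias would accumulate to 0.01 rad after one second; the accelerometer term bounds the drift instead.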

Widespread Application Domains
Driven by the substantial technical progress in the related fields, HAR has been extended to a wide spectrum of application domains. Since human involvement is the most critical part of many information systems, the introduction of HAR can potentially result in greater efficiency and intelligence in the interaction between humans and information systems, which also creates new opportunities for human body and human activity related studies and applications. The fields of application are summarized as follows:

1.
Human-machine interaction (HMI)-HAR is used to recognize the gesture, posture, or motion of human body parts and use the results as inputs for information systems, to perform efficient interactions or provide people with intelligent recommendations [48,49].

2.
Medical care/physical rehabilitation-HAR is used to record the motion trajectory of human body parts and conduct numerical analysis, and the results can be used as a reference for medical treatment [12,50].

3.
Sports analysis-HAR is used to record information such as the acceleration, angular velocity, velocity, etc., for numerical analysis of human body parts motions in order to evaluate the performance of sports and advise improvement strategies [7,51].

4.
Assisted daily living-HAR is used to recognize people's gestures and motions for home appliance control, or to analyze people's activities and habits for living environment customization and optimization.

5.
Children and elderly care-HAR is used to analyze children's or elderly people's activities, such as fall detection and disease recognition, and to perform assistance or report warnings to caregivers [52,53].

6.
Human intent analysis-HAR is used to analyze people's motions and activities in order to predict people's intents in a passive way for particular applications, such as intelligent recommendation, crime detection in public areas, etc. [54].
Although many new techniques and methods have been introduced in HAR for different applications and remarkable progress has been made, there is still large room for improving the performance of the methods and systems. More convenient and lightweight sensing devices and systems with powerful computation capabilities, more accurate classification algorithms, and application-oriented studies will continue to gain attention and play an increasingly important role in various information systems and in people's daily lives.

Fundamentals
In order to give a comprehensive understanding of the HAR techniques and methods, this section presents the fundamentals of HAR regarding the sensing devices and data processing techniques. Since there are many different sensing devices, pre-processing, and classification techniques involved, the abstract, common methodology covering the different techniques and methods is illustrated, and the correspondence between the identifiable human activities with the innovative applications is reported as well.

Common Methodology
In view of the sensing and processing techniques for HAR, the methods can be described with a common methodology as shown in Figure 2, which may be applicable for both optical tracking and wearable sensing techniques [55,56]. The common methodology consists of four steps: data acquisition and pre-processing, segmentation, feature extraction, and classification [57].

1.
Data acquisition and pre-processing-Motion sensors such as 3D cameras, IMU, IR-radar, CW radar, and ultrasound arrays, either mounted onto a fixed location or body-worn, are used to obtain raw data including human activity information, such as position, acceleration, velocity, angular velocity, and angles. In addition, various noise cancellation techniques, either time-domain or frequency-domain ones, need to be applied to optimize the quality of the signal for further processing.

2.
Segmentation-Human motion typically occurs over particular time spans, either occasionally or periodically. The signals need to be split into window segments for activity analysis, and the sliding window technique is usually used to handle activities that span segment boundaries.

3.
Feature extraction-The features of activities, including joint angle, acceleration, velocity and angular velocity, relative angle and position, etc., can be directly obtained for further analysis. There may also be indirect features that can be used for the analysis of particular activities.

4.
Classification/model application-The extracted features can be used by various machine learning and deep learning algorithms for the classification and recognition of human activities. Some explicit models, such as skeleton segment kinematic models, can be applied directly for decision-making.
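The four steps above can be sketched end-to-end in a few lines. The window size, the features, the threshold rule standing in for a trained classifier, and the synthetic accelerometer signal are all illustrative assumptions, not methods from any cited study.

```python
import numpy as np

def sliding_windows(signal, size, step):
    """Step 2: split a 1-D signal into overlapping window segments."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

def extract_features(window):
    """Step 3: basic time-domain features (mean, std, zero-crossing rate)."""
    centered = window - np.mean(window)
    zcr = np.mean(np.abs(np.diff(np.sign(centered))) > 0)
    return np.array([np.mean(window), np.std(window), zcr])

def classify(features, std_threshold=1.0):
    """Step 4 (toy rule): high variance -> 'active', otherwise 'still'."""
    return "active" if features[1] > std_threshold else "still"

# Step 1 (simulated acquisition): accelerometer magnitude at 50 Hz,
# 2 s of standing still followed by 2 s of periodic walking motion.
np.random.seed(0)
t = np.arange(0, 2, 0.02)
still = 9.81 + 0.05 * np.random.randn(len(t))
walking = 9.81 + 3.0 * np.sin(2 * np.pi * 2 * t) + 0.05 * np.random.randn(len(t))
stream = np.concatenate([still, walking])

labels = [classify(extract_features(w))
          for w in sliding_windows(stream, size=50, step=25)]
```

The half-window step (25 of 50 samples) is the overlap that lets the sliding window catch activities falling between segment boundaries.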

Modeling of Human Body Parts for HAR
Since the human body is a systematic whole, a primary step in using HAR is to build a mechanical and posture kinematic model for the motion of human body parts. Human skeleton joints are the key features for optical tracking based HAR methods, where markers are sometimes applied for human gesture and posture recognition as well as motion tracking [53,58]. Figure 3 shows a skeleton-based multi-section model for the hand and human body. For wearable IMU-based HAR, the multi-segment model of the human skeleton is also used as a basis to build a kinematic model of body segments. For example, the kinematic model of an elbow joint can be established with two segments connected by two revolute joints, allowing two degrees of freedom (DoF), as shown in Figure 4 [59]; the DoF kinematic model of the left leg, including a three-DoF ankle joint, a one-DoF knee joint, and a three-DoF hip joint, is presented in [60], and the kinematics of the ankle to the hip and trunk is discussed in [61]. With the multi-segment model and kinematic model, the posture and the motion of human body parts and the whole body can be estimated with the angle and position data obtained with sensor devices.
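As a minimal illustration of such a multi-segment kinematic model, the sketch below computes the wrist position of a planar two-segment arm from the shoulder and elbow joint angles. The planar simplification and the segment lengths are assumptions for illustration only.

```python
import math

def forward_kinematics(l1, l2, shoulder_angle, elbow_angle):
    """Planar two-segment arm: returns the (x, y) wrist position.

    l1, l2: upper-arm and forearm lengths (m); angles in radians,
    as in a standard planar two-link kinematic chain.
    """
    # Elbow position relative to the shoulder joint.
    ex = l1 * math.cos(shoulder_angle)
    ey = l1 * math.sin(shoulder_angle)
    # Wrist position: the forearm rotates by the sum of both joint angles.
    wx = ex + l2 * math.cos(shoulder_angle + elbow_angle)
    wy = ey + l2 * math.sin(shoulder_angle + elbow_angle)
    return wx, wy

# Fully extended arm along x: the wrist sits at l1 + l2 from the shoulder.
x_ext, y_ext = forward_kinematics(0.30, 0.25, 0.0, 0.0)
# Elbow flexed 90 degrees: the forearm points straight up.
x_flex, y_flex = forward_kinematics(0.30, 0.25, 0.0, math.pi / 2)
```

Inverting such a chain, i.e., estimating joint angles from measured segment orientations, is exactly what IMU-based posture estimation does with the angle data mentioned above.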

Identifiable Human Activities
Since human body parts are not rigid structures but instead deform and move in different ways following complex kinematic patterns, the activity parameters of these parts are very hard to measure. Many new technical solutions have been customized for human activity measurement, and remarkable progress has been made. The identifiable human activities are summarized in Table 1.

Table 1. Identifiable human activities and example applications.

Human Body Parts | Activities              | Example Applications
Hand             | Finger movement         | Rheumatoid arthritis treatment [11]
Hand             | Hand gesture            | Robot-assisted living [62]
Hand             | Hand movement           | Handwriting recognition [63]
Upper limb       | Forearm movement        | Home health care [64]
Upper limb       | Arm swing               | Golf swing analysis [7]
Upper limb       | Elbow angle             | Stroke rehabilitation [59]
Lower limb       | Gait analysis           | Gait rehabilitation training [10]
Lower limb       | Knee angles in movement | Risk assessment of knee injury [60]
Lower limb       | Ankle movement          | Hip and trunk kinematics [61]
Lower limb       | Stairs ascent/descent   | Human activity classification [65]
Spine            | Swimming                | Swimming motion evaluation [51]
Whole body       | Fall detection          | Elderly care [18]
Whole body       | Whole-body posture      | Bicycle riding analysis [66]
Whole body       | Whole-body movement     | Differentiating people [67]

As shown in Table 1, many human body parts and the human body as a whole can be used for activity recognition for different purposes, including the fingers, hand, arm, elbow, knee, ankle, and the whole body. Of course, the identifiable human activities and the corresponding applications are not limited to those listed in the table. Rapid technical progress has resulted in the continual addition of new possibilities and the creation of new applications in this particular field.

Novel Sensing Techniques
The appropriate measurement techniques and quality sensor data are the fundamentals for effective HAR and further applications. In this section, the sensing techniques for HAR are summarized with taxonomy and compared with proposed indexes. The sensor network solutions for wireless data collection and multi-sensor nodes for whole body monitoring are also discussed.

Taxonomy and Evaluation
According to the principles of sensor devices, the sensing techniques can be divided into six categories: optical tracking, radio frequency techniques, acoustic techniques, inertial measurement, force sensors, and EMG sensors. Each of the above categories may include a couple of different sensing techniques that may exhibit different performances in dealing with HAR applications. The feasibility of sensing techniques for HAR can be evaluated with five indicators: convenience of use, application scenarios, richness of information, the quality of raw data, and cost. The discussion and analysis of the sensing techniques are conducted with the abovementioned categories and evaluation indicators, as shown in Figure 5.


Sensing Techniques with Example Applications
To give a comprehensive overview and in-depth discussion of the sensing techniques for HAR and related studies, the sensing techniques of the five types and the corresponding typical applications are summarized in Table 2. Table 2. Sensing techniques with typical applications.

1.
Optical tracking-Optical tracking with one or a few common cameras is the most traditional way of human activity recognition. The temporal and spatial features of the human body and its activities can be extracted with image processing algorithms, and classification based on these features can be carried out with machine learning or deep learning techniques. This method is competitive in accuracy and has wide application domains, including entertainment, industrial surveillance, public security, etc. With the progress in high-speed cameras and depth sensors in recent years, more innovative investigations and applications are found in both academic studies and practical applications. The strengths of optical tracking methods are high accuracy, contactless monitoring, and rich activity information, while the weaknesses are the fixed location of use and the potential risk of privacy leakage.

2.
RF and acoustic techniques-The sensing techniques working under radar-like principles include IR-UWB, FMCW radar, WiFi, and ultrasound. The motion of human body parts or the whole body can be perceived using the Doppler frequency shift (DFS), and time-frequency signatures can be obtained for further analysis. Machine learning and deep learning techniques are widely used for the classification of human activities. Evidently, the strengths of this method are contactless measurement, NLOS capability, and no risk of privacy leakage. Specifically, commodity WiFi takes advantage of the existing communication infrastructure at no added cost. The weakness is the limited and implicit nature of the information provided by the sensors, which requires specialized processing.

3.
Inertial sensors-The inertial sensors, especially MEMS IMUs, have become a dominant technical approach for human activity recognition and analysis. They provide 3-axis acceleration, 3-axis angular velocity, and 3-axis magnetometer signals, which can be employed for the estimation of the attitude and motion of human body parts by mounting the devices on them. The strengths of IMUs in HAR are their miniature size, low cost, low power, and rich information output, which make them competitive for wearable applications that can be used without location constraints. Their weakness is contact measurement, which may be inconvenient for the recognition of people's daily activities.

4.
Force sensors and EMG sensors-Force sensors may include piezoelectric sensors, FSR, and thin film pressure sensors of different materials. They may provide the pressure or multi-axis forces of human gait, hand grasp, etc., for sports analysis, physical rehabilitation, or bio-feedback. EMG sensors work in a similar way as force sensors do and implement similar functions using the EMG signal. Since EMG reflects muscle activity, it may provide more useful information for medical care, such as rehabilitation and amputation. The strength of force sensors and EMG sensors is their capability to obtain useful information from local areas of body parts. Their weakness is also contact measurement, which may be inconvenient for the recognition of people's daily activities.

5.
Multiple sensor fusion-In addition to the above, there are also multiple sensor fusion based techniques that combine more than one of the above alternatives. For instance, inertial and magnetic sensors are combined for body segment motion observation [24]; optical sensors, IMU, and force sensors are combined for pose estimation in human bicycle riding [66]; a knee electro-goniometer, foot-switches, and EMG are combined for gait phase and event recognition [95]; and FMCW radar and wearable IMUs are combined for fall detection [96]. The purpose of the combination is to compensate for the limitations of one sensor technology by taking advantage of another so as to pursue maximal performance. For example, IMU and UWB radar are usually combined with real-time data fusion to overcome the continuous error accumulation of the IMU and the NLOS shadowing and random noise of UWB [23].
The different sensing techniques behave differently with respect to the evaluation indexes. In summary, regarding convenience of use, the optical tracking and radar techniques are non-contact, but they are normally installed at particular locations. The inertial sensors can be wearable and provide rich useful information for human activity analysis, but contact measurement is involved, which may introduce inconvenience into users' daily activities. As contact measurement methods, force sensors and EMG can reveal information from local areas of body parts, which is competitive in medical care.
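The Doppler frequency shift exploited by the radar-based techniques above can be made concrete with a small numerical sketch using the standard monostatic radar relation f_d = 2·v·f_c/c. The carrier frequency and walking speed below are illustrative assumptions.

```python
C = 3.0e8  # speed of light (m/s)

def doppler_shift(radial_velocity, carrier_freq):
    """Doppler frequency shift observed by a monostatic radar (Hz).

    A target moving toward the radar with radial velocity v shifts the
    reflected carrier by f_d = 2 * v * f_c / c (factor 2: two-way path).
    """
    return 2.0 * radial_velocity * carrier_freq / C

# A person walking toward an assumed 24 GHz radar at 1.5 m/s:
fd_torso = doppler_shift(1.5, 24e9)   # 240 Hz
# A faster-swinging limb at 3 m/s produces a proportionally larger shift:
fd_limb = doppler_shift(3.0, 24e9)    # 480 Hz
```

The different shifts of the torso and limbs are exactly the time-frequency signatures (micro-Doppler) that classifiers use to discriminate activities.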

Body Area Sensor Networks
For the wearable sensing techniques, it is important to carry out the data collection without interrupting people's normal activities. Therefore, power cords and data wires should be replaced with miniature batteries and wireless communication nodes. The wireless communication techniques commonly used include Bluetooth, WiFi, BLE, Zigbee, 4G/5G, NB-IoT, LoRa, etc. [97]. Since 4G/5G, NB-IoT, and LoRa are commonly used for long-range data transmission for different purposes, Bluetooth, WiFi, BLE, and Zigbee are more likely to be chosen for short-range wearable communications. A comparison of the wireless techniques is given in Table 3.
Since wearable sensing devices are powered with batteries, the energy consumption, data rate, and network topology are the key parameters to be considered when establishing the systems. Normally, Bluetooth is used for the data collection of single-sensor devices. For example, Bluetooth is employed to transmit IMU data for sports analysis [7], the force data of piezoelectric sensors for gait recognition [89], and IMU data for the assessment of elbow spasticity [98]. BLE and Zigbee are competitive in building a BAN for data collection in multi-node systems; Zigbee is used for multiple-IMU data collection in [66], while a BLE-based wireless BAN (WBAN) was used for joint angle estimation in [99].
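To illustrate why energy consumption drives the radio choice, the sketch below estimates ideal battery lifetimes from a simple capacity-over-current model. The average current figures are rough assumptions for illustration, not measurements from the cited works, and real lifetimes are further reduced by regulator losses, duty cycling overhead, and battery aging.

```python
def battery_life_hours(battery_mah, avg_current_ma):
    """Ideal battery lifetime in hours: capacity divided by average draw."""
    return battery_mah / avg_current_ma

# Assumed average currents (mA) for a low-duty-cycle sensor node;
# these are illustrative order-of-magnitude figures only.
radios = {"Bluetooth": 10.0, "BLE": 1.0, "Zigbee": 2.0}
coin_cell = 220.0  # mAh, roughly a CR2032-class coin cell

lifetimes = {name: battery_life_hours(coin_cell, i)
             for name, i in radios.items()}
```

Under these assumptions a BLE node would last roughly ten times longer than a classic Bluetooth node on the same cell, which is why BLE and Zigbee are the usual picks for multi-node BANs.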

Mathematical Methods
Once the sensor data are obtained, making effective use of them is also a challenging task. As presented in the common methodology given in Figure 2, the key steps include pre-processing, segmentation, feature extraction, and classification/model application. In this section, we focus on the mathematical methods for two key steps: feature extraction and classification.

Features and Feature Extraction
The feature extraction methods can be categorized into four types according to their outputs: heuristic, time-domain, time-frequency, or frequency-domain features [85,100], as shown in Table 4. Heuristic features are those derived and characterized by an intuitive understanding of how activities produce signal changes. Time-domain features are typically statistical measures obtained from a windowed signal that are not directly related to specific aspects of individual movements or postures. Time-frequency features, such as wavelet coefficients, are competitive for detecting the transitions between human activities [65]. Frequency-domain features are usually the preferred option for human activity recognition. Mathematical tools such as FFT, short-time Fourier transform (STFT), and discrete cosine transform (DCT) are the commonly used methods [60,71,80,95].

Feature Type     | Example Features
Frequency domain | Spectral distribution, coefficients sum, spectral entropy, etc.
Time-frequency   | Approximation coefficients, detail coefficients, transition times

Classification and Decision-Making
For most human activity recognition studies, the classification to discriminate the different types of activities using the extracted features is a critical step. Due to the complexity in human gesture, posture, and daily activities, how to discriminate them with certain accuracy using different mathematical models has become a research focus. The classification methods can be divided into three categories:

1.
Threshold-based methods-Thresholds can easily capture many simple gestures, postures, and motions. For example, fall detection, hand shaking, and static standing can be recognized by thresholding the acceleration magnitude a_TH = sqrt(a_x^2 + a_y^2 + a_z^2), and walking or running can be recognized using Doppler frequency shift thresholds.

2.
Machine learning techniques-Machine learning is a method of data analysis that automates analytical model building, which is suitable for the classification of human activities using sensor data. Its feasibility and efficiency have been demonstrated by many published studies with accuracies over 95%. For example, an HMM achieves an accuracy of 95.71% for gesture recognition [101]; SVM and KNN are employed for human gait classification between walking and running with an accuracy over 99% [102]; PCA and SVM recognize three different actions for sports training with an accuracy of 97% [9]; KNN achieves an accuracy of 95% for dynamic activity classification [65]. There are also peer investigations providing evaluations of different machine learning methods [9,88,103].

3.
Deep learning methods-Deep learning techniques take advantage of many layers of non-linear information processing for supervised or unsupervised feature extraction and transformation, as well as for pattern analysis and classification. They present advantages over traditional approaches and have gained continual research interest in many different fields, including HAR, in recent years. For example, a CNN achieves an accuracy of 97% for the recognition of nine different activities in [31], an accuracy of 95.87% for arm movement classification [64], an accuracy of 99.7% for sEMG-based gesture recognition in [93], and an accuracy of 95% for RFID-based in-car activity recognition in [81]. These investigations have demonstrated that deep learning is an effective classification approach for human activity recognition. There are also investigations applying long short-term memory (LSTM) [104] or bidirectional LSTM (Bi-LSTM) [76,96], as well as recurrent neural networks, to sports and motion recognition, which have exhibited competitive performance compared to CNNs.
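The threshold-based method of the first category can be sketched in a few lines (the 2 g threshold here is an illustrative value, not one drawn from the cited studies):

```python
import math

def is_fall(ax, ay, az, threshold=2.0):
    """Flag a possible fall when the acceleration magnitude (in g) exceeds a threshold."""
    # a_TH = sqrt(a_x^2 + a_y^2 + a_z^2), the acceleration magnitude from the text
    a_mag = math.sqrt(ax ** 2 + ay ** 2 + az ** 2)
    return a_mag > threshold

print(is_fall(0.0, 0.0, 1.0))   # quiet standing: magnitude ~1 g -> False
print(is_fall(1.5, 2.0, 1.0))   # sudden impact: magnitude ~2.7 g -> True
```

Such rules are cheap enough to run on the sensor node itself, which is one reason thresholding remains attractive despite its limited expressiveness compared to the learned classifiers below.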
Due to the superior performance of machine learning and deep learning techniques, many tool libraries, such as the MATLAB machine learning toolbox, pandas, and scikit-learn [105], are available for different development environments, which makes the implementation of the classification quite convenient. These machine learning techniques will continue to be the dominant tools for classification in various sensor-data-based human activity recognition systems.
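As a minimal illustration of such library support, the scikit-learn sketch below trains an SVM on synthetic per-window feature vectors for two hypothetical activity classes; the data, class means, and resulting accuracy are purely synthetic and are not results from the cited studies:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic feature vectors, e.g., [mean, std, dominant frequency] per window
walking = rng.normal(loc=[0.2, 0.5, 1.8], scale=0.1, size=(100, 3))
running = rng.normal(loc=[0.4, 1.2, 2.8], scale=0.1, size=(100, 3))
X = np.vstack([walking, running])
y = np.array([0] * 100 + [1] * 100)          # 0 = walking, 1 = running

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(clf.score(X_te, y_te))                  # well-separated classes -> near 1.0
```

Swapping `SVC` for `KNeighborsClassifier` or another estimator requires changing only one line, which is precisely the convenience the paragraph above refers to.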

Discussions
Driven by the related sensing, communication, and data processing techniques, HAR has undergone rapid progress and has been extended to widespread fields of application. This section summarizes the state-of-the-art, the underlying technical challenges, and the future trends.

Performance of the State-of-the-Art
Based on the above investigation, a critical issue is how to select appropriate sensing techniques and mathematical methods for a given HAR application. It is found that the technical solutions are highly dependent on the particular HAR tasks. For accurate indoor human activity recognition, optical depth sensors with PCA, HMM, SVM, KNN, and CNN are commonly seen to obtain an accuracy over 95%, and combinations of UWB and IMU sensors also find applications with an accuracy over 90%. For wearable outdoor applications in consumer electronics, sports analysis, and physical rehabilitation, a single IMU is a competitive choice, normally using a Kalman filter or a complementary filter for noise cancellation and SVM or CNN for decision-making; the accuracy for walking and running step counting and other motion recognition is normally over 90%. For medical care, physical rehabilitation, or bio-feedback, thin-film force sensors and EMG sensors are commonly used to obtain information from local areas of body parts, and deep learning is the most prevalent choice, which may yield an accuracy of about 90%. According to the analysis in Sections 4 and 5, each sensing technique and mathematical method has its own pros and cons and suitable application scenarios. The combination of two or more sensing techniques or mathematical methods may overcome the limitations of one technique by taking advantage of the others. Furthermore, the kinematics of the human body and human behaviors can be investigated for further analysis. Research outputs may provide a valuable reference for studies of bionic robotics, for customized human intent estimation, and for smart recommendations in smart environments.
The above technical challenges may draw attention in future investigations of this field, and related studies on unobtrusive sensing, optimized sensing networks and computing systems, minor gesture and posture recognition, and deeper understanding of human body kinematics and behavior are promising research topics that may add new features to HAR and create new opportunities for research and applications. Since HMI is a key attribute of the usability of information systems, HAR, as a promising solution for efficient and convenient interaction between humans and information systems, will play an important role in the future rich-sensing IoT world.

Conclusions
This article presented a comprehensive overview of the recent progress in the sensing techniques that have been introduced for human activity recognition and motion analysis. Remarkable technical progress in new sensing devices and methods, innovative mathematical models for feature extraction and classification, novel networking and computing paradigms, and convergence with different subject areas has been identified, extending to widespread application fields. To provide a comprehensive understanding of the fundamentals, the skeleton-based multi-segment models and kinematic modeling of human body parts were first presented. Then, the sensing techniques were summarized and classified into six categories: optical tracking, RF sensors, acoustic sensors, inertial sensors, force sensors, and EMG sensors, followed by in-depth discussions of their pros and cons under the proposed evaluation indexes. Beyond the sensing devices, the mathematical methods, including feature extraction and classification techniques, were discussed as well. According to the state-of-the-art HAR techniques, the key technical challenges were found to include the limitations of sensing techniques for convenient use, dependency on PCs for data processing, difficulties in minor gesture recognition, and specificity in human activity recognition and motion analysis. Solutions to these challenges are considered the development trend of future studies. Since human activity recognition and motion analysis is a promising way to achieve efficient interaction between humans and information systems, it may play an increasingly important role in future IoT-enabled intelligent information systems.