Article

Human Activity Recognition with an HMM-Based Generative Model

by Narges Manouchehri 1,2,* and Nizar Bouguila 2

1 Algorithmic Dynamics Lab, Unit of Computational Medicine, Karolinska Institute, 171 77 Stockholm, Sweden
2 Concordia Institute for Information Systems Engineering, Concordia University, Montreal, QC H3G 1T7, Canada
* Author to whom correspondence should be addressed.
Sensors 2023, 23(3), 1390; https://doi.org/10.3390/s23031390
Submission received: 24 October 2022 / Revised: 11 January 2023 / Accepted: 20 January 2023 / Published: 26 January 2023
(This article belongs to the Special Issue Intelligent Systems for Clinical Care and Remote Patient Monitoring)

Abstract

Human activity recognition (HAR) has become an active research topic in healthcare. This application is important in various domains, such as health monitoring, support for the elderly, and disease diagnosis. Considering the increasing adoption of smart devices, large amounts of data are generated in our daily lives. In this work, we propose unsupervised, scaled, Dirichlet-based hidden Markov models to analyze human activities. Our motivation is that human activities have sequential patterns and hidden Markov models (HMMs) are among the strongest statistical models for data with a continuous flow. In this paper, we assume that the emission probabilities in the HMM follow a scaled Dirichlet distribution, which is a proper choice for modeling proportional data. To learn our model, we applied the variational inference approach. We used publicly available datasets to evaluate the performance of our proposed model.

1. Introduction

Human activity recognition (HAR) has received significant attention in recent years. It helps scientists to understand behaviors, anticipate intentions, predict probable actions, monitor health, and assist in rehabilitation [1,2,3,4,5,6,7]. To collect data for discovering the patterns of human activities, various devices (e.g., advanced electronic devices and smart technologies) have been developed, resulting in large amounts of data. HAR data have conventionally been collected by cameras and sensors [8,9,10,11]. As vision-based data are easier to collect than sensor-based data, they were used first. However, they raise critical challenges, for instance, privacy issues. In contrast, inexpensive sensors do not suffer from such drawbacks and have become a more practical means of collecting information. They come in various types, such as wearable, ambient, and object-tagged sensors [12,13,14,15,16,17,18]. After collecting data, we need to choose an adequate model to capture the hidden patterns of human activities. HAR poses several challenges, such as difficulties in associating activities with various users, varying patterns and styles for a single activity, similarities between activities, unpredictable and accidental events, noise, and difficulties in labeling. To overcome these challenges, several machine learning methods have been introduced [19,20,21,22,23,24,25]. Some approaches, such as deep learning, provide better results [26,27,28,29,30,31,32,33]. However, they are data-hungry: they need considerable amounts of labeled data, and data annotation is time-consuming and expensive. Another problem is that deep learning models are not fully explainable in human terms [34,35,36,37,38,39,40,41,42,43,44,45,46,47]. Considering the discussed challenges, we focused on generative models. Hidden Markov models are powerful when dealing with temporal data: when data points arrive continuously and the goal is to infer hidden states from observable sequential data, the HMM is a capable model. Moreover, HMMs are less computationally expensive than deep learning and can easily be adapted to real-time scenarios. The HMM has been used in various medical applications [48,49,50,51,52,53,54,55,56,57,58,59,60,61,62] and has also been applied to human activity recognition [63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81]. In HMMs, it is commonly assumed that the emission probabilities arise from Gaussian mixture models (GMMs) [82,83,84,85,86,87]. However, the assumption of Gaussianity is not valid in all cases. As a consequence, other alternative distributions have been applied [88]. In this work, we assume that the emission probability distributions follow scaled Dirichlet mixture models, which improves the structure of the HMM. Indeed, the scaled Dirichlet distribution is a proper choice for modeling proportional data [89]. We call our proposed model the scaled Dirichlet HMM (SD-HMM). We apply variational inference (VI) as an elegant learning method for estimating the model parameters.
VI is an approximation technique that is more accurate than maximum likelihood (ML) estimation and faster than fully Bayesian inference [90,91,92,93,94,95,96,97,98]. Compared to a deterministic method such as ML, it is less prone to convergence to a local maximum and to over-fitting, while not being as computationally complex as fully Bayesian inference. Finally, we evaluate our proposed model, SD-HMM, on publicly available human activity recognition datasets, comparing it with the hidden Markov model with GMM emission probabilities as a widely used alternative.
In summary, we propose an improved version of the hidden Markov model in which the emission probabilities arise from scaled Dirichlet mixture models. We then apply an elegant learning method, variational inference, to estimate the parameters, and we evaluate the performance of our proposed model against similar alternatives in human activity recognition.
The paper is organized as follows: In Section 2, we discuss the HMM. In Section 3, we estimate the model parameters with variational inference. In Section 4, we describe our experiments in human activity recognition, and in Section 5, we present and discuss the results. We conclude in Section 6.

2. Hidden Markov Model

In this section, we first explain the Markov chain. Let us consider a sequence of states or events. In a first-order Markov model, it is assumed that the future state depends only on the current state; thus, an event at a particular point in time t depends only on the event at time t − 1. An HMM is expressed by the following parameters (a minimal sketch of the probability constraints follows this list):
  • Transition probability: the probability of moving from the state at time t to the same or another state at time step t + 1. The transition probabilities out of each state sum to 1.
  • Emission probability: the probability of an observation being generated from a particular state.
  • Initial probability π: the distribution over states at time step 0. The initial probabilities sum to 1.
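As a concrete illustration of the two sum-to-one constraints above, the following minimal sketch uses hypothetical values for a three-state model; it only restates the definitions in code.

```python
import numpy as np

K = 3  # hypothetical number of hidden states

# Initial probabilities: one entry per state, summing to 1.
pi = np.array([0.6, 0.3, 0.1])

# Transition matrix: row j holds the probabilities of moving from state j
# at time t to each state at time t + 1, so every row sums to 1.
B = np.array([[0.80, 0.15, 0.05],
              [0.20, 0.70, 0.10],
              [0.10, 0.30, 0.60]])

assert np.isclose(pi.sum(), 1.0)
assert np.allclose(B.sum(axis=1), 1.0)
```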
We express an HMM with parameters \theta = \{B, C, \varphi, \pi\} using the following notation:
  • A sequence of observations X = \{X_1, \dots, X_T\} generated by hidden states H = \{h_1, \dots, h_T\}, h_j \in [1, K], where K indicates the number of states.
  • Transition matrix: B = \{b_{jj'} = P(h_t = j' \mid h_{t-1} = j)\}.
  • Emission matrix: C = \{C_{ij} = P(m_t = j \mid h_t = i)\} for j \in [1, M], where M indicates the number of mixture components associated with each state.
  • \pi_j: the initial probability of starting the sequence from state j.
\varphi contains the two parameters of the SD mixture model. To modify the conventional HMM, we first explain the SD distribution. Let us consider a D-dimensional observation X = (x_1, \dots, x_D) drawn from the SD distribution with two parameters, \alpha and \beta:

\mathrm{SD}(X \mid \varphi) = \frac{\Gamma(\alpha_+)}{\prod_{d=1}^{D} \Gamma(\alpha_d)} \, \frac{\prod_{d=1}^{D} \beta_d^{\alpha_d} x_d^{\alpha_d - 1}}{\left( \sum_{d=1}^{D} \beta_d x_d \right)^{\alpha_+}}

Here, \Gamma is the Gamma function, \alpha_+ = \sum_{d=1}^{D} \alpha_d, and \varphi = (\alpha, \beta), where \alpha = (\alpha_1, \dots, \alpha_D) and \beta = (\beta_1, \dots, \beta_D) are the shape and scale parameters, respectively. These two parameters provide more flexibility in modeling various shapes of data. Moreover, we assume that X is a proportional vector. To express the modification of HMMs, we represent the estimates of states and mixture components by:
\gamma^{t}_{h_t, m_t} \equiv p(h_t, m_t \mid x_0, \dots, x_T)
and the local states sequence given the whole observation set by:
\xi^{t}_{h_t, h_{t+1}} \equiv p(h_t, h_{t+1} \mid x_0, \dots, x_T)
\gamma^{t}_{h_t, m_t} and \xi^{t}_{h_t, h_{t+1}} for all t \in [1, T] are responsibilities and are computed by the same forward–backward procedure used for HMMs with Gaussian mixture emissions.
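To make the SD density above concrete, the sketch below evaluates its log-density; the function name and the example values of x, alpha, and beta are hypothetical, and the code simply transcribes the formula.

```python
import numpy as np
from scipy.special import gammaln

def scaled_dirichlet_logpdf(x, alpha, beta):
    """Log-density of the scaled Dirichlet distribution SD(x | alpha, beta).

    x     : proportional vector (positive entries summing to one)
    alpha : shape parameters (positive)
    beta  : scale parameters (positive)
    """
    alpha_plus = alpha.sum()
    log_norm = gammaln(alpha_plus) - gammaln(alpha).sum()  # Gamma-function terms
    log_kernel = (alpha * np.log(beta) + (alpha - 1.0) * np.log(x)).sum()
    return log_norm + log_kernel - alpha_plus * np.log((beta * x).sum())

x = np.array([0.2, 0.5, 0.3])       # hypothetical proportional observation
alpha = np.array([2.0, 3.0, 1.5])   # hypothetical shape parameters
beta = np.array([1.0, 0.8, 1.2])    # hypothetical scale parameters
print(scaled_dirichlet_logpdf(x, alpha, beta))
```

A useful sanity check: with all scale parameters equal to one, the expression reduces to the ordinary Dirichlet log-density.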

3. Parameter Estimation with Variational Inference

In this section, we discuss a powerful learning approach, variational inference, which inherits the advantages of both deterministic and Bayesian inference. This technique yields more precise results than deterministic methods while being faster than fully Bayesian inference. The idea is to minimize the distance between the true posterior and an approximating distribution using the Kullback–Leibler (KL) divergence [90]. Moreover, with this approximation scheme, we can simultaneously estimate the model parameters and the optimal number of components. Here, we explain the steps of variational learning for updating all parameters of the HMM: the transition distribution, the mixing matrix, and the shape and scale parameters of the emission distributions. As a first step, we need to define a prior distribution for each parameter. Given the model parameters, the likelihood of the sequential observations X is defined as follows, where S and L are the sets of hidden states and mixture components, respectively:
p(X \mid B, C, \pi, \varphi) = \sum_{S} \sum_{L} \pi_{s_1} \left[ \prod_{t=2}^{T} b_{s_{t-1}, s_t} \right] \left[ \prod_{t=1}^{T} c_{s_t, m_t} \, p(x_t \mid \varphi_{s_t, m_t}) \right]

p(X) = \int d\pi \, dB \, dC \, d\varphi \sum_{S, L} p(B, C, \pi, \varphi) \, p(X, S, L \mid B, C, \pi, \varphi)
Considering that this quantity is not computable, we introduce a lower bound by using an approximating distribution q(B, C, \pi, \varphi, S, L) of the true posterior p(B, C, \pi, \varphi, S, L \mid X). So,
\ln p(X) = \ln \left\{ \int dB \, dC \, d\pi \, d\varphi \sum_{S, L} p(B, C, \pi, \varphi) \, p(X, S, L \mid B, C, \pi, \varphi) \right\} \geq \int dB \, dC \, d\pi \, d\varphi \sum_{S, L} q(B, C, \pi, \varphi, S, L) \ln \left\{ \frac{p(B, C, \pi, \varphi) \, p(X, S, L \mid B, C, \pi, \varphi)}{q(B, C, \pi, \varphi, S, L)} \right\}
This follows from Jensen’s inequality. Since \mathrm{KL}(q \| p) \geq 0, with equality if and only if q equals the true posterior, \mathcal{L}(q) is a lower bound on \ln p(X):

\ln p(X) = \mathcal{L}(q) + \mathrm{KL}\big( q(B, C, \pi, \varphi, S, L) \,\|\, p(B, C, \pi, \varphi, S, L \mid X) \big)
Now, with the help of mean-field theory, we define a restricted family of factorized distributions to approximate the true posterior, which is not tractable:
q(B, C, \pi, \alpha, \beta, S, L) = q(B) \, q(C) \, q(\pi) \, q(\alpha) \, q(\beta) \, q(S, L)
We define the lower bound as follows:
\ln p(X) \geq \sum_{S, L} \int dB \, dC \, d\pi \, d\alpha \, d\beta \; q(\pi) q(B) q(C) q(\alpha) q(\beta) q(S, L) \Big\{ \ln p(\pi) + \ln p(B) + \ln p(C) + \ln p(\alpha) + \ln p(\beta) + \ln \pi_{s_1} + \sum_{t=2}^{T} \ln b_{s_{t-1}, s_t} + \sum_{t=1}^{T} \ln c_{s_t, m_t} + \sum_{t=1}^{T} \ln p(x_t \mid \varphi_{s_t, m_t}) - \ln q(S, L) - \ln q(\pi) - \ln q(B) - \ln q(C) - \ln q(\alpha) - \ln q(\beta) \Big\} = F(q(\pi)) + F(q(C)) + F(q(B)) + F(q(\alpha)) + F(q(\beta)) + F(q(S, L))
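As a quick numerical illustration of the bound itself (not of the SD-HMM), the toy check below compares \ln p(X) with the right-hand side for a single observation, a two-valued latent variable, and an arbitrary q; all numbers are hypothetical.

```python
import numpy as np

p_z = np.array([0.4, 0.6])            # prior over a latent variable z
p_x_given_z = np.array([0.9, 0.2])    # likelihood of one observation given z
log_evidence = np.log((p_z * p_x_given_z).sum())

q = np.array([0.7, 0.3])              # arbitrary variational distribution over z
elbo = (q * (np.log(p_z * p_x_given_z) - np.log(q))).sum()

print(log_evidence, elbo)             # the lower bound never exceeds ln p(X)
assert elbo <= log_evidence
```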
To define the priors over the HMM parameters, we choose the Dirichlet distribution for B, C, and \pi, considering that all of their coefficients are strictly positive, less than one, and sum to one:

p(\pi) = \mathcal{D}(\pi \mid \phi^{\pi}) = \mathcal{D}(\pi_1, \dots, \pi_K \mid \phi^{\pi}_1, \dots, \phi^{\pi}_K)

p(B) = \prod_{i=1}^{K} \mathcal{D}(b_{i1}, \dots, b_{iK} \mid \phi^{B}_{i1}, \dots, \phi^{B}_{iK})

p(C) = \prod_{i=1}^{K} \mathcal{D}(c_{i1}, \dots, c_{iM} \mid \phi^{C}_{i1}, \dots, \phi^{C}_{iM})
For α and β , we choose Gamma and Dirichlet distributions, respectively:
p(\alpha_{jd}) = \mathcal{G}(\alpha_{jd} \mid u_{jd}, v_{jd}) = \frac{v_{jd}^{u_{jd}}}{\Gamma(u_{jd})} \, \alpha_{jd}^{u_{jd} - 1} e^{-v_{jd} \alpha_{jd}}

p(\beta_j) = \mathcal{D}(\beta_j \mid h_j) = \frac{\Gamma\left( \sum_{d=1}^{D} h_{jd} \right)}{\prod_{d=1}^{D} \Gamma(h_{jd})} \prod_{d=1}^{D} \beta_{jd}^{h_{jd} - 1}

Here, h_j = (h_{j1}, \dots, h_{jD}); \mathcal{G}(\cdot) and \mathcal{D}(\cdot) denote the Gamma and Dirichlet distributions, respectively; and u_{jd}, v_{jd}, and h_{jd} are positive hyperparameters.
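The sketch below draws one sample from each of these priors to show their supports; the dimensions and all hyperparameter values are hypothetical (symmetric priors with unit hyperparameters).

```python
import numpy as np

rng = np.random.default_rng(0)
K, M, D = 3, 2, 4  # hypothetical numbers of states, components, and dimensions

# Dirichlet priors over pi and over every row of B and C.
pi = rng.dirichlet(np.ones(K))
B = np.stack([rng.dirichlet(np.ones(K)) for _ in range(K)])
C = np.stack([rng.dirichlet(np.ones(M)) for _ in range(K)])

# Gamma prior G(alpha_jd | u_jd, v_jd) uses v as a rate, so numpy's scale = 1/v.
u, v = 1.0, 1.0
alpha = rng.gamma(shape=u, scale=1.0 / v, size=(M, D))

# Dirichlet prior on each scale vector beta_j (so beta_j lies on the simplex).
beta = np.stack([rng.dirichlet(np.ones(D)) for _ in range(M)])
```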
Now, we explain the optimization of q ( B ) , q ( C ) , and q ( π ) as follows:
F(q(B)) = \int dB \; q(B) \ln \left[ \frac{\prod_{i=1}^{K} \prod_{j=1}^{K} b_{ij}^{\, w^{B}_{ij} - 1}}{q(B)} \right]

w^{B}_{ij} = \sum_{t=2}^{T} \gamma^{B,t}_{ij} + \phi^{B}_{ij}, \qquad \gamma^{B,t}_{ij} \equiv q(s_{t-1} = i, s_t = j)

q(B) = \prod_{i=1}^{K} \mathcal{D}(b_{i1}, \dots, b_{iK} \mid w^{B}_{i1}, \dots, w^{B}_{iK})

q(\pi) = \mathcal{D}(\pi_1, \dots, \pi_K \mid w^{\pi}_1, \dots, w^{\pi}_K), \qquad w^{\pi}_i = \gamma^{\pi}_i + \phi^{\pi}_i, \qquad \gamma^{\pi}_i \equiv q(s_1 = i)

q(C) = \prod_{i=1}^{K} \mathcal{D}(c_{i1}, \dots, c_{iM} \mid w^{C}_{i1}, \dots, w^{C}_{iM}), \qquad w^{C}_{ij} = \sum_{t=1}^{T} \gamma^{C,t}_{ij} + \phi^{C}_{ij}, \qquad \gamma^{C,t}_{ij} \equiv q(s_t = i, m_t = j)
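These updates share one pattern: each posterior Dirichlet weight equals the prior hyperparameter plus the summed responsibilities. A minimal sketch of the w^B update, with random stand-in responsibilities:

```python
import numpy as np

rng = np.random.default_rng(1)
K, T = 3, 50  # hypothetical numbers of states and time steps

# Joint transition responsibilities gamma^B_{ij,t} = q(s_{t-1}=i, s_t=j),
# normalized over (i, j) at each time step (random stand-ins here).
g = rng.random((K, K, T - 1))
gamma_B = g / g.sum(axis=(0, 1), keepdims=True)

phi_B = np.ones((K, K))              # prior hyperparameters
w_B = phi_B + gamma_B.sum(axis=2)    # w^B_ij = phi^B_ij + sum_t gamma^B_{ij,t}
```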
For q ( α ) and q ( β ) :
q(\alpha) = \prod_{j=1}^{M} \prod_{d=1}^{D} \mathcal{G}(\alpha_{jd} \mid u^{*}_{jd}, v^{*}_{jd})

q(\beta) = \prod_{j=1}^{M} \mathcal{D}(\beta_j \mid h^{*}_{j1}, \dots, h^{*}_{jD})

u^{*}_{jd} = u_{jd} + \eta_{jd}, \qquad v^{*}_{jd} = v_{jd} - \vartheta_{jd}

\eta_{jd} = \sum_{i=1}^{N} Z_{pij} \, \bar{\alpha}_{jd} \left[ \psi\!\left( \sum_{d=1}^{D} \bar{\alpha}_{jd} \right) - \psi(\bar{\alpha}_{jd}) + \sum_{s \neq d}^{D} \psi'\!\left( \sum_{d=1}^{D} \bar{\alpha}_{jd} \right) \bar{\alpha}_{js} \left( \langle \ln \alpha_{js} \rangle - \ln \bar{\alpha}_{js} \right) \right]

\vartheta_{jd} = \sum_{i=1}^{N} Z_{pij} \left[ \ln \bar{\beta}_{jd} + \ln X_{id} - \ln \sum_{d=1}^{D} \bar{\beta}_{jd} X_{id} \right]

h^{*}_{jd} = h_{jd} + \tau_{jd}, \qquad \tau_{jd} = \sum_{i=1}^{N} Z_{pij} \left[ \bar{\alpha}_{jd} - \frac{\left( \sum_{d=1}^{D} \bar{\alpha}_{jd} \right) \bar{\beta}_{jd} X_{id}}{\sum_{d=1}^{D} \bar{\beta}_{jd} X_{id}} \right]
If observation X_{pt} is assigned to state i and mixture component j, then Z_{pij} = 1; otherwise, it is zero. To compute the responsibilities, we apply a typical forward–backward procedure [99], where Z_{pij} = \sum_{t=1}^{T} \gamma^{C,t}_{pij} = p(s = i, m = j \mid X).
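A minimal sketch of the scaled forward–backward pass that produces the state responsibilities, assuming the per-state emission log-densities have already been evaluated (e.g., with the SD log-density above); all variable names and the demo inputs are ours.

```python
import numpy as np

def forward_backward(pi, B, log_emis):
    """Scaled forward-backward pass.

    log_emis[t, i] : log-density of observation t under state i.
    Returns gamma[t, i] = q(s_t = i).
    """
    T, K = log_emis.shape
    # Per-step rescaling of the emissions for numerical stability; the
    # constant cancels when the responsibilities are normalized.
    emis = np.exp(log_emis - log_emis.max(axis=1, keepdims=True))

    alpha_f = np.zeros((T, K))   # scaled forward messages
    beta_b = np.ones((T, K))     # scaled backward messages
    scale = np.zeros(T)

    alpha_f[0] = pi * emis[0]
    scale[0] = alpha_f[0].sum()
    alpha_f[0] /= scale[0]
    for t in range(1, T):
        alpha_f[t] = (alpha_f[t - 1] @ B) * emis[t]
        scale[t] = alpha_f[t].sum()
        alpha_f[t] /= scale[t]
    for t in range(T - 2, -1, -1):
        beta_b[t] = B @ (emis[t + 1] * beta_b[t + 1]) / scale[t + 1]

    gamma = alpha_f * beta_b
    return gamma / gamma.sum(axis=1, keepdims=True)

rng = np.random.default_rng(2)
gamma = forward_backward(np.full(3, 1 / 3), np.full((3, 3), 1 / 3),
                         rng.normal(size=(20, 3)))
```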
In the expectation step, we keep the parameters estimated in the previous step fixed, and the updated values are computed as follows:
\tilde{\pi}_i = \exp \left[ \langle \ln \pi_i \rangle_{q(\pi)} \right] = \exp \left[ \Psi(w^{\pi}_i) - \Psi\!\left( \sum_{i} w^{\pi}_i \right) \right]

\tilde{b}_{jj'} = \exp \left[ \langle \ln b_{jj'} \rangle_{q(B)} \right] = \exp \left[ \Psi(w^{B}_{jj'}) - \Psi\!\left( \sum_{j'} w^{B}_{jj'} \right) \right]

\tilde{c}_{ij} = \exp \left[ \langle \ln c_{ij} \rangle_{q(C)} \right] = \exp \left[ \Psi(w^{C}_{ij}) - \Psi\!\left( \sum_{j} w^{C}_{ij} \right) \right]

\Psi and \langle \cdot \rangle denote the digamma function and the expectation, respectively. Moreover, we update the shape and scale parameters as follows [100]:

\bar{\alpha}_{ijd} = \frac{u_{ijd}}{v_{ijd}}, \qquad \langle \ln \alpha_{ijd} \rangle = \Psi(u_{ijd}) - \ln v_{ijd}, \qquad \bar{\beta}_{ijd} = \frac{h_{ijd}}{\sum_{d=1}^{D} h_{ijd}}
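A minimal sketch of these update rules using scipy's digamma; the numeric weights are hypothetical. Note that the resulting quasi-parameters (e.g., the rows of the updated transition matrix) no longer sum exactly to one.

```python
import numpy as np
from scipy.special import digamma

def expected_log_probs(w):
    """exp(E_q[ln theta]) for Dirichlet weights w, applied row-wise:
    exp(psi(w_ij) - psi(sum_j w_ij))."""
    return np.exp(digamma(w) - digamma(w.sum(axis=-1, keepdims=True)))

w_B = np.array([[4.0, 2.0, 1.0],
                [1.5, 6.0, 2.5]])        # hypothetical posterior weights
B_tilde = expected_log_probs(w_B)        # quasi transition probabilities

u_star, v_star = 3.0, 1.5                # hypothetical Gamma posterior hyperparameters
alpha_bar = u_star / v_star              # posterior mean of a shape parameter
exp_log_alpha = digamma(u_star) - np.log(v_star)
```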
Here, we summarize the whole procedure of the SD-based HMM in Algorithm 1:

Algorithm 1: Variational learning of the SD-HMM model.
1. Initialize the shape and scale parameters of the SD distribution.
2. Define the initial responsibilities.
3. Compute w^B, w^C, and w^π.
4. Initialize B, C, and π.
5. While |old likelihood − new likelihood| ≥ ε, for a small convergence threshold ε:
6.   Compute the data likelihood.
7.   Compute the responsibilities with the forward–backward procedure.
8.   Update the hyperparameters of the shape and scale parameters.
9.   Update w^B, w^C, and w^π using the responsibilities γ^B, γ^C, and γ^π.
10.  Update B, C, and π using w^B, w^C, and w^π.
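The control flow of Algorithm 1 can be sketched as follows. The update routines are passed in as callables and are hypothetical placeholders for the formulas of Section 3 (they are not implemented here); only the loop structure and the stopping rule are meant to mirror the algorithm.

```python
import numpy as np

def fit_sd_hmm(X, K, M, initialize, lower_bound, responsibilities,
               update_sd, update_weights, update_expectations,
               max_iter=200, eps=1e-6):
    """Structural sketch of Algorithm 1; the callables implement Section 3."""
    params = initialize(X, K, M)                   # steps 1-4
    old_ll = -np.inf
    for _ in range(max_iter):
        ll = lower_bound(X, params)                # step 6
        gammas = responsibilities(X, params)       # step 7 (forward-backward)
        params = update_sd(params, gammas, X)      # step 8
        params = update_weights(params, gammas)    # step 9
        params = update_expectations(params)       # step 10
        if abs(ll - old_ll) < eps:                 # step 5: convergence test
            break
        old_ll = ll
    return params
```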

4. Experimental Results

In this part, we evaluate our model on publicly available datasets. Here, we explain the procedure and the data. We propose a novel clustering algorithm and compare our model with other methods. As our approach is unsupervised, we removed all labels before applying our model to the datasets, and there was no training/testing split. After predicting labels with our proposed clustering algorithm, we compared the actual and predicted labels. To assess the performance, we applied the four following metrics, where TP, TN, FP, and FN represent the total numbers of true positives, true negatives, false positives, and false negatives, respectively:
\mathrm{Accuracy} = \frac{TP + TN}{\text{Total number of observations}}, \qquad \mathrm{Precision} = \frac{TP}{TP + FP}, \qquad \mathrm{Recall} = \frac{TP}{TP + FN}, \qquad \mathrm{F1\text{-}score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
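A small sketch of these four metrics computed from confusion counts (the counts below are hypothetical):

```python
def clustering_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall, and F1-score from confusion counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

print(clustering_metrics(tp=80, tn=90, fp=10, fn=20))  # hypothetical counts
```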
Normalization of features: Considering the various ranges of the features, we applied min–max scaling with the following formula, which shifts and rescales the values to keep them between zero and one:

X' = \frac{X - X_{\min}}{X_{\max} - X_{\min}}
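Applied column-wise, the formula reads as follows (the small feature matrix is hypothetical):

```python
import numpy as np

def min_max_scale(X):
    """Rescale each feature (column) to the [0, 1] range."""
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    return (X - x_min) / (x_max - x_min)

X = np.array([[1.0, 200.0],
              [3.0, 400.0],
              [2.0, 300.0]])
print(min_max_scale(X))
```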
We tested our model on publicly available, real datasets: the Opportunity [101,102] and UCI HAR [103] datasets.

4.1. Opportunity Dataset

In this dataset, data were collected by external and wearable sensors. Volunteers attached wearable sensors to their clothes; the remaining sensors were attached to objects or installed at points of interest. Through this setting, various activities at different levels were recognized. The sensors used were: (1) body-worn sensors, including 7 inertial measurement units (IMUs), 12 3D acceleration sensors, and 4 3D coordinates from a localization system; (2) object sensors: 12 objects instrumented with wireless sensors measuring 3D acceleration and 2D rate of turn; and (3) ambient sensors, including 13 switches and 8 3D acceleration sensors in kitchen appliances and furniture. Data were collected from 4 individuals, with 6 runs per user: 5 runs of activities of daily living and 1 “drill” run, a scripted sequence of activities. We analyzed four daily activities of the individuals: standing, walking, lying, and sitting. We tested our model on two runs of individual activities.
  • Oversampling: Both runs have 108 features and, as illustrated in Table 1, there are considerable inequalities in the distribution of instances per cluster. In the first run of the test, the percentages of the four activities are 59.7%, 17.4%, 19.9%, and 3% for standing, walking, lying, and sitting, respectively. These shares for the second run are 41%, 23.8%, 5.1%, and 30%. As this imbalance results in frequency bias, the model may be dominated by the majority class and learn mostly from the clusters containing more observations. We tackled this challenge by oversampling with the synthetic minority over-sampling technique (SMOTE), in which new data points are generated by interpolating between observations in the original dataset, yielding a balanced dataset. After this step, the four clusters contained equal numbers of observations, with 22,380 and 10,379 in the first and second runs, respectively.
  • Missing values: Both runs contain several missing values; the counts for the first run are shown in Table 2. This is a typical issue, especially when working with real datasets. We replaced missing values with the median of each feature to minimize the effect of outliers (a preprocessing sketch follows this list). As mentioned, each run has 108 features.
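A sketch of the two preprocessing steps for one run, assuming imbalanced-learn's SMOTE implementation; the synthetic X and y below only stand in for a run's feature matrix and activity labels.

```python
import numpy as np
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(3)
X = rng.random((500, 108))                              # stand-in feature matrix
X[rng.random(X.shape) < 0.01] = np.nan                  # inject some missing values
y = rng.choice(4, size=500, p=[0.6, 0.17, 0.2, 0.03])   # imbalanced activity labels

# Median imputation per feature, to limit the influence of outliers.
medians = np.nanmedian(X, axis=0)
rows, cols = np.where(np.isnan(X))
X[rows, cols] = medians[cols]

# SMOTE: interpolate between minority-class neighbours until classes balance.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
print(np.bincount(y_bal))                               # equal counts per activity
```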
After tackling the above-mentioned challenges, we tested our algorithm on these two runs. In what follows, SD-HMM, D-HMM, and GMM-HMM stand for the scaled Dirichlet-based HMM, the Dirichlet-based HMM, and the Gaussian-based HMM, respectively.

4.2. UCI Dataset

To collect these data, the activities of 30 individuals aged between 19 and 48 years were analyzed. Each volunteer performed activities while wearing a smartphone on the waist. With the help of the embedded gyroscope and accelerometer, 3-axial angular velocity and 3-axial linear acceleration were captured. To label the data, a camera recording was used for manual annotation. As in the previous experiments, we analyzed the standing, walking, lying, and sitting of individuals. This dataset includes 561 features.

5. Results and Discussion

In the experimental part, we validated the performance of our proposed algorithm on three real-world datasets and compared it with D-HMM, the model most similar to ours, as well as with GMM-HMM, the most widely used alternative. For the Opportunity dataset, we focused on two subsets, and the results are presented in Table 3 and Table 4. As shown in Table 3, SD-HMM outperforms the alternatives, reaching 89.33%, 86.54%, 85.51%, and 86.02% in accuracy, precision, recall, and F1-score, respectively. In Table 4, we present the outcomes of our test on the second subset. Similar to the previous case, SD-HMM performs best, with 87.12%, 87.28%, 85.44%, and 86.35% in accuracy, precision, recall, and F1-score, respectively. In the next part of our experiment, dedicated to the UCI dataset, Table 5 indicates that SD-HMM behaves similarly to the two previous cases and provides better results, with 86.17%, 85.05%, 86.83%, and 85.93% in accuracy, precision, recall, and F1-score, respectively. The scaled Dirichlet distribution has one more parameter than the Dirichlet distribution, and this characteristic gives SD-HMM more flexibility. The main finding of comparing our novel model against the conventional ones is that our proposed model can be considered a viable alternative.

6. Conclusions

In this work, we proposed scaled Dirichlet-based hidden Markov models. Our principal motivation was the varied nature of data and the fact that the assumption of Gaussianity cannot be generalized to all cases and scenarios. In recent years, other alternatives have been evaluated, and they may provide better flexibility in fitting data. The scaled Dirichlet distribution has two parameters that modify the shape and scale of the distribution, and this characteristic assists us in modeling asymmetric and variously skewed forms. Here, we assumed that the scaled Dirichlet distribution is the source of the emission probability distributions. With such modifications and improvements to the structure of GMM-HMM, we may achieve some robustness. After constructing our model, we applied variational inference, which provides results with reasonable computational time and accuracy. To validate our model, we tested our proposed methodology on three human activity recognition datasets. As this application is relevant to numerous medical domains, we intended to demonstrate the robustness of our novel method in such a demanding setting. The datasets used in this test were collected by wearable, object, and ambient sensors. The results of our evaluation indicate that our proposed method outperforms D-HMM and GMM-HMM. In future work, we will study the activities of several individuals and test other alternatives as emission probabilities. Moreover, we believe our work is more robust to outliers, and we will introduce feature selection in the future.

Author Contributions

Conceptualization, N.M.; methodology, N.M.; software, N.M.; validation, N.M.; formal analysis, N.M.; investigation, N.M.; writing—original draft preparation, N.M.; visualization, N.M.; supervision and leadership, N.B.; writing—reviewing and editing, N.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Acknowledgments

The completion of this research was made possible thanks to the Natural Sciences and Engineering Research Council of Canada (NSERC).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Ramanujam, E.; Perumal, T.; Padmavathi, S. Human activity recognition with smartphone and wearable sensors using deep learning techniques: A review. IEEE Sens. J. 2021, 21, 13029–13040.
2. Yadav, S.K.; Tiwari, K.; Pandey, H.M.; Akbar, S.A. A review of multimodal human activity recognition with special emphasis on classification, applications, challenges and future directions. Knowl.-Based Syst. 2021, 223, 106970.
3. Bouchabou, D.; Nguyen, S.M.; Lohr, C.; LeDuc, B.; Kanellos, I. A survey of human activity recognition in smart homes based on IoT sensors algorithms: Taxonomies, challenges, and opportunities with deep learning. Sensors 2021, 21, 6037.
4. Ferrari, A.; Micucci, D.; Mobilio, M.; Napoletano, P. Trends in human activity recognition using smartphones. J. Reliab. Intell. Environ. 2021, 7, 189–213.
5. Zhang, S.; Li, Y.; Zhang, S.; Shahabi, F.; Xia, S.; Deng, Y.; Alshurafa, N. Deep learning in human activity recognition with wearable sensors: A review on advances. Sensors 2022, 22, 1476.
6. Dahou, A.; Al-qaness, M.A.; Abd Elaziz, M.; Helmi, A. Human activity recognition in IOHT applications using arithmetic optimization algorithm and deep learning. Measurement 2022, 199, 111445.
7. Saha, S.; Bhattacharya, R. Human Activity Recognition Systems Based on Sensor Data Using Machine Learning. In Internet of Things Based Smart Healthcare; Springer: Singapore, 2022; pp. 121–150.
8. Dang, L.M.; Min, K.; Wang, H.; Piran, M.J.; Lee, C.H.; Moon, H. Sensor-based and vision-based human activity recognition: A comprehensive survey. Pattern Recognit. 2020, 108, 107561.
9. Zhang, H.B.; Zhang, Y.X.; Zhong, B.; Lei, Q.; Yang, L.; Du, J.X.; Chen, D.S. A comprehensive survey of vision-based human action recognition methods. Sensors 2019, 19, 1005.
10. Ahad, M.A.R. Vision and Sensor-Based Human Activity Recognition: Challenges Ahead. In Advancements in Instrumentation and Control in Applied System Applications; IGI Global: Hershey, PA, USA, 2020; pp. 17–35.
11. Antar, A.D.; Ahmed, M.; Ahad, M.A.R. Challenges in sensor-based human activity recognition and a comparative analysis of benchmark datasets: A review. In Proceedings of the 2019 Joint 8th International Conference on Informatics, Electronics & Vision (ICIEV) and 2019 3rd International Conference on Imaging, Vision & Pattern Recognition (icIVPR), Spokane, WA, USA, 30 May–2 June 2019; pp. 134–139.
12. Prati, A.; Shan, C.; Wang, K.I.K. Sensors, vision and networks: From video surveillance to activity recognition and health monitoring. J. Ambient. Intell. Smart Environ. 2019, 11, 5–22.
13. Ghosh, A.; Chakraborty, A.; Chakraborty, D.; Saha, M.; Saha, S. UltraSense: A non-intrusive approach for human activity identification using heterogeneous ultrasonic sensor grid for smart home environment. J. Ambient. Intell. Humaniz. Comput. 2019, 1–22.
14. Kalimuthu, S.; Perumal, T.; Yaakob, R.; Marlisah, E.; Babangida, L. Human Activity Recognition based on smart home environment and their applications, challenges. In Proceedings of the 2021 International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE), Greater Noida, India, 4–5 March 2021; pp. 815–819.
15. Rosales Sanabria, A.; Kelsey, T.W.; Dobson, S.; Ye, J. Representation learning for minority and subtle activities in a smart home environment. J. Ambient. Intell. Smart Environ. 2019, 11, 495–513.
16. Tahir, S.F.; Fahad, L.G.; Kifayat, K. Key feature identification for recognition of activities performed by a smart-home resident. J. Ambient. Intell. Humaniz. Comput. 2020, 11, 2105–2115.
17. Cicirelli, F.; Fortino, G.; Giordano, A.; Guerrieri, A.; Spezzano, G.; Vinci, A. On the design of smart homes: A framework for activity recognition in home environment. J. Med. Syst. 2016, 40, 200.
18. Gil-Martín, M.; San-Segundo, R.; Fernández-Martínez, F.; de Córdoba, R. Human activity recognition adapted to the type of movement. Comput. Electr. Eng. 2020, 88, 106822.
19. Ramasamy Ramamurthy, S.; Roy, N. Recent trends in machine learning for human activity recognition—A survey. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2018, 8, e1254.
20. Arshad, M.H.; Bilal, M.; Gani, A. Human Activity Recognition: Review, Taxonomy and Open Challenges. Sensors 2022, 22, 6463.
21. Jain, Y.; Tang, C.I.; Min, C.; Kawsar, F.; Mathur, A. ColloSSL: Collaborative Self-Supervised Learning for Human Activity Recognition. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2022, 6, 1–28.
22. Ige, A.O.; Noor, M.H.M. A survey on unsupervised learning for wearable sensor-based activity recognition. Appl. Soft Comput. 2022, 127, 109363.
23. Ariza Colpas, P.; Vicario, E.; De-La-Hoz-Franco, E.; Pineres-Melo, M.; Oviedo-Carrascal, A.; Patara, F. Unsupervised human activity recognition using the clustering approach: A review. Sensors 2020, 20, 2702.
24. Kafle, S.; Dou, D. A heterogeneous clustering approach for Human Activity Recognition. In Proceedings of the International Conference on Big Data Analytics and Knowledge Discovery, DaWaK 2016, Porto, Portugal, 6–8 September 2016; Springer: Cham, Switzerland, 2016; pp. 68–81.
25. Ma, H.; Zhang, Z.; Li, W.; Lu, S. Unsupervised human activity representation learning with multi-task deep clustering. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2021, 5, 1–25.
26. Wang, J.; Chen, Y.; Hao, S.; Peng, X.; Hu, L. Deep learning for sensor-based activity recognition: A survey. Pattern Recognit. Lett. 2019, 119, 3–11.
27. Chen, K.; Zhang, D.; Yao, L.; Guo, B.; Yu, Z.; Liu, Y. Deep Learning for Sensor-based Human Activity Recognition: Overview, Challenges, and Opportunities. ACM Comput. Surv. (CSUR) 2021, 54, 1–40.
28. Nweke, H.F.; Teh, Y.W.; Al-Garadi, M.A.; Alo, U.R. Deep learning algorithms for human activity recognition using mobile and wearable sensor networks: State of the art and research challenges. Expert Syst. Appl. 2018, 105, 233–261.
29. Sharma, O. Deep challenges associated with deep learning. In Proceedings of the 2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COMITCon), Faridabad, India, 14–16 February 2019; pp. 72–75.
30. Whang, S.E.; Roh, Y.; Song, H.; Lee, J.G. Data collection and quality challenges in deep learning: A data-centric ai perspective. VLDB J. 2023, 1–23.
31. Hong, S.; Zhou, Y.; Shang, J.; Xiao, C.; Sun, J. Opportunities and challenges of deep learning methods for electrocardiogram data: A systematic review. Comput. Biol. Med. 2020, 122, 103801.
32. Liu, X.; Xie, L.; Wang, Y.; Zou, J.; Xiong, J.; Ying, Z.; Vasilakos, A.V. Privacy and security issues in deep learning: A survey. IEEE Access 2020, 9, 4566–4593.
33. Xu, G.; Li, H.; Ren, H.; Yang, K.; Deng, R.H. Data security issues in deep learning: Attacks, countermeasures, and opportunities. IEEE Commun. Mag. 2019, 57, 116–122.
34. Vellido, A. The importance of interpretability and visualization in machine learning for applications in medicine and health care. Neural Comput. Appl. 2020, 32, 18069–18083.
35. Gunning, D. Explainable artificial intelligence (xai). Def. Adv. Res. Proj. Agency (DARPA) Web 2017, 2, 1–36.
36. Miotto, R.; Wang, F.; Wang, S.; Jiang, X.; Dudley, J.T. Deep learning for healthcare: Review, opportunities and challenges. Briefings Bioinform. 2018, 19, 1236–1246.
37. Tjoa, E.; Guan, C. A survey on explainable artificial intelligence (xai): Toward medical xai. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 4793–4813.
38. Samek, W.; Müller, K.R. Towards explainable artificial intelligence. In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning; Springer: Cham, Switzerland, 2019; pp. 5–22.
39. Gordon, L.; Grantcharov, T.; Rudzicz, F. Explainable artificial intelligence for safe intraoperative decision support. JAMA Surg. 2019, 154, 1064–1065.
40. Longo, L.; Goebel, R.; Lecue, F.; Kieseberg, P.; Holzinger, A. Explainable artificial intelligence: Concepts, applications, research challenges and visions. In Proceedings of the International Cross-Domain Conference for Machine Learning and Knowledge Extraction, CD-MAKE 2020, Dublin, Ireland, 25–28 August 2020; Springer: Cham, Switzerland, 2020; pp. 1–16.
41. Payrovnaziri, S.N.; Chen, Z.; Rengifo-Moreno, P.; Miller, T.; Bian, J.; Chen, J.H.; Liu, X.; He, Z. Explainable artificial intelligence models using real-world electronic health record data: A systematic scoping review. J. Am. Med. Inform. Assoc. 2020, 27, 1173–1185.
42. Mirchi, N.; Bissonnette, V.; Yilmaz, R.; Ledwos, N.; Winkler-Schwartz, A.; Del Maestro, R.F. The Virtual Operative Assistant: An explainable artificial intelligence tool for simulation-based training in surgery and medicine. PLoS ONE 2020, 15, e0229596.
43. Lamy, J.B.; Sekar, B.; Guezennec, G.; Bouaud, J.; Séroussi, B. Explainable artificial intelligence for breast cancer: A visual case-based reasoning approach. Artif. Intell. Med. 2019, 94, 42–53.
44. Alicioglu, G.; Sun, B. A survey of visual analytics for Explainable Artificial Intelligence methods. Comput. Graph. 2022, 102, 502–520.
45. Fellous, J.M.; Sapiro, G.; Rossi, A.; Mayberg, H.; Ferrante, M. Explainable artificial intelligence for neuroscience: Behavioral neurostimulation. Front. Neurosci. 2019, 13, 1346.
46. Vassiliades, A.; Bassiliades, N.; Patkos, T. Argumentation and explainable artificial intelligence: A survey. Knowl. Eng. Rev. 2021, 36, E5.
47. Chakraborty, D.; Ivan, C.; Amero, P.; Khan, M.; Rodriguez-Aguayo, C.; Başağaoğlu, H.; Lopez-Berestein, G. Explainable artificial intelligence reveals novel insight into tumor microenvironment conditions linked with better prognosis in patients with breast cancer. Cancers 2021, 13, 3450.
48. Wang, M.; Abdelfattah, S.; Moustafa, N.; Hu, J. Deep Gaussian mixture-hidden Markov model for classification of EEG signals. IEEE Trans. Emerg. Top. Comput. Intell. 2018, 2, 278–287.
49. Silvina, A.; Bowles, J.; Hall, P. On predicting the outcomes of chemotherapy treatments in breast cancer. In Proceedings of the Conference on Artificial Intelligence in Medicine in Europe, AIME 2019, Poznan, Poland, 26–29 June 2019; Springer: Cham, Switzerland, 2019; pp. 180–190.
50. Pan, S.T.; Li, W.C. Fuzzy-HMM modeling for emotion detection using electrocardiogram signals. Asian J. Control 2020, 22, 2206–2216.
51. Boeker, M.; Riegler, M.A.; Hammer, H.L.; Halvorsen, P.; Fasmer, O.B.; Jakobsen, P. Diagnosing Schizophrenia from Activity Records using Hidden Markov Model Parameters. In Proceedings of the 2021 IEEE 34th International Symposium on Computer-Based Medical Systems (CBMS), Aveiro, Portugal, 7–9 June 2021; pp. 432–437.
52. Monroy, N.F.; Altuve, M. Hidden Markov model-based heartbeat detector using different transformations of ECG and ABP signals. In Proceedings of the 15th International Symposium on Medical Information Processing and Analysis, International Society for Optics and Photonics, Medelin, Colombia, 6–8 November 2019; Volume 11330, p. 113300S.
53. Fuse, T.; Kamiya, K. Statistical anomaly detection in human dynamics monitoring using a hierarchical dirichlet process hidden markov model. IEEE Trans. Intell. Transp. Syst. 2017, 18, 3083–3092.
54. Kwon, B.C.; Anand, V.; Severson, K.A.; Ghosh, S.; Sun, Z.; Frohnert, B.I.; Lundgren, M.; Ng, K. DPVis: Visual analytics with hidden markov models for disease progression pathways. IEEE Trans. Vis. Comput. Graph. 2020, 27, 3685–3700.
55. Emdadi, A.; Eslahchi, C. Auto-HMM-LMF: Feature selection based method for prediction of drug response via autoencoder and hidden Markov model. BMC Bioinform. 2021, 22, 33.
56. Huang, Q.; Cohen, D.; Komarzynski, S.; Li, X.M.; Innominato, P.; Lévi, F.; Finkenstädt, B. Hidden Markov models for monitoring circadian rhythmicity in telemetric activity data. J. R. Soc. Interface 2018, 15, 20170885.
57. Zhang, G.; Cai, B.; Zhang, A.; Stephen, J.M.; Wilson, T.W.; Calhoun, V.D.; Wang, Y.P. Estimating dynamic functional brain connectivity with a sparse hidden Markov model. IEEE Trans. Med. Imaging 2019, 39, 488–498.
58. Sang, X.; Xiao, W.; Zheng, H.; Yang, Y.; Liu, T. HMMPred: Accurate prediction of DNA-binding proteins based on HMM profiles and XGBoost feature selection. Comput. Math. Methods Med. 2020, 2020, 1384749.
59. Tago, K.; Jin, Q. Detection of anomaly health data by specifying latent factors with SEM and estimating hidden states with HMM. In Proceedings of the 2018 9th International Conference on Information Technology in Medicine and Education (ITME), Hangzhou, China, 19–21 October 2018; pp. 137–141.
60. Sabapathy, S.; Maruthu, S.; Krishnadhas, S.K.; Tamilarasan, A.K.; Raghavan, N. Competent and Affordable Rehabilitation Robots for Nervous System Disorders Powered with Dynamic CNN and HMM. Intell. Syst. Rehabil. Eng. 2022, 57–93.
61. Deters, J.K.; Rybarczyk, Y. Hidden Markov Model approach for the assessment of tele-rehabilitation exercises. Int. J. Artif. Intell. 2018, 16, 1–19.
62. Rybarczyk, Y.; Kleine Deters, J.; Cointe, C.; Esparza, D. Smart web-based platform to support physical rehabilitation. Sensors 2018, 18, 1344.
63. Albert, M.V.; Sugianto, A.; Nickele, K.; Zavos, P.; Sindu, P.; Ali, M.; Kwon, S. Hidden Markov model-based activity recognition for toddlers. Physiol. Meas. 2020, 41, 025003.
64. Cheng, X.; Huang, B. CSI-Based Human Continuous Activity Recognition Using GMM–HMM. IEEE Sens. J. 2022, 22, 18709–18717.
65. Li, Q.; Gravina, R.; Li, Y.; Alsamhi, S.H.; Sun, F.; Fortino, G. Multi-user activity recognition: Challenges and opportunities. Inf. Fusion 2020, 63, 121–135.
66. Jobanputra, C.; Bavishi, J.; Doshi, N. Human activity recognition: A survey. Procedia Comput. Sci. 2019, 155, 698–703.
67. Iloga, S.; Bordat, A.; Le Kernec, J.; Romain, O. Human activity recognition based on acceleration data from smartphones using HMMs. IEEE Access 2021, 9, 139336–139351.
68. Martindale, C.F.; Sprager, S.; Eskofier, B.M. Hidden Markov model-based smart annotation for benchmark cyclic activity recognition database using wearables. Sensors 2019, 19, 1820.
69. Tran, S.N.; Ngo, T.S.; Zhang, Q.; Karunanithi, M. Mixed-dependency models for multi-resident activity recognition in smart homes. Multimed. Tools Appl. 2020, 79, 23445–23460.
70. Qi, J.; Yang, P.; Hanneghan, M.; Tang, S.; Zhou, B. A hybrid hierarchical framework for gym physical activity recognition and measurement using wearable sensors. IEEE Internet Things J. 2018, 6, 1384–1393.
71. Kulkarni, S.; Jadhav, S.; Adhikari, D. A survey on human group activity recognition by analysing person action from video sequences using machine learning techniques. In Optimization in Machine Learning and Applications; Algorithms for Intelligent Systems; Kulkarni, A., Satapathy, S., Eds.; Springer: Singapore, 2020; pp. 141–153.
72. Tran, S.N.; Nguyen, D.; Ngo, T.S.; Vu, X.S.; Hoang, L.; Zhang, Q.; Karunanithi, M. On multi-resident activity recognition in ambient smart-homes. Artif. Intell. Rev. 2020, 53, 3929–3945.
73. Liu, H.; Hartmann, Y.; Schultz, T. Motion Units: Generalized sequence modeling of human activities for sensor-based activity recognition. In Proceedings of the 2021 29th European Signal Processing Conference (EUSIPCO), Dublin, Ireland, 23–27 August 2021; pp. 1506–1510.
74. Kim, K.; Jalal, A.; Mahmood, M. Vision-based Human Activity recognition system using depth silhouettes: A Smart home system for monitoring the residents. J. Electr. Eng. Technol. 2019, 14, 2567–2573.
75. Concone, F.; Re, G.L.; Morana, M. A fog-based application for human activity recognition using personal smart devices. ACM Trans. Internet Technol. (TOIT) 2019, 19, 1–20.
76. Sung-Hyun, Y.; Thapa, K.; Kabir, M.H.; Hee-Chan, L. Log-Viterbi algorithm applied on second-order hidden Markov model for human activity recognition. Int. J. Distrib. Sens. Netw. 2018, 14, 1550147718772541.
77. Siddiqi, M.H.; Alruwaili, M.; Ali, A.; Alanazi, S.; Zeshan, F. Human activity recognition using Gaussian mixture hidden conditional random fields. Comput. Intell. Neurosci. 2019, 2019, 8590560.
78. Gravina, R.; Li, Q. Emotion-relevant activity recognition based on smart cushion using multi-sensor fusion. Inf. Fusion 2019, 48, 1–10.
79. Saini, R.; Kumar, P.; Roy, P.P.; Dogra, D.P. A novel framework of continuous human-activity recognition using Kinect. Neurocomputing 2018, 311, 99–111.
80. Calvo, A.F.; Holguin, G.A.; Medeiros, H. Human activity recognition using multi-modal data fusion. In Proceedings of the Iberoamerican Congress on Pattern Recognition, CIARP 2018, Madrid, Spain, 19–22 November 2018; Springer: Cham, Switzerland, 2018; pp. 946–953.
81. Qingxin, X.; Wada, A.; Korpela, J.; Maekawa, T.; Namioka, Y. Unsupervised factory activity recognition with wearable sensors using process instruction information. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2019, 3, 1–23.
82. Zhao, J.; Basole, S.; Stamp, M. Malware Classification with GMM-HMM Models. arXiv 2021, arXiv:2103.02753.
83. Zhang, F.; Han, S.; Gao, H.; Wang, T. A Gaussian mixture based hidden Markov model for motion recognition with 3D vision device. Comput. Electr. Eng. 2020, 83, 106603.
84. Tian, F.; Zhou, Q.; Yang, C. Gaussian mixture model-hidden Markov model based nonlinear equalizer for optical fiber transmission. Opt. Express 2020, 28, 9728–9737.
85. Li, Y.; Hu, B.; Niu, T.; Gao, S.; Yan, J.; Xie, K.; Ren, Z. GMM-HMM-Based Medium- and Long-Term Multi-Wind Farm Correlated Power Output Time Series Generation Method. IEEE Access 2021, 9, 90255–90267.
86. Cheng, X.; Huang, B.; Zong, J. Device-free Human Activity Recognition Based on GMM-HMM using Channel State Information. IEEE Access 2021, 9, 76592–76601.
87. Lim, C.L.P.; Woo, W.L.; Dlay, S.S.; Gao, B. Heartrate-dependent heartwave biometric identification with thresholding-based GMM–HMM methodology. IEEE Trans. Ind. Inform. 2018, 15, 45–53.
88. Manouchehri, N.; Bouguila, N. Integration of Multivariate Beta-based Hidden Markov Models and Support Vector Machines with Medical Applications. In Proceedings of the International FLAIRS Conference, Hutchinson Island, FL, USA, 15–18 May 2022; Volume 35.
89. Oboh, B.S.; Bouguila, N. Unsupervised learning of finite mixtures using scaled dirichlet distribution and its application to software modules categorization. In Proceedings of the 2017 IEEE International Conference on Industrial Technology (ICIT), Toronto, ON, Canada, 22–25 March 2017; pp. 1085–1090.
90. Blei, D.M.; Kucukelbir, A.; McAuliffe, J.D. Variational inference: A review for statisticians. J. Am. Stat. Assoc. 2017, 112, 859–877.
91. Manouchehri, N.; Bouguila, N. A Non-parametric Bayesian Learning Model Using Accelerated Variational Inference on Multivariate Beta Mixture Models for Medical Applications. Int. J. Semant. Comput. 2022, 16, 449–469.
92. Manouchehri, N.; Bouguila, N.; Fan, W. Nonparametric variational learning of multivariate beta mixture models in medical applications. Int. J. Imaging Syst. Technol. 2021, 31, 128–140.
93. Manouchehri, N.; Kalra, M.; Bouguila, N. Online variational inference on finite multivariate Beta mixture models for medical applications. IET Image Process. 2021, 15, 1869–1882.
94. Manouchehri, N.; Bouguila, N.; Fan, W. Batch and online variational learning of hierarchical Dirichlet process mixtures of multivariate Beta distributions in medical applications. Pattern Anal. Appl. 2021, 24, 1731–1744.
95. Blei, D.M.; Jordan, M.I. Variational inference for Dirichlet process mixtures. Bayesian Anal. 2006, 1, 121–143.
96. Ranganath, R.; Tran, D.; Blei, D. Hierarchical variational models. In Proceedings of the 33rd International Conference on Machine Learning, PMLR, New York, NY, USA, 20–22 June 2016; pp. 324–333.
97. Kurz, C.F.; Holle, R. Demand for Medical Care by the Elderly: A Nonparametric Variational Bayesian Mixture Approach. In Proceedings of the 2017 Imperial College Computing Student Workshop (ICCSW 2017), London, UK, 26–27 September 2017; Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik: Dagstuhl, Germany, 2018.
98. Nathoo, F.; Lesperance, M.; Lawson, A.; Dean, C. Comparing variational Bayes with Markov chain Monte Carlo for Bayesian computation in neuroimaging. Stat. Methods Med. Res. 2013, 22, 398–423.
99. Rabiner, L.; Juang, B. An introduction to hidden Markov models. IEEE ASSP Mag. 1986, 3, 4–16.
100. Nguyen, H.; Azam, M.; Bouguila, N. Data Clustering using Variational Learning of Finite Scaled Dirichlet Mixture Models. In Proceedings of the 28th IEEE International Symposium on Industrial Electronics, ISIE 2019, Vancouver, BC, Canada, 12–14 June 2019; pp. 1391–1396.
101. Roggen, D.; Calatroni, A.; Rossi, M.; Holleczek, T.; Förster, K.; Tröster, G.; Lukowicz, P.; Bannach, D.; Pirkl, G.; Ferscha, A.; et al. Collecting complex activity datasets in highly rich networked sensor environments. In Proceedings of the 2010 Seventh International Conference on Networked Sensing Systems (INSS), Kassel, Germany, 15–18 June 2010; pp. 233–240.
102. Sagha, H.; Digumarti, S.T.; Millán, J.D.R.; Chavarriaga, R.; Calatroni, A.; Roggen, D.; Tröster, G. Benchmarking classification techniques using the Opportunity human activity dataset. In Proceedings of the 2011 IEEE International Conference on Systems, Man, and Cybernetics, Anchorage, AK, USA, 9–12 October 2011; pp. 36–40.
103. Anguita, D.; Ghio, A.; Oneto, L.; Parra Perez, X.; Reyes Ortiz, J.L. A public domain dataset for human activity recognition using smartphones. In Proceedings of the 21st International European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, Bruges, Belgium, 24–26 April 2013; pp. 437–442.
Table 1. Activities in the first and second runs.

             Standing   Walking   Lying   Sitting
First run    60%        20%       17%     3%
Second run   41%        24%       5%      30%
Table 2. Number of missing values in the first dataset.

Feature                            Number of NaNs in Each Feature
1, 2, 3                            454
4, 5, 6, 10, 11, 12, 28, 29, 30    20
13, 14, 15                         92
19, 20, 21                         1681
22, 23, 24                         311
34, 35, 36                         37,507
Table 3. Model performance evaluation results for the first dataset (%).

Method     Accuracy   Precision   Recall   F1-Score
SD-HMM     89.33      86.54       85.51    86.02
D-HMM      85.96      86.28       85.18    85.72
GMM-HMM    86.08      83.54       82.37    82.95
Table 4. Model performance evaluation results for the second dataset (%).

Method     Accuracy   Precision   Recall   F1-Score
SD-HMM     87.12      87.28       85.44    86.35
D-HMM      85.96      86.28       85.18    85.72
GMM-HMM    85.14      82.24       83.75    82.99
Table 5. Model performance evaluation results for the UCI HAR dataset (%).

Method     Accuracy   Precision   Recall   F1-Score
SD-HMM     86.17      85.05       86.83    85.93
D-HMM      85.22      84.38       85.57    84.97
GMM-HMM    84.31      82.47       82.33    82.39
