Article

Pattern Recognition Techniques for the Identification of Activities of Daily Living Using a Mobile Device Accelerometer

by Ivan Miguel Pires 1,2,*,†, Gonçalo Marques 2,†, Nuno M. Garcia 2,†, Francisco Flórez-Revuelta 3,†, Maria Canavarro Teixeira 4,5,†, Eftim Zdravevski 6,†, Susanna Spinsante 7,† and Miguel Coimbra 8,†
1 Computer Science Department, Polytechnic Institute of Viseu, 3504-510 Viseu, Portugal
2 Instituto de Telecomunicações, Universidade da Beira Interior, 6200-001 Covilhã, Portugal
3 Department of Computing Technology, University of Alicante, P.O. Box 99, E-03080 Alicante, Spain
4 UTC de Recursos Naturais e Desenvolvimento Sustentável, Polytechnic Institute of Castelo Branco, 6001-909 Castelo Branco, Portugal
5 CERNAS—Research Centre for Natural Resources, Environment and Society, Polytechnic Institute of Castelo Branco, 6001-909 Castelo Branco, Portugal
6 Faculty of Computer Science and Engineering, University Ss Cyril and Methodius, 1000 Skopje, Macedonia
7 Department of Information Engineering, Università Politecnica delle Marche, 60131 Ancona, Italy
8 Instituto de Telecomunicações, Faculdade de Ciências da Universidade do Porto, 4169-007 Porto, Portugal
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Electronics 2020, 9(3), 509; https://doi.org/10.3390/electronics9030509
Submission received: 5 February 2020 / Revised: 4 March 2020 / Accepted: 14 March 2020 / Published: 19 March 2020
(This article belongs to the Special Issue Machine Learning Techniques for Assistive Robotics)

Abstract:
The application of pattern recognition techniques to data collected from the accelerometers available in off-the-shelf devices, such as smartphones, allows for the automatic recognition of activities of daily living (ADLs). These data can later be used to create systems that monitor the behaviors of their users. The main contribution of this paper is the use of artificial neural networks (ANN) for the recognition of ADLs with the data acquired from the sensors available in mobile devices. First, the mobile device is used for data collection; the ANN is then trained on a less restrictive computational platform, and the trained network is applied on the mobile device for ADL identification. The motivation is to verify whether the overfitting problem can be solved using only the accelerometer data, which also requires fewer computational resources and reduces the energy expenditure of the mobile device compared with the use of multiple sensors. This paper presents a method based on ANN for the recognition of a defined set of ADLs and provides a comparative study of different ANN implementations to choose the most appropriate method for ADL identification. The results show an accuracy of 85.89% using deep neural networks (DNN).

1. Introduction

The accelerometer sensor commonly available in off-the-shelf mobile devices [1,2] measures the acceleration of the movement of the mobile device, enabling the recognition of activities of daily living (ADLs) [3]. After the development of a system architecture for the identification of ADLs, it could be, for example, integrated into the creation of a personal digital life coach [4], which is essential for the monitoring of elderly persons and persons with impairments, or for the training of certain lifestyles. The accelerometer enables the recognition of several motion activities, including running, walking on stairs, walking, and standing. Following previous research studies [5,6,7,8], several steps are incorporated in the recognition of ADLs, including data acquisition, data processing, data cleaning, feature extraction, data fusion, and data classification.
Several authors have studied the automatic recognition of ADLs [9,10,11,12,13,14], and artificial neural networks (ANN) are widely used [15,16]. The accelerometer was used for the identification of ADLs while comparing several ANN implementations with different frameworks, such as the multilayer perceptron (MLP) with the Neuroph [17] and Encog [18] frameworks, and the deep neural network (DNN) method with the DeepLearning4j [19] framework. The authors aimed to find the model that achieves the best accuracy in the recognition of running, walking, walking downstairs, walking upstairs, and standing. These five ADLs were selected based on the literature review, wherein different studies reported reliable results for these activities, to allow a comparison with the method implemented in this research. The fusion of the data acquired from the accelerometer with the data retrieved from the magnetometer and gyroscope sensors is available in the literature [20]. This paper analyzes different sets of features computed only from the accelerometer data in order to define the best combination of features. The main objective of this paper is to explore the use of different sets of features obtained using the accelerometer with the same datasets acquired for the previous study. After the comparison performed in [20] on the fusion of the data acquired from the accelerometer, magnetometer, and gyroscope sensors, we verified that one of the major problems is the overfitting obtained during the training phase of the ANN.
The frameworks presented in this study were used in [20] to verify which are the best methods for the recognition of ADLs using the sensors available in mobile devices. Despite achieving poorer accuracy, the MLP implementations with the Neuroph and Encog frameworks still have the benefit of adapting to the low resources of mobile devices, because these methods need less processing power and memory than the DNN method implemented with DeepLearning4j. Therefore, the primary motivation of this paper is to verify whether the overfitting problem can be solved using only the accelerometer data. Additionally, the authors aim to verify the accuracy of the proposed method using only one sensor and a smaller number of features for the training of the ANN, in order to use fewer computational resources and reduce the energy expenditure of the mobile device compared with the use of multiple sensors.
Thus, the main contribution of this paper is a comparison of three different ANN architectures using only the accelerometer data, to verify whether the overfitting problems are avoided. This paper presents the use of ANN for ADLs recognition with the data acquired from mobile sensors, and it also provides a comparative study of different implementations to find the most accurate method.
This paper is structured as follows: Section 2 presents work related to the identification of ADLs using the accelerometer sensor. Section 3 describes the steps used for the recognition of ADLs using the accelerometer sensor. Section 4 presents the discussion and results obtained during the research. Finally, Section 5 consists of the presentation of the conclusions regarding the results obtained.

2. Related Work

Several methods can be used for the automatic classification of ADLs with the data acquired from the accelerometer sensor available in off-the-shelf mobile devices [3,21]. Numerous studies in this field are presented in the literature, and it is not possible to include them all in this document. Table 1 presents an analysis of 43 studies on ADLs recognition using accelerometer data. The studies were selected according to the following criteria: (1) use of smartphones for data collection; (2) the features being clearly defined; (3) the methods being clearly defined; (4) the accuracy levels being presented. These studies are available in multiple databases, such as MDPI, Springer, and ACM, and were collected using the Google Scholar portal; still, the vast majority were found in the IEEE Xplore library. Across the different works analyzed, the methods that reported the best accuracies for the recognition of between 1 and 8 ADLs are the different types of ANN, including MLP and DNN methods, using statistical features.
The studies presented in Table 1 reported that the most recognized ADLs, with average accuracies higher than 85%, are walking, standing, walking upstairs, walking downstairs, and running. Therefore, these activities are considered in the proposed method. In total, 31 studies use smartphones located in the user's pocket, although some studies also located the smartphone around the waist, forearm, and wrist. Moreover, some studies combine the use of smartphones with other wearable sensors.
The ADLs recognition indicates an average accuracy between 87.93% and 88.80% using different methods. In addition, the ADLs reporting the best accuracies in the analyzed studies are walking, standing, walking upstairs, walking downstairs, and running. In total, 91% (N = 39) of the analyzed papers support walking recognition, reporting an average accuracy of 88.80%. The standing activity is included in 29 studies, representing 67% of our literature review, with an average accuracy of 88.65%. Walking upstairs and downstairs activities are supported by 25 (58%) and 23 (53%) studies, respectively; the first reports an average accuracy of 85.88% and the second of 85.5%. Finally, the running activity is assessed by 42% (N = 18) of the evaluated studies and reports 87.93% average accuracy.
Regarding the features used in the analyzed studies, the mean, standard deviation, maximum, minimum, correlation, variance, and median are the most used in the literature. In total, 86% (N = 37) of the analyzed papers use the mean feature, reporting an average accuracy of 85.74%. The standard deviation feature is included in 30 papers, representing 70% of the evaluated studies, and provides an average accuracy of 86.70%. The maximum and minimum values are included in 19 (44%) and 17 (40%) studies, respectively; the maximum feature reports an average accuracy of 87.47%, and the minimum feature 88.50%. The median and correlation features are used in 10 studies (23%) each and report average accuracies of 87.44% and 91.52%, respectively. Eight studies include the variance as a feature for ADLs recognition, reporting an average accuracy of 90.15%.
The implementations that reported an accuracy higher than 88% are the ANN, multi-column bidirectional long short-term memory (MBLSTM), Bayesian network, and random forest methods, with average accuracies between 88.65% and 91.29%. In total, 40% (N = 17) of the analyzed papers use ANN methods, reporting an average accuracy of 91.29%. Eight studies propose the random forest for ADLs recognition, reporting 90.53% average accuracy. The MBLSTM method provides 89.4% average accuracy, and the Bayesian network is used by three studies, reporting an average accuracy of 88.65%.
In summary, the accuracies reported are influenced by the number of ADLs recognized, the methods used, and the particular dataset. The identification of a smaller number of ADLs reported the best results in the literature. Following the ADLs and methods that reported the best results, our research focuses on the implementation of ANN for the recognition of five ADLs: standing, walking, running, and walking upstairs and downstairs. These ADLs were selected for our implementation because they are the most recognized in the literature, reporting reliable accuracies.

3. Methods

Based on the literature, combined with the system architecture proposed for the recognition of ADLs in [5,6,7,64], the methods that should be defined for each module of the proposed system are as follows: data acquisition, data processing, data fusion, and data classification. The data processing methods include data cleaning and feature extraction methods. Additionally, since this study only uses a single sensor, i.e., the accelerometer, the data fusion methods are not necessary.
Figure 1 represents the methodology and system architecture proposed by the authors in this paper. The data acquisition is performed with a mobile application using the accelerometer sensor available in commonly used, off-the-shelf mobile devices, during the running, walking, standing, walking upstairs, and walking downstairs activities. The acquired data are processed using data cleaning and feature extraction methods. After data processing, MLP and DNN methods are used for ADLs identification.

3.1. Data Acquisition

This study was based on the data previously acquired for [20], which consists of the acquisition of data related to five ADLs: standing (Figure 2), walking (Figure 3), running (Figure 4), walking upstairs (Figure 5), and walking downstairs (Figure 6). The data used for this study are available in a public repository [65] previously used in [20]. A visual presentation of the data collected for each activity is given in Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6.
The dataset comprises more than 2000 samples of five seconds of accelerometer data for each ADL. A mobile device placed in the front pocket of the user's pants was used for data acquisition. The data were acquired in a controlled environment, where, before the start of the data collection, the user had to select the ADL that he/she would perform. Five seconds of data were acquired every five minutes. When the user planned to perform another ADL, he/she had to stop the data collection and change the ADL selected in the mobile application.
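The sampling scheme described above (five-second windows of accelerometer data) can be sketched in a few lines. This is an illustrative Python sketch, not the authors' Android application; the 100 Hz sampling rate is an assumption, since the paper does not state the acquisition frequency.

```python
import numpy as np

# Hypothetical sampling rate; the paper does not state the actual value.
SAMPLING_RATE_HZ = 100
WINDOW_SECONDS = 5  # matches the five-second samples in the dataset
WINDOW_SIZE = SAMPLING_RATE_HZ * WINDOW_SECONDS

def split_into_windows(signal):
    """Split a 1-D acceleration stream into complete 5-second windows."""
    n_windows = len(signal) // WINDOW_SIZE
    return [signal[i * WINDOW_SIZE:(i + 1) * WINDOW_SIZE]
            for i in range(n_windows)]

# Toy stream: 2300 samples yield four complete windows; the tail is dropped.
stream = np.random.default_rng(0).normal(size=2300)
windows = split_into_windows(stream)
```

In practice each five-second window would carry the label of the ADL the user selected in the mobile application.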
Twenty-five individuals were selected for the experiments, all using the same mobile device, a BQ Aquaris 5.7 smartphone [66]. These individuals were aged between 16 and 60 years old, comprising five teenagers, five people between 40 and 60 years old, and the remaining randomly selected. Several environmental constraints were uncontrolled during the data acquisition, but we had control of the procedures related to the labeling of the different samples and the positioning of the device. As we acquired five seconds of data every five minutes, each individual spent around 7 h performing each ADL, for a total of around 35 h of data acquisition per individual.

3.2. Data Processing

This study applies a low-pass filter to the accelerometer data to clean it [67,68]. This is the first step of the data processing, and the module is completed with the extraction of different statistical features. They are the same as those described in [20], but computed only from the accelerometer data: the five largest distances between the maximum peaks; the mean, standard deviation, variance, and median of the maximum peaks; and the standard deviation, mean, maximum and minimum values, variance, and median of the raw signal.
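A minimal sketch of this processing stage is shown below, assuming a simple moving-average filter as the low-pass step and plain local maxima as the peak detector; the paper does not specify these details, so both choices, as well as the zero-padding of missing peak distances, are our assumptions.

```python
import numpy as np

def low_pass(x, k=5):
    """Crude moving-average low-pass filter (illustrative stand-in)."""
    return np.convolve(x, np.ones(k) / k, mode="same")

def local_maxima(x):
    """Indices of samples greater than both neighbors."""
    x = np.asarray(x, dtype=float)
    return np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1

def extract_features(window):
    """Feature vector loosely following the set described in the text."""
    window = np.asarray(window, dtype=float)
    peaks = local_maxima(window)
    peak_values = window[peaks]
    # Five largest distances (in samples) between consecutive maximum peaks,
    # zero-padded so the feature vector has a fixed length.
    gaps = np.diff(peaks) if len(peaks) > 1 else np.array([0])
    largest_gaps = np.sort(gaps)[::-1][:5].astype(float)
    largest_gaps = np.pad(largest_gaps, (0, 5 - len(largest_gaps)))
    peak_stats = ([peak_values.mean(), peak_values.std(),
                   peak_values.var(), np.median(peak_values)]
                  if len(peaks) else [0.0] * 4)
    raw_stats = [window.mean(), window.std(), window.max(),
                 window.min(), window.var(), np.median(window)]
    return np.concatenate([largest_gaps, peak_stats, raw_stats])

# Example: one synthetic five-second window.
features = extract_features(low_pass(np.sin(np.linspace(0, 20, 500))))
```

The resulting vector has 15 entries, matching the feature list above (5 peak distances, 4 peak statistics, and 6 raw-signal statistics).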

3.3. Data Classification

For the same purpose as [20], but using only the accelerometer data, this study aimed to recognize the five proposed ADLs based on the datasets presented in Figure 7. The granularity of the included features increases from dataset 1 to dataset 5; i.e., each dataset contains all the inputs of the previous ones, so dataset 5 contains all the defined features.
For this purpose, we used three different implementations with distinct configurations using free software available online. The MLP method was applied with the same settings in two different implementations, using the Neuroph [17] and Encog [18] frameworks. Additionally, we used the DeepLearning4j framework for the application of a DNN method [19]. These are Java-based frameworks that allow for the implementation of machine learning methods adapted to our data. All configurations implemented the sigmoid function as the activation function, a maximum of 4 × 10⁶ iterations, and backpropagation [69]. However, the learning rates applied in the MLP implementations and the DNN method differ: the value was 0.6 for the MLP implementations and 0.1 for the DNN method. The MLP implementations also included a momentum value equal to 0.4. Regarding the number of hidden layers, the MLP methods did not include hidden layers, while the DNN method implemented three hidden layers. The DNN method also included the Xavier function [70] as the weight initialization function, a seed value equal to 6, and L2 regularization [71]. After different tests and adjustments, we verified that these parameters reported more consistent results with the acquired data than others, so they were adopted in the developed method.
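As an illustration of the simpler of these configurations, the following numpy sketch trains a sigmoid network with no hidden layers using backpropagation with a learning rate of 0.6 and a momentum of 0.4, as in the MLP settings above. It is a toy stand-in for the Java-based Neuroph/Encog implementations, and the synthetic two-class data are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(6)  # seed value 6, echoing the DNN configuration

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_single_layer(X, Y, lr=0.6, momentum=0.4, iterations=2000):
    """Gradient descent with momentum on a bias-augmented sigmoid layer."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias input
    W = rng.normal(scale=0.1, size=(Xb.shape[1], Y.shape[1]))
    V = np.zeros_like(W)  # momentum buffer
    for _ in range(iterations):
        out = sigmoid(Xb @ W)
        grad = Xb.T @ (out - Y) / len(X)  # cross-entropy gradient
        V = momentum * V - lr * grad
        W += V
    return W

# Two linearly separable toy classes with one-hot targets.
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
Y = np.vstack([np.tile([1.0, 0.0], (50, 1)), np.tile([0.0, 1.0], (50, 1))])
W = train_single_layer(X, Y)
pred = sigmoid(np.hstack([X, np.ones((100, 1))]) @ W).argmax(axis=1)
```

The DNN configuration would add three hidden layers, a 0.1 learning rate, Xavier initialization, and L2 regularization on top of this basic loop.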
Additionally, the data classification was tested with both normalized and non-normalized data, implementing min-max normalization for the MLP method and normalization with the mean and standard deviation for the DNN method.
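The two normalization schemes can be sketched as column-wise operations on a feature matrix; this is a generic illustration of min-max and mean/standard-deviation (z-score) scaling, not the frameworks' internal code.

```python
import numpy as np

def min_max_normalize(X):
    """Scale each feature column to [0, 1] (used for the MLP implementations)."""
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # guard constant columns
    return (X - lo) / span

def z_score_normalize(X):
    """Center and scale with mean and standard deviation (used for the DNN)."""
    X = np.asarray(X, dtype=float)
    std = np.where(X.std(axis=0) > 0, X.std(axis=0), 1.0)
    return (X - X.mean(axis=0)) / std

# Toy feature matrix with three samples and two features.
X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
Xmm = min_max_normalize(X)
Xz = z_score_normalize(X)
```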

4. Results and Discussion

As the different implementations revealed overfitting during the creation of the different ANNs, the early-stop training technique was implemented, stopping the training at a limit of 4 × 10⁶ iterations. The results are presented in Figure 8 and Figure 9 for non-normalized and normalized data, respectively.
With the Neuroph framework, the results obtained had very low accuracies with normalized (between 20% and 30%) and non-normalized (between 20% and 40%) data. With the Encog framework, the results obtained also had a very low accuracy (between 20% and 40%) with non-normalized data, although, as expected, the neural networks trained with dataset 5 reported an accuracy of around 75%. When the data were normalized, the accuracy of the implemented method was always between 10% and 40%.
Next, for the implementation with the DeepLearning4j framework, the results obtained are higher than 70%; however, for non-normalized data, the results reported with dataset 5 have an accuracy lower than 30%, and for normalized data, the results decrease with a reduced number of features, with dataset 5 reporting the best results.
Two types of normalization were implemented with the acquired data: one based on the mean and standard deviation, and the other based on min-max. The accuracy reported for non-normalized data is better than that reported for data with min-max normalization. However, the results with all defined datasets increase with the application of L2 regularization and normalization with the mean and standard deviation.
Table 2 shows the maximum accuracies obtained with the MLP method with the Neuroph and Encog frameworks and with the DNN method with the DeepLearning4j framework. The DeepLearning4j framework reported the best accuracy, while the results obtained by the Neuroph and Encog frameworks are not satisfactory.
Analyzing the results presented in Table 2, the Neuroph framework always reported poor results, with an accuracy of 32.02% with dataset 5 using non-normalized data and 24.03% with dataset 3 using normalized data. Among the frameworks used in this study, the Neuroph framework reported the worst results, either because its architecture is not adapted to this type of data, or because it needs a larger number of samples for the training of the ANN. The Neuroph framework reported better results with a large number of inputs for the ANN.
The use of the Encog framework slightly improved the results obtained with normalized data, reporting an accuracy of 37.07% with dataset 2. However, the Encog framework reported a high accuracy with non-normalized data (74.45%). In contrast with the Neuroph framework, the best accuracies were attained by the implementations with a smaller number of inputs.
The major problem of the DeepLearning4j implementation is its resource consumption, which affects performance. However, the performance is only poor in the training phase; once trained, the ANN provides reliable results. DeepLearning4j always reported high accuracy with a large number of inputs: the results obtained were 80.35% accurate with non-normalized data and 85.89% with normalized data.
The results recommend the DNN method with all features extracted from the acquired data as the most reliable method for the identification of ADLs. However, before its implementation, the data should be normalized with the mean and standard deviation method, and the L2 regularization method should be applied. Based on the tests performed with the acquired data, the results obtained with this configuration are always higher than those obtained with the alternative configurations. The results obtained have a precision of 86.21%, a recall of 85.89%, and an F1 score of 86.05%.
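These metrics can be reproduced from a multi-class confusion matrix by macro-averaging the per-class values. The sketch below uses a small invented matrix (rows = true class, columns = predicted class), not the paper's data.

```python
import numpy as np

def macro_metrics(cm):
    """Macro-averaged precision, recall, and F1 from a confusion matrix."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / np.maximum(cm.sum(axis=0), 1)  # per predicted class
    recall = tp / np.maximum(cm.sum(axis=1), 1)     # per true class
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return precision.mean(), recall.mean(), f1.mean()

# Invented 3-class example.
cm = np.array([[40, 5, 5],
               [4, 45, 1],
               [6, 2, 42]])
p, r, f = macro_metrics(cm)
```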
In addition to this analysis, the confusion matrices for the different frameworks were computed and are presented in Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8. By analyzing Table 3, it is possible to verify that the number of true positive values in the recognition of walking upstairs, walking downstairs, and standing is very low, showing a high number of false negatives and true negatives using the MLP method with the Neuroph framework on non-normalized data. Next, Table 4 shows that the number of true positive values in the recognition of all ADLs is very low, with a high number of false negatives using the MLP method with the Neuroph framework on normalized data.
Following the analysis of Table 5, only running is recognized by the MLP method with the Encog framework on non-normalized data, with a high number of false negative values. In contrast, with the MLP method with the Encog framework on normalized data, walking is always correctly recognized, with 2000 true positive values, but it has 7999 false negative values. The high number of false negative values is also verified for the other ADLs, and the true negative and false positive values are also too high.
With the DNN method with the DeepLearning4j framework on non-normalized data, the number of true negatives is only low in the recognition of the standing activity, which reported a high number of false positive values. However, the standing activity also reported a high number of true positive values, while the other ADLs reported high false negative values. Finally, with the DNN method with the DeepLearning4j framework on normalized data, the true positive and true negative values are high for all recognized ADLs.
This paper highlights the results obtained with different datasets using only the accelerometer data for the creation of part of the method for the automatic recognition of several ADLs, including running, walking, walking upstairs and downstairs, and standing. The study also compares the results obtained with different types of ANNs that require low processing power for correct implementation in mobile devices.
The low accuracies verified with the Neuroph and Encog frameworks are related to the fact that the ANNs created are probably overfitted. Possible solutions include the acquisition of more data, the application of L2 regularization, the implementation of dropout regularization, the early stopping of the training, the use of batch normalization, or the use of a smaller number of features in the ANN. The DNN method with L2 regularization and normalized data reported the best results. The influence of the maximum number of iterations is not substantial, but, in some cases, it increases the accuracy of the ANN.
During the data acquisition, several constraints may exist that lead to the collection of noisy sensor data. The accelerometer is commonly available in all mobile devices, so the implementation of the system architecture for the recognition of ADLs and their environments is possible with all devices on the market. However, these are multitasking devices, and sometimes the data cannot be collected or are incorrectly collected, lowering the accuracy of the ADL recognition. Another example concerns the positioning of the mobile device, because the data are not correctly acquired during a call. Memory and processing power are profoundly affected by the performance of different tasks at the same time.
The main focus of this research was to explore the use of the accelerometer sensor for ADLs recognition. We found that the accuracy obtained is in line with previous results in the literature [20]. This study reports an accuracy of 85.89% in the recognition of five ADLs using the DNN method, according to Table 2. The results obtained with the implementation of our methods are not directly comparable with other studies, because the datasets and the source code of the implementations used by other authors are not publicly available; such a comparison would be essential to prove the reliability of our method. Considering the average of the accuracies reported by ANNs and their variants in the literature, the reported results (92% ± 6.55%) present better accuracies than those obtained in this study. However, taking into account only the average of the accuracies reported by the projects that identified more than one ADL, the results reported by other studies (90% ± 6.60%) are roughly equivalent to those published by our research. Finally, considering only the studies that recognized five or more ADLs, the results reported by these studies (90% ± 6.63%) are equivalent to the results obtained in this work.
In conclusion, the accuracy of ADLs recognition depends on several variables, including the conditions for data acquisition, the conditions for data processing, and the use of lightweight methods (local processing) or server-side processing [72]. As presented in [72], these conditions may cause failures in the data acquisition, the collection of incorrect data, or the nonexistence of data in some instances, causing improper recognition of ADLs. To mitigate the effects of inaccurate data, we implemented data cleaning methods; data imputation methods may also be useful for reducing the impact of unavailable data. The main possible problems are related to the incorrect or nonexistent recognition of the ADLs performed.
The main limitations of this study are related to the use of mobile devices for data acquisition. On the one hand, there is a lack of scientific evidence and research on the best position at which the mobile device should be located. On the other hand, other constraints during the data acquisition are related to its frequency, because it depends on the different processes running in the mobile device. During the experimental phase, the mobile application developed for the data acquisition wrote the data to text files; the latency of writing to these files also influences the data acquisition and processing. The use of local processing and lightweight methods reduces the lag of the connection with the network, but the different methods must always be optimized.
Taking into account the results obtained in [43], the number of ADLs recognized, the number of records for each ADL, and the features extracted are different in our study. Consequently, the accuracy obtained in our research with the DNN method is higher than the results reported by the authors of [43]. We expect that, under conditions similar to those of [43], we would obtain the same or better results. Nevertheless, this will be impossible to test, as the authors of [43] did not make their data publicly available.

5. Conclusions

This paper presents several approaches that use the accelerometer sensor commonly available in mobile devices for ADLs recognition. Furthermore, the main contribution of this document is a comparative study of different ANN implementations to find the most appropriate method for ADLs identification using only accelerometer data. The comparative study performed in this research recommends the use of DNN for the recognition of ADLs. We proposed the implementation of the trained DNN method, applied with the DeepLearning4j framework, in the system for the identification of ADLs using only the accelerometer sensor available in off-the-shelf mobile devices. The results show an accuracy of 85.89%, a precision of 86.21%, a recall of 85.89%, and an F1 score of 86.05%, using as features the five largest distances between the maximum peaks; the mean, standard deviation, variance, and median of the maximum peaks; and the standard deviation, mean, maximum and minimum values, variance, and median of the raw signal.
Nevertheless, this study has some limitations concerning the use of mobile devices. The lack of research on the best position of the mobile device for data collection is a relevant question. Moreover, the energy expenditure related to the processing power and the frequency of data acquisition is also a significant challenge, which the authors have addressed by using only accelerometer data. The authors verified that the overfitting problem is not avoided, but the results obtained using only accelerometer data are similar to those obtained with the use of multiple sensors. Additionally, the authors found that using only one sensor and a smaller number of features for the training of the ANN does not significantly decrease the accuracy of the results obtained. Still, it uses fewer computational resources and reduces the energy consumption of the mobile device compared with the use of multiple sensors.
As future work, other implementation settings for different machine learning methods will be studied. These implementations will include other types of data classification methods, e.g., ensemble learning methods and decision trees, to verify whether different approaches yield better results on our dataset. The dataset is publicly available, so other authors can use it and compare their methods with ours.

Author Contributions

Conceptualization, methodology, software, validation, formal analysis, investigation, writing—original draft preparation, and writing—review and editing: I.M.P., G.M., N.M.G., F.F.-R., M.C.T., E.Z., S.S., and M.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work is funded by FCT/MCTES through national funds, and when applicable, co-funded EU funds under the project UIDB/EEA/50008/2020 (Este trabalho é financiado pela FCT/MCTES através de fundos nacionais e quando aplicável cofinanciado por fundos comunitários no âmbito do projeto UIDB/EEA/50008/2020).

Acknowledgments

This article is based on work from COST Action IC1303: AAPELE—Architectures, Algorithms and Protocols for Enhanced Living Environments, and COST Action CA16226: SHELD-ON—Indoor living space improvement: Smart Habitat for the Elderly, supported by COST (European Cooperation in Science and Technology), a funding agency for research and innovation networks. More information at www.cost.eu.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Salazar, L.H.A.; Lacerda, T.; Nunes, J.V.; von Wangenheim, C.G. A systematic literature review on usability heuristics for mobile phones. Int. J. Mob. Hum. Comput. Interact. (IJMHCI) 2013, 5, 50–61. [Google Scholar] [CrossRef] [Green Version]
  2. Marques, G. Ambient Assisted Living and Internet of Things. In Harnessing the Internet of Everything (IoE) for Accelerated Innovation Opportunities; IGI Global: Hershey, PA, USA, 2019; p. 100. [Google Scholar] [CrossRef] [Green Version]
  3. Pedretti, L.W.; Early, M.B. Occupational Therapy: Practice Skills for Physical Dysfunction; Mosby: St. Louis, MO, USA, 2001. [Google Scholar]
  4. Garcia, N.M. A roadmap to the design of a personal digital life coach. In International Conference on ICT Innovations; Springer: Berlin, Germany, 2015; pp. 21–27. [Google Scholar]
  5. Pires, I.M.; Garcia, N.M.; Pombo, N.; Flórez-Revuelta, F. From Data Acquisition to Data Fusion: A Comprehensive Review and a Roadmap for the Identification of Activities of Daily Living Using Mobile Devices. Sensors 2016, 16, 184. [Google Scholar] [CrossRef] [PubMed]
  6. Pires, I.M.; Garcia, N.M.; Flórez-Revuelta, F. Multi-sensor data fusion techniques for the identification of activities of daily living using mobile devices. In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML/PKDD); CEUR: Porto, Portugal, 2015. [Google Scholar]
  7. Pires, I.M.; Garcia, N.M.; Pombo, N.; Flórez-Revuelta, F. Identification of Activities of Daily Living Using Sensors Available in off-the-shelf Mobile Devices: Research and Hypothesis. In Ambient Intelligence—Software and Applications—7th International Symposium on Ambient Intelligence (ISAmI 2016); Lindgren, H., De Paz, J.F., Novais, P., Fernández-Caballero, A., Yoe, H., Jiménez Ramírez, A., Villarrubia, G., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 121–130. [Google Scholar]
  8. Marques, G.; Pitarma, R.; Garcia, N.M.; Pombo, N. Internet of Things Architectures, Technologies, Applications, Challenges, and Future Directions for Enhanced Living Environments and Healthcare Systems: A Review. Electronics 2019, 8, 81. [Google Scholar] [CrossRef] [Green Version]
  9. Akhoundi, M.A.A.; Valavi, E. Multi-sensor fuzzy data fusion using sensors with different characteristics. arXiv 2010, arXiv:1010.6096. [Google Scholar]
  10. Banos, O.; Damas, M.; Pomares, H.; Rojas, I. On the Use of Sensor Fusion to Reduce the Impact of Rotational and Additive Noise in Human Activity Recognition. Sensors 2012, 12, 8039–8054. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  11. Dernbach, S.; Das, B.; Krishnan, N.C.; Thomas, B.L.; Cook, D.J. Simple and Complex Activity Recognition through Smart Phones. In Proceedings of the 2012 Eighth International Conference on Intelligent Environments, Guanajuato, Mexico, 26–29 June 2012; pp. 214–221. [Google Scholar] [CrossRef] [Green Version]
  12. Hsu, Y.; Chen, K.; Yang, J.; Jaw, F. Smartphone-based fall detection algorithm using feature extraction. In Proceedings of the 2016 9th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Datong, China, 15–17 October 2016; pp. 1535–1540. [Google Scholar] [CrossRef]
  13. Paul, P.; George, T. An effective approach for human activity recognition on smartphone. In Proceedings of the 2015 IEEE International Conference on Engineering and Technology (ICETECH), Coimbatore, India, 20 March 2015; pp. 1–3. [Google Scholar] [CrossRef]
  14. Shen, C.; Chen, Y.; Yang, G. On motion-sensor behavior analysis for human-activity recognition via smartphones. In Proceedings of the 2016 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA), Sendai, Japan, 29 February–2 March 2016; pp. 1–6. [Google Scholar] [CrossRef]
  15. Doya, K.; Wang, D. Exciting Time for Neural Networks. Neural Netw. 2015, 61, xv–xvi. [Google Scholar] [CrossRef]
  16. Wang, D. Pattern recognition: Neural networks in perspective. IEEE Expert 1993, 8, 52–60. [Google Scholar] [CrossRef]
  17. Neuroph. 2019. Available online: http://neuroph.sourceforge.net/ (accessed on 20 March 2019).
  18. Encog. 2017. Available online: http://www.heatonresearch.com/encog/ (accessed on 20 March 2019).
  19. Deeplearning4j. 2019. Available online: https://deeplearning4j.org/ (accessed on 20 March 2019).
  20. Pires, I.M.; Garcia, N.M.; Pombo, N.; Flórez-Revuelta, F.; Spinsante, S.; Teixeira, M.C. Identification of activities of daily living through data fusion on motion and magnetic sensors embedded on mobile devices. Pervasive Mob. Comput. 2018, 47, 78–93. [Google Scholar] [CrossRef]
  21. Zdravevski, E.; Lameski, P.; Trajkovik, V.; Kulakov, A.; Chorbev, I.; Goleva, R.; Pombo, N.; Garcia, N. Improving Activity Recognition Accuracy in Ambient-Assisted Living Systems by Automated Feature Engineering. IEEE Access 2017, 5, 5262–5280. [Google Scholar] [CrossRef]
  22. Gadebe, M.L.; Kogeda, O.P.; Ojo, S.O. Personalized Real Time Human Activity Recognition. In Proceedings of the 2018 5th International Conference on Soft Computing Machine Intelligence (ISCMI), Nairobi, Kenya, 21–22 November 2018; pp. 147–154. [Google Scholar] [CrossRef]
  23. Naved, M.M.A.; Uddin, M.Y.S. Adaptive Notifications Generation for Smartphone Users Based on their Physical Activities. In Proceedings of the 2018 5th International Conference on Networking, Systems and Security (NSysS), Dhaka, Bangladesh, 18–20 December 2018; pp. 1–9. [Google Scholar] [CrossRef]
  24. RoyChowdhury, I.; Saha, J.; Chowdhury, C. Detailed Activity Recognition with Smartphones. In Proceedings of the 2018 Fifth International Conference on Emerging Applications of Information Technology (EAIT), Kolkata, India, 12–13 January 2018; pp. 1–4. [Google Scholar] [CrossRef]
  25. Sukor, A.S.A.; Zakaria, A.; Rahim, N.A. Activity recognition using accelerometer sensor and machine learning classifiers. In Proceedings of the 2018 IEEE 14th International Colloquium on Signal Processing Its Applications (CSPA), Batu Feringghi, Malaysia, 9–10 March 2018; pp. 233–238. [Google Scholar] [CrossRef]
  26. Yan, N.; Chen, J.; Yu, T. A Feature Set for the Similar Activity Recognition Using Smartphone. In Proceedings of the 2018 10th International Conference on Wireless Communications and Signal Processing (WCSP), Hangzhou, China, 18–20 October 2018; pp. 1–6. [Google Scholar] [CrossRef]
  27. Lavanya, B.; Gayathri, G.S. Exploration and Deduction of Sensor-Based Human Activity Recognition System of Smart-Phone Data. In Proceedings of the 2017 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC), Coimbatore, India, 14–16 December 2017; pp. 1–5. [Google Scholar] [CrossRef]
  28. Li, G.; Huang, L.; Xu, H. iWalk: Let Your Smartphone Remember You. In Proceedings of the 2017 4th International Conference on Information Science and Control Engineering (ICISCE), Changsha, China, 21–23 July 2017; pp. 414–418. [Google Scholar] [CrossRef]
  29. Tsinganos, P.; Skodras, A. A smartphone-based fall detection system for the elderly. In Proceedings of the 10th International Symposium on Image and Signal Processing and Analysis, Ljubljana, Slovenia, 18–20 September 2017; pp. 53–58. [Google Scholar] [CrossRef]
  30. Wannenburg, J.; Malekian, R. Physical Activity Recognition From Smartphone Accelerometer Data for User Context Awareness Sensing. IEEE Trans. Syst. Man, Cybern. Syst. 2017, 47, 3142–3149. [Google Scholar] [CrossRef]
  31. Cardoso, N.; Madureira, J.; Pereira, N. Smartphone-based transport mode detection for elderly care. In Proceedings of the 2016 IEEE 18th International Conference on e-Health Networking, Applications and Services (Healthcom), Munich, Germany, 14–17 September 2016; pp. 1–6. [Google Scholar] [CrossRef]
  32. Dangu Elu Beily, M.; Badjowawo, M.D.; Bekak, D.O.; Dana, S. A sensor based on recognition activities using smartphone. In Proceedings of the 2016 International Seminar on Intelligent Technology and Its Applications (ISITIA), Lombok, Indonesia, 28–30 June 2016; pp. 393–398. [Google Scholar] [CrossRef]
  33. Sen, S.; Rachuri, K.K.; Mukherji, A.; Misra, A. Did you take a break today? Detecting playing foosball using your smartwatch. In Proceedings of the 2016 IEEE International Conference on Pervasive Computing and Communication Workshops (PerCom Workshops), Sydney, Australia, 14–18 March 2016; pp. 1–6. [Google Scholar] [CrossRef]
  34. Weiss, G.M.; Lockhart, J.W.; Pulickal, T.T.; McHugh, P.T.; Ronan, I.H.; Timko, J.L. Actitracker: A Smartphone-Based Activity Recognition System for Improving Health and Well-Being. In Proceedings of the 2016 IEEE International Conference on Data Science and Advanced Analytics (DSAA), Montreal, QC, Canada, 17–19 October 2016; pp. 682–688. [Google Scholar] [CrossRef]
  35. Wang, C.; Xu, Y.; Zhang, J.; Yu, W. SW-HMM: A Method for Evaluating Confidence of Smartphone-Based Activity Recognition. In Proceedings of the 2016 IEEE Trustcom/BigDataSE/ISPA, Tianjin, China, 23–26 August 2016; pp. 2086–2091. [Google Scholar] [CrossRef]
  36. Guo, H.; Chen, L.; Chen, G.; Lv, M. An Interpretable Orientation and Placement Invariant Approach for Smartphone Based Activity Recognition. In Proceedings of the 2015 IEEE 12th Intl Conf on Ubiquitous Intelligence and Computing and 2015 IEEE 12th Intl Conf on Autonomic and Trusted Computing and 2015 IEEE 15th Intl Conf on Scalable Computing and Communications and Its Associated Workshops (UIC-ATC-ScalCom), Beijing, China, 10–14 August 2015; pp. 143–150. [Google Scholar] [CrossRef]
  37. Kim, Y.; Kang, B.; Kim, D. Hidden Markov Model Ensemble for Activity Recognition Using Tri-Axis Accelerometer. In Proceedings of the 2015 IEEE International Conference on Systems, Man, and Cybernetics, Hong Kong, China, 9–12 October 2015; pp. 3036–3041. [Google Scholar] [CrossRef]
  38. Kwon, Y.; Kang, K.; Bae, C. Analysis and evaluation of smartphone-based human activity recognition using a neural network approach. In Proceedings of the 2015 International Joint Conference on Neural Networks (IJCNN), Killarney, Ireland, 12–16 July 2015; pp. 1–5. [Google Scholar] [CrossRef]
  39. Ling, Y.; Wang, H. Unsupervised Human Activity Segmentation Applying Smartphone Sensor for Healthcare. In Proceedings of the 2015 IEEE 12th Intl Conf on Ubiquitous Intelligence and Computing and 2015 IEEE 12th Intl Conf on Autonomic and Trusted Computing and 2015 IEEE 15th Intl Conf on Scalable Computing and Communications and Its Associated Workshops (UIC-ATC-ScalCom), Beijing, China, 10–14 August 2015; pp. 1730–1734. [Google Scholar] [CrossRef]
  40. Torres-Huitzil, C.; Nuno-Maganda, M. Robust smartphone-based human activity recognition using a tri-axial accelerometer. In Proceedings of the 2015 IEEE 6th Latin American Symposium on Circuits Systems (LASCAS), Montevideo, Uruguay, 24–27 February 2015; pp. 1–4. [Google Scholar] [CrossRef]
  41. Wang, C.; Zhang, W. Activity Recognition Based on Smartphone and Dual-Tree Complex Wavelet Transform. In Proceedings of the 2015 8th International Symposium on Computational Intelligence and Design (ISCID), Hangzhou, China, 12–13 December 2015; Volume 2, pp. 267–270. [Google Scholar] [CrossRef]
  42. Zainudin, M.N.S.; Sulaiman, M.N.; Mustapha, N.; Perumal, T. Activity recognition based on accelerometer sensor using combinational classifiers. In Proceedings of the 2015 IEEE Conference on Open Systems (ICOS), Bandar Melaka, Malaysia, 24–26 August 2015; pp. 68–73. [Google Scholar] [CrossRef]
  43. Zhang, L.; Wu, X.; Luo, D. Real-Time Activity Recognition on Smartphones Using Deep Neural Networks. In Proceedings of the 2015 IEEE 12th Intl Conf on Ubiquitous Intelligence and Computing and 2015 IEEE 12th Intl Conf on Autonomic and Trusted Computing and 2015 IEEE 15th Intl Conf on Scalable Computing and Communications and Its Associated Workshops (UIC-ATC-ScalCom), Beijing, China, 10–14 August 2015; pp. 1236–1242. [Google Scholar] [CrossRef]
  44. Aguiar, B.; Silva, J.; Rocha, T.; Carneiro, S.; Sousa, I. Monitoring physical activity and energy expenditure with smartphones. In Proceedings of the IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI), Hong Kong, China, 5–7 January 2014; pp. 664–667. [Google Scholar] [CrossRef]
  45. Fahim, M.; Lee, S.; Yoon, Y. SUPAR: Smartphone as a ubiquitous physical activity recognizer for u-healthcare services. In Proceedings of the 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 26–30 August 2014; pp. 3666–3669. [Google Scholar] [CrossRef]
  46. Khalifa, S.; Hassan, M.; Seneviratne, A. Feature selection for floor-changing activity recognition in multi-floor pedestrian navigation. In Proceedings of the 2014 Seventh International Conference on Mobile Computing and Ubiquitous Networking (ICMU), Singapore, 6–8 January 2014; pp. 1–6. [Google Scholar] [CrossRef]
  47. Duarte, F.; Lourenço, A.; Abrantes, A. Activity classification using a smartphone. In Proceedings of the 2013 IEEE 15th International Conference on e-Health Networking, Applications and Services (Healthcom 2013), Lisbon, Portugal, 9–12 October 2013; pp. 549–553. [Google Scholar] [CrossRef]
  48. Fan, L.; Wang, Z.; Wang, H. Human Activity Recognition Model Based on Decision Tree. In Proceedings of the 2013 International Conference on Advanced Cloud and Big Data, Nanjing, China, 13–15 December 2013; pp. 64–68. [Google Scholar] [CrossRef]
  49. Lau, S.L. Comparison of orientation-independent-based movement recognition system using classification algorithms. In Proceedings of the 2013 IEEE Symposium on Wireless Technology Applications (ISWTA), Kuching, Malaysia, 22–25 September 2013; pp. 322–326. [Google Scholar] [CrossRef]
  50. Mitchell, E.; Monaghan, D.; O’Connor, N.E. Classification of Sporting Activities Using Smartphone Accelerometers. Sensors 2013, 13, 5317–5337. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  51. Oshin, T.O.; Poslad, S. ERSP: An Energy-Efficient Real-Time Smartphone Pedometer. In Proceedings of the 2013 IEEE International Conference on Systems, Man, and Cybernetics, Manchester, UK, 13–16 October 2013; pp. 2067–2072. [Google Scholar] [CrossRef]
  52. Bujari, A.; Licar, B.; Palazzi, C.E. Movement pattern recognition through smartphone’s accelerometer. In Proceedings of the 2012 IEEE Consumer Communications and Networking Conference (CCNC), Las Vegas, NV, USA, 14–17 January 2012; pp. 502–506. [Google Scholar] [CrossRef]
  53. Kwapisz, J.R.; Weiss, G.M.; Moore, S.A. Activity Recognition Using Cell Phone Accelerometers. SIGKDD Explor. Newsl. 2011, 12, 74–82. [Google Scholar] [CrossRef]
  54. Lau, S.L.; David, K. Movement recognition using the accelerometer in smartphones. In Proceedings of the 2010 Future Network Mobile Summit, Florence, Italy, 16–18 June 2010; pp. 1–9. [Google Scholar]
  55. Lau, S.L.; König, I.; David, K.; Parandian, B.; Carius-Düssel, C.; Schultz, M. Supporting patient monitoring using activity recognition with a smartphone. In Proceedings of the 2010 7th International Symposium on Wireless Communication Systems, York, UK, 19–22 September 2010; pp. 810–814. [Google Scholar] [CrossRef]
  56. Stenneth, L.; Wolfson, O.; Yu, P.S.; Xu, B. Transportation mode detection using mobile phones and GIS information. In Proceedings of the 19th ACM SIGSPATIAL international conference on advances in geographic information systems, Chicago, IL, USA, 1–4 November 2011; pp. 54–63. [Google Scholar]
  57. Borazio, M.; Van Laerhoven, K. Using time use with mobile sensor data: A road to practical mobile activity recognition? In Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia, Lulea, Sweden, 2–5 December 2013; pp. 1–10. [Google Scholar]
  58. Hong, J.H.; Ramos, J.; Shin, C.; Dey, A.K. An activity recognition system for ambient assisted living environments. In International Competition on Evaluating AAL Systems Through Competitive Benchmarking; Springer: Berlin, Germany, 2012; pp. 148–158. [Google Scholar]
  59. Ignatov, A.D.; Strijov, V.V. Human activity recognition using quasiperiodic time series collected from a single tri-axial accelerometer. Multimed. Tools Appl. 2016, 75, 7257–7270. [Google Scholar] [CrossRef]
  60. Khan, A.M.; Siddiqi, M.H.; Lee, S.W. Exploratory data analysis of acceleration signals to select light-weight and accurate features for real-time activity recognition on smartphones. Sensors 2013, 13, 13099–13122. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  61. Pereira, J.D.; da Silva e Silva, F.J.; Coutinho, L.R.; de Tácio Pereira Gomes, B.; Endler, M. A movement activity recognition pervasive system for patient monitoring in ambient assisted living. In Proceedings of the 31st Annual ACM Symposium on Applied Computing, Pisa, Italy, 4–8 April 2016; pp. 155–161. [Google Scholar]
  62. Torres-Huitzil, C.; Alvarez-Landero, A. Accelerometer-based human activity recognition in smartphones for healthcare services. In Mobile Health; Springer: Berlin, Germany, 2015; pp. 147–169. [Google Scholar]
  63. Tao, D.; Wen, Y.; Hong, R. Multicolumn bidirectional long short-term memory for mobile devices-based human activity recognition. IEEE Internet Things J. 2016, 3, 1124–1134. [Google Scholar] [CrossRef]
  64. Zdravevski, E.; Risteska Stojkoska, B.; Standl, M.; Schulz, H. Automatic machine-learning based identification of jogging periods from accelerometer measurements of adolescents under field conditions. PLoS ONE 2017, 12, e0184216. [Google Scholar] [CrossRef] [PubMed]
  65. Github. Impires/August_2017-_Multi-Sensor_Data_Fusion_in_Mobile_Devices_for_the_Identification_of_Activities_of_Dail. 2018. Available online: https://github.com/impires/August_2017-_Multi-sensor_data_fusion_in_mobile_devices_for_the_identification_of_activities_of_dail (accessed on 20 March 2019).
  66. BQ. Smartphones BQ Aquaris | BQ Portugal. 2019. Available online: https://www.bq.com/pt/smartphones (accessed on 20 March 2019).
  67. Graizer, V. Effect of low-pass filtering and re-sampling on spectral and peak ground acceleration in strong-motion records. In Proceedings of the 15th World Conference of Earthquake Engineering, Lisbon, Portugal, 24–28 September 2012; pp. 24–28. [Google Scholar]
  68. Lameski, P.; Zdravevski, E.; Koceski, S.; Kulakov, A.; Trajkovik, V. Suppression of Intensive Care Unit False Alarms Based on the Arterial Blood Pressure Signal. IEEE Access 2017, 5, 5829–5836. [Google Scholar] [CrossRef]
  69. Hajela, P.; Berke, L. Neural networks in structural analysis and design: An overview. Comput. Syst. Eng. 1992, 3, 525–538. [Google Scholar] [CrossRef]
  70. prateekvjoshi. Understanding Xavier Initialization in Deep Neural Networks. 2016. Available online: https://prateekvjoshi.com/2016/03/29/understanding-xavier-initialization-in-deep-neural-networks/ (accessed on 20 March 2019).
  71. Ng, A.Y. Feature Selection, L1 vs. L2 Regularization, and Rotational Invariance. In ICML ’04, Proceedings of the Twenty-first International Conference on Machine Learning, Banff, AB, Canada, 4–8 July 2004; ACM: New York, NY, USA, 2004; p. 78. [Google Scholar] [CrossRef] [Green Version]
  72. Pires, I.M.; Garcia, N.M.; Pombo, N.; Flórez-Revuelta, F. Limitations of the Use of Mobile Devices and Smart Environments for the Monitoring of Ageing People. In Proceedings of the 4th International Conference on Information and Communication Technologies for Ageing Well and e-Health-Volume 1: HSP; Science and Technology Publications: Setúbal, Portugal, 2018; pp. 269–275, ISBN 978-989-758-299-8. [Google Scholar] [CrossRef]
Figure 1. Methodology and system architecture for the recognition of activities of daily living (ADLs).
Figure 2. Acceleration (m/s²)—five seconds of data collected during the activity of standing.
Figure 3. Acceleration (m/s²)—five seconds of data collected during the activity of walking.
Figure 4. Acceleration (m/s²)—five seconds of data collected during the activity of running.
Figure 5. Acceleration (m/s²)—five seconds of data collected during the activity of walking upstairs.
Figure 6. Acceleration (m/s²)—five seconds of data collected during the activity of walking downstairs.
Figure 7. Datasets created for the analysis and recognition of the different ADLs.
Figure 8. Accuracy (vertical axis, in percent) obtained for each dataset (series) with the MLP method implemented with non-normalized data in the Neuroph and Encog frameworks, and with the DNN method implemented in the DeepLearning4j framework (horizontal axis).
Figure 9. Accuracy (vertical axis, in percent) obtained for each dataset (series) with the MLP method implemented with normalized data in the Neuroph and Encog frameworks, and with the DNN method implemented in the DeepLearning4j framework (horizontal axis).
Table 1. Summary of the studies available in the literature.
Study | Number of ADLs | ADLs Recognized | Features | Proposed Methods and Accuracy | Device Location
[22] | 8 | Standing; sitting; laying; walking; walking upstairs; walking downstairs; running; Nordic walking | Standard deviation; mean; maximum; minimum | 73% (Majority Vote Naïve Bayes Nearest Neighbor algorithm (MVNBNN)) | Smartphone located in the front trouser pocket
[23] | 3 | Walking; running; walking upstairs | Mean; standard deviation; Euclidean norm of the mean; Euclidean norm of the standard deviation; correlation values; 25th and 75th percentile values; frequency; amplitude; peak frequency; number of peak values | 95% (KNN); 89% (Random Forest); 99% (SVM) | Smartphone in a pouch located around the waist
[24] | 3 | Slow walk; brisk walk; sitting | Mean; standard deviation; variance | 90.9% (SVM) | Smartphone located in the front trouser pocket
[25] | 6 | Standing; sitting; lying; walking upstairs; walking downstairs; walking | Minimum; maximum; mean; standard deviation; SMA; signal vector magnitude; tilt angle; power spectral density (PSD); signal entropy; spectral energy | 93.52% (Decision Tree); 69.72% (SVM); 87.2% (MLP) | Smartphone located in a trouser pocket freely chosen by the user
[26] | 4 | Walking downstairs; walking upstairs; walking; jogging | Mean; variance; standard deviation; maximum; minimum; correlation coefficient; mean crossing value; peak; spectral energy; power spectral density; interquartile range; DT-CWT | 68.56% (SVM); 90.35% (Random Forest); 94.65% (MLP); 85.99% (J48 Decision Tree); 93.44% (KNN); 80.32% (Naïve Bayes) | Smartphone located in the right jeans pocket
[27] | 6 | Sitting; standing; laying; walking; walking upstairs; walking downstairs | Mean; standard deviation; median absolute deviation; maximum; minimum; signal magnitude area; sum of the squares divided by the number of values; interquartile range; entropy; autoregression coefficients; correlation coefficient; index of the frequency segment with the largest magnitude; weighted average of the frequency segments to obtain a mean frequency; skewness; kurtosis; energy of a frequency interval within the 64 bins of the FFT of every window; angle between two vectors | 97.77% (Decision Tree); 89.99% (KNN); 95.55% (Naïve Bayes); 100% (Random Forest); 95.55% (SVM) | Smartphone located on the waist
[28] | 1 | Walking | Maximum; minimum; mean; range; RMS; standard deviation; zero crossing rate; kurtosis; spectral slope | 97.80% (SVM); 97.64% (Random Forest); 97.64% (Logistic); 98.11% (MLP) | Smartphone located in a pocket freely chosen by the user
[29] | 1 | Falling | Average absolute acceleration variation; impact duration; maximum; peak duration; activity level of a window that contains the impact; average acceleration of the free-fall stage; number of steps; skewness; kurtosis; interquartile range; power of the impact; standard deviation of the impact; square of the highest coefficient; number of peaks | 97.53% (KNN) | Not available
[30] | 5 | Jogging; walking; sitting; laying down; standing | Mean; maximum; minimum; median; SMA; median deviation; PCA; interquartile range | 94.32% (SVM); 98.74% (MLP); 91.10% (Naïve Bayes); 99% (KNN); 98.80% (Decision Tree); 99.01% (kStar) | Smartphone located in a pocket freely chosen by the user
[31] | 6 | Walking; standing; travel by car; travel by bus; travel by train; travel by metro | Mean; median; maximum; minimum; RMS; standard deviation; interquartile range; minimum average; maximum average; maximum peak height; average peak height; entropy; FFT spectral energy; skewness; kurtosis | 95.6% (J48 Decision Tree); 92.4% (SMO); 61.9% (Naïve Bayes) | Smartphone in a pocket (not specified)
[32] | 1 | Playing tennis | Mean; variance; correlation | 98.12% (Naïve Bayes); 99.61% (MLP); 99.91% (J48 Decision Tree); 100% (SVM) | Smartphone located on the forearm and in the subject's front pocket
[33] | 1 | Playing foosball | Mean; variance; covariance; energy; entropy | 95% (MLP) | Smartphone located in a pocket and smartwatch located on the wrist
[34] | 7 | Walking; jogging; walking upstairs; walking downstairs; standing; sitting; lying down | Mean and standard deviation for each axis; bin distribution; heuristic measure of wave periodicity | 90% (Random Forest) | Smartphone located in the front pants pocket
[35] | 5 | Walking; standing; running; walking upstairs; walking downstairs | Mean; variance; quartiles | 80% (Sliding-Window-based Hidden Markov Model (SW-HMM)) | Smartphones located on the belt, in the right jeans pocket, on the right arm, and on the right wrist
[36] | 5 | Running; walking; sitting; walking upstairs; walking downstairs | Mean; variance; standard deviation; median; maximum; minimum; RMS; zero crossing rate; skewness; kurtosis; spectral entropy | 80% (SVM) | Four smartphones located on the left upper arm, in the shirt pocket, in the front jeans pocket, and in the back jeans pocket
[37] | 6 | Walking; walking upstairs; walking downstairs; sitting; standing; laying | Mean; standard deviation | 83.55% (Hidden Markov Model Ensemble (HMME)) | Smartphone located on the waist
[38] | 4 | Walking; running; standing; sitting | Mean; maximum; minimum; median; standard deviation | 99% (MLP) | Smartphone located in the user's pants pocket
[13] | 4 | Walking; running; standing; sitting | Mean; minimum; maximum; standard deviation | 92% (Clustered KNN) | Smartphone located in the user's jeans pocket
[39] | 4 | Walking; running; sitting; standing | Mean; variance; bin distribution in the time and frequency domains; FFT spectral energy; correlation of the magnitude | 98.69% (Decision Tree) | Smartphone located in the user's trouser pocket
[40] | 5 | Standing; walking; walking upstairs; walking downstairs; running | Mean; standard deviation; percentiles | 92% (MLP) | Smartphone located at four locations: two front trouser pockets and two back trouser pockets
[41] | 6 | Standing; sitting; walking upstairs; walking downstairs; walking; jogging | Dual-tree complex wavelet transform (DT-CWT) statistical information and orientation | 76% (Random Forest); 73.8% (Instance-Based learning (IBk)); 67.4% (J48 Decision Tree); 67.4% (JRip) | Smartphone located in the user's trouser pocket
[42] | 6 | Walking downstairs; jogging; sitting; standing; walking upstairs; walking | Minimum; maximum; mean; standard deviation; zero crossing rate for each axis; correlation between axes | 92.4% (J48 Decision Tree); 91.7% (MLP); 84.3% (Likelihood Ratio (LR)) | Smartphone located in the front trouser leg pocket
[43] | 7 | Walking; running; standing; sitting; lying; walking upstairs; walking downstairs | Mean; minimum; maximum; standard deviation | 77% (DNN) | Smartphone located in the right pants pocket
[44] | 5 | Running; walking; standing; sitting; laying | Mean; median; maximum; minimum; root mean square (RMS); standard deviation; interquartile range; energy; entropy; skewness; kurtosis | 99.5% (Decision Tree) | Smartphone located on the belt or in the front trouser pocket
[45] | 4 | Walking; running; cycling; hopping | RMS; variance; correlation; energy | 97.69% (SVM) | Smartphone located in the front pants pocket
[46] | 3 | Walking upstairs; walking up on an escalator; walking on a ramp | Mean, standard deviation, skewness, kurtosis, average absolute deviation, and pairwise correlation of the three accelerometer axes; mean of the resultant acceleration | 80.59% (Decision Tables); 82.97% (J48 Decision Tree); 87.49% (Naïve Bayes); 89.20% (KNN); 87.86% (MLP) | Smartphone held in the right or left palm in front of the body
[47] | 4 | Walking; cycling; running; standing | Mean; standard deviation; correlation; power spectral density | 98% (Naïve Bayes); 83% (KNN); 95% (Decision Tree); 96% (SVM) | Smartphone located along the waist in the front pocket
[48] | 5 | Standing; walking; running; walking upstairs; walking downstairs | Mean; median; variance; standard deviation; maximum; minimum; range; RMS; FFT coefficients; FFT spectral energy | 88.32% (Decision Tree) | Smartphone located in different positions, such as in a bag, a trouser pocket, or the hands
[49] | 5 | Walking; sitting; standing; walking upstairs; walking downstairs | Mean; standard deviation; variance | 92.44% (KNN); 90.77% (Decision Tree); 90.4% (rule-based learner (JRip)); 92.91% (MLP) | Smartphone located in the user's trouser pocket
[50] | 6 | Walking; jogging; walking upstairs; walking downstairs; sitting; standing | Energy and variances of the coefficients of the discrete wavelet transform (DWT) | 79.9% (Naïve Bayes); 82.3% (MLP) | Smartphone located on the upper part of the user's back
[51] | 3 | Walking; jogging; running | Number of peaks; number of troughs; difference between the maximum peak and the minimum trough; sum of all peaks and troughs | 93.4% (J48 Decision Tree + Decision Table + Naïve Bayes) | Smartphone positioned on the palm, in the front trouser pocket, in a backpack, or in the top jacket pocket
[52] | 1 | Walking | Mean; standard deviation | 98% (MLP) | Smartphone located in the user's pocket
[53] | 6 | Walking; jogging; walking upstairs; walking downstairs; sitting; standing | Mean; standard deviation; average absolute difference; average resultant acceleration; time between peaks; binned distribution | 85.1% (J48 Decision Tree); 78.1% (Logistic Regression); 91.7% (MLP); 37.2% (Straw Man) | Smartphone located in the user's front pants leg pocket
[54] | 5 | Walking; standing; sitting; walking upstairs; walking downstairs | Mean, standard deviation, and correlation of the raw data; energy of the FFT; mean and standard deviation of the FFT components in the frequency domain | 95.62% (Bayesian Network); 97.81% (Naïve Bayes); 99.27% (KNN); 93.53% (JRip) | Smartphone located in the user's right trouser pocket
[55] | 5 | Walking; sitting; standing; walking upstairs; walking downstairs | Mean; standard deviation; variance; FFT energy; FFT information entropy | 91.37% (Decision Tree); 94.29% (KNN); 84.42% (SMO) | Smartphone located in the user's trouser pocket
[56] | 6 | Travel by car; travel by bus; travel by train; walking; travel by bike; standing | Average speed; average acceleration; average bus closeness; average rail closeness; average candidate bus closeness | 91.6% (Naïve Bayes); 92.5% (Bayesian Network); 92.2% (Decision Trees); 93.7% (Random Forest); 83.3% (MLP) | Smartphone located on the user's waist, on the arm, in a pocket, or in a bag
[57] | 11 | Sleeping; eating; personal care; working; studying; household work; socializing; sports; hobbies; mass media; travel by car | Average of acceleration; mean absolute difference (MAD) of the acceleration | 20.76% (SVM) | Smartphone located on the user's arm
[58] | 11 | Walking; reading; lying down; standing; rearranging books; picking up golf or tennis balls; cycling; falling down; eating; washing hands | Minimum; maximum; average; median; standard deviation; troughs and peaks of acceleration | 72% (Hybrid model) | Smartphone located on the user's arm
[59] | 5 | Walking; jogging; walking upstairs; walking downstairs; standing | Mean value; mean absolute value; difference between maximum and minimum values; total value of absolute differences | 96% (KNN) | Smartphone located on the user's waist
[60] | 6 | Standing; walking; walking upstairs; walking downstairs; running; hopping | FFT; 42-dimensional time-domain features | 72.62% (Autoregressive (AR) Model) | Smartphone located in different locations: front pants pocket (left), front pants pocket (right), back pants pocket (left), back pants pocket (right), and inner jacket pocket
[61] | 7 | Running; walking upstairs; walking downstairs; walking; standing; lying down | Average; median; standard deviation | 90.2% (IBk); 88.2% (Random Forest); 85.5% (Random Tree); 88.1% (J48); 80.3% (JRip); 85.8% (RepTree); 82.9% (MLP) | Smartphone located on the user's leg and waist, and wearable sensor located on the chest
[62] | 6 | Running; walking; standing; walking upstairs; walking downstairs | Standard deviation; mean; percentiles | 90.85% (Naïve Bayes); 87.35% (KNN); 81.16% (SVM) | Smartphone located in the front-right and back-left pockets
[63]5jumping; running; walking; walking downstairs; walking upstairsaverage acceleration; peaks83.8% (SVM); 83.4% (Empirical risk minimization (ERM)); 79.4% (K-NN); 86.8% (Bidirectional Long Short-Term Memory (BLSTM)); 89.4% (Multi-column Bidirectional Long Short-Term Memory (MBLSTM))Not available
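Most of the surveyed studies rely on simple time-domain statistics of the accelerometer signal (mean, standard deviation, variance, range, RMS). As a minimal illustrative sketch only, not code from any of the cited works, such a feature extractor over one window of acceleration magnitudes could look like this in Python (the helper name `extract_features` is hypothetical):

```python
import math
from statistics import mean, pstdev, pvariance

def extract_features(window):
    """Common time-domain features used by the surveyed studies:
    mean, standard deviation, variance, min, max, range, and RMS
    of one window of accelerometer magnitude samples."""
    return {
        "mean": mean(window),
        "std": pstdev(window),
        "variance": pvariance(window),
        "min": min(window),
        "max": max(window),
        "range": max(window) - min(window),
        "rms": math.sqrt(sum(v * v for v in window) / len(window)),
    }

# A constant window (e.g., standing still, ~9.81 m/s^2 gravity only)
# yields zero variance and zero range.
f = extract_features([9.81] * 50)
print(round(f["mean"], 2), f["variance"], f["range"])  # 9.81 0.0 0.0
```

Frequency-domain features such as FFT spectral energy would be computed analogously over the same window after a discrete Fourier transform.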
Table 2. Best accuracies obtained with the different frameworks and datasets.

| Data | Type of ANN | Framework | Dataset | Best Accuracy Achieved (%) |
|---|---|---|---|---|
| Non-normalized | MLP | Neuroph | 5 | 32.02 |
| Non-normalized | MLP | Encog | 1 | 74.45 |
| Non-normalized | DNN | DeepLearning4j | 5 | 80.35 |
| Normalized | MLP | Neuroph | 3 | 24.03 |
| Normalized | MLP | Encog | 2 | 37.07 |
| Normalized | DNN | DeepLearning4j | 5 | 85.89 |
Table 3. Confusion matrix of the results obtained with non-normalized data by the implementation of the MLP method with the Neuroph framework.

| | Walking Downstairs | Walking Upstairs | Running | Standing | Walking |
|---|---|---|---|---|---|
| True Positive | 2 | 3 | 1471 | 0 | 2000 |
| True Negative | 3474 | 3473 | 2005 | 3476 | 1476 |
| False Positive | 1998 | 1997 | 529 | 2000 | 0 |
| False Negative | 4526 | 4527 | 5995 | 4524 | 6524 |
Table 4. Confusion matrix of the results obtained with normalized data by the implementation of the MLP method with the Neuroph framework.

| | Walking Downstairs | Walking Upstairs | Running | Standing | Walking |
|---|---|---|---|---|---|
| True Positive | 0 | 0 | 162 | 0 | 2000 |
| True Negative | 2162 | 2162 | 2000 | 2162 | 162 |
| False Positive | 2000 | 2000 | 1838 | 2000 | 0 |
| False Negative | 5838 | 5838 | 6000 | 5838 | 7838 |
Table 5. Confusion matrix of the results obtained with non-normalized data by the implementation of the MLP method with the Encog framework.

| | Walking Downstairs | Walking Upstairs | Running | Standing | Walking |
|---|---|---|---|---|---|
| True Positive | 0 | 0 | 1001 | 0 | 0 |
| True Negative | 1001 | 1001 | 0 | 1001 | 1001 |
| False Positive | 2000 | 2000 | 999 | 2000 | 2000 |
| False Negative | 6999 | 6999 | 8000 | 6999 | 6999 |
Table 6. Confusion matrix of the results obtained with normalized data by the implementation of the MLP method with the Encog framework.

| | Walking Downstairs | Walking Upstairs | Running | Standing | Walking |
|---|---|---|---|---|---|
| True Positive | 1 | 0 | 0 | 0 | 2000 |
| True Negative | 2000 | 2001 | 2001 | 2001 | 1 |
| False Positive | 1999 | 2000 | 2000 | 2000 | 0 |
| False Negative | 6000 | 5999 | 5999 | 5999 | 7999 |
Table 7. Confusion matrix of the results obtained with non-normalized data by the implementation of the DNN method with the DeepLearning4j framework.

| | Walking Downstairs | Walking Upstairs | Running | Standing | Walking |
|---|---|---|---|---|---|
| True Positive | 290 | 0 | 0 | 2000 | 0 |
| True Negative | 7786 | 7999 | 8000 | 506 | 7999 |
| False Positive | 214 | 1 | 0 | 7494 | 1 |
| False Negative | 1710 | 2000 | 2000 | 0 | 2000 |
Table 8. Confusion matrix of the results obtained with normalized data by the implementation of the DNN method with the DeepLearning4j framework.

| | Walking Downstairs | Walking Upstairs | Running | Standing | Walking |
|---|---|---|---|---|---|
| True Positive | 1334 | 1639 | 1909 | 1985 | 1722 |
| True Negative | 7641 | 7317 | 7978 | 7941 | 7712 |
| False Positive | 359 | 683 | 22 | 59 | 288 |
| False Negative | 666 | 361 | 91 | 15 | 278 |
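The overall accuracy reported in Table 2 follows directly from the per-class counts: it is the number of correctly classified samples (the sum of the per-class true positives) divided by the total number of test samples. A minimal sketch in Python, using the true-positive counts from Table 8 (each of the five classes has 2000 test samples, 10,000 in total; the helper name `accuracy_from_counts` is hypothetical):

```python
def accuracy_from_counts(tp_per_class, total_samples):
    """Overall multi-class accuracy: correctly classified samples
    (the sum of per-class true positives) over all test samples."""
    return sum(tp_per_class) / total_samples

# True positives per class from Table 8
# (DNN, DeepLearning4j framework, normalized data).
tp = [1334, 1639, 1909, 1985, 1722]
print(round(100 * accuracy_from_counts(tp, 10_000), 2))  # 85.89
```

This reproduces the 85.89% best accuracy reported in Table 2 for the DNN with normalized data.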
