Sensors
  • Article
  • Open Access

21 February 2018

Approach for the Development of a Framework for the Identification of Activities of Daily Living Using Sensors in Mobile Devices

Affiliations:
1. Instituto de Telecomunicações, Universidade da Beira Interior, 6201-001 Covilhã, Portugal
2. Altranportugal, 1990-096 Lisbon, Portugal
3. ALLab—Assisted Living Computing and Telecommunications Laboratory, Computing Science Department, Universidade da Beira Interior, 6201-001 Covilhã, Portugal
4. ECATI, Universidade Lusófona de Humanidades e Tecnologias, 1749-024 Lisbon, Portugal
This article belongs to the Special Issue Smart Sensors for Mechatronic and Robotic Systems

Abstract

Sensors available on mobile devices allow the automatic identification of Activities of Daily Living (ADL). This paper describes an approach for the creation of a framework for the identification of ADL, taking into account several concepts, including data acquisition, data processing, data fusion, and pattern recognition, which can be mapped onto different modules of the framework. The proposed framework should identify ADL without an Internet connection, performing these tasks locally on the mobile device and taking into account its hardware and software limitations. The main purpose of this paper is to present a new approach for the creation of a framework for the recognition of ADL, analyzing the sensors available in mobile devices and the methods available in the literature.

1. Introduction

Sensors embedded in off-the-shelf mobile devices, e.g., accelerometers, gyroscopes, magnetometers, microphones, and Global Positioning System (GPS) receivers [1], may be used in the development of algorithms for the recognition of Activities of Daily Living (ADL) [2] and the environments in which they are carried out. These algorithms are part of the development of a Personal Digital Life Coach (PDLC) [3]. According to [3], a PDLC “(…) will monitor our actions and activities, be able to recognize its user state of mind, and propose measures that not only will allow the user to achieve his/her stated goals, but also to act as an intermediate health and well-being agent between the user and his/her immediate care givers (…)”. This work is related to the development of ambient assisted living (AAL) systems, and, due to the increasing demands in our society, it is a field with high importance [4]. Due to recent advances in technology, there is an increasing number of research studies in this field for the monitoring of people with impairments and older people in a plethora of situations by using AAL technologies, including mobile devices and smart environments [5].
Multi-sensor data fusion technologies may be implemented with mobile devices, because they incorporate several sensors, such as motion sensors, magnetic/mechanical sensors, acoustic sensors, and location sensors [6], improving the accuracy of the recognition of several types of activities, e.g., walking, running, going downstairs, going upstairs, watching TV, and standing, and of environments, e.g., bar, classroom, gym, library, kitchen, street, hall, living room, and bedroom. The activities included in the framework were selected among those previously recognized with the best accuracies; for the environments, since studies on environment recognition are scarce, the selection takes into account some of the environments previously recognized and the most common environments [7]. The recognition of ADL may be performed with motion, magnetic/mechanical, and location sensors, and the environments may be recognized with acoustic sensors. In order to improve the recognition of the ADL, the recognized environment may be fused with the features extracted from the other sensors.
In accordance with previous works [6,8,9], the main motivation of this paper is to present the architecture of a framework for the recognition of ADL and their environments, which takes advantage of a wide set of sensors available in a mobile device, while also aiming to reduce the current complexity and constraints in the development of these systems. The test and validation of this framework is currently the subject of another step of this research plan [9], which includes the acquisition of a dataset that contains approximately 2.7 h of data collected from the accelerometer, gyroscope, magnetometer, microphone, and GPS receiver for each activity and environment. During the collection phase, the data were acquired with the mobile device located in the front pocket of the trousers by 25 subjects aged between 16 and 60 years old, with different lifestyles (10 mainly active and 15 mainly sedentary) and genders (10 female and 15 male). The activities performed and the environments frequented were labelled by the users. The subjects used their personal mobile phones with their usual applications running, the most commonly used device being a BQ Aquaris device [10].
The identification of ADL and environments using sensors has been studied in recent years, and several methods and frameworks [11,12,13,14,15,16] have been implemented using smartphones. However, this is a complex problem that should be separated into different stages, such as data acquisition, data processing, data fusion, and artificial intelligence systems. The frameworks developed in previous studies are commonly focused only on some specific parts of the problem. For example, the Acquisition Cost-Aware QUery Adaptation (ACQUA) framework [17] has been designed for data acquisition and data processing, but it does not include all the steps needed for data processing.
There are no predefined standards for the creation of a framework for the recognition of ADL [18,19,20], and the most commonly implemented methods for the recognition of ADL rely on motion sensors. However, there are methods and sensors that can be fused for the creation of a structured framework, as a holistic approach to the identification of the ADL and environments presented in this paper.
Around the concept of sensor data fusion, the selection of the sensors to use is the first step in the creation of the framework, followed by the definition of a method for acquiring the data and, consequently, for processing them. Data processing includes data cleaning, data imputation, and feature extraction. Data segmentation techniques are not considered, as this study was designed for local execution on mobile devices, and, due to their low memory and processing power, only a short sample of the sensors' data can be used (initial research points to 5 s samples). This restriction makes data segmentation techniques unsuitable, while still making it possible to deploy the framework on resource-scarce devices. The final steps in the proposed framework are the selection of the best features and the application of artificial intelligence techniques, i.e., the implementation of three types of Artificial Neural Networks (ANN): Multilayer Perceptron (MLP) with Backpropagation, Feedforward Neural Networks (FNN) with Backpropagation, and Deep Neural Networks (DNN), in order to choose the best method for the accurate recognition of the ADL and environments.
The remaining sections of this paper are organized as follows: Section 2 presents the state of the art on this topic, including a set of methods for each module/stage. Section 3 presents the framework for the identification of ADL using the sensors available in off-the-shelf mobile devices, as well as the sensors and methods that may be used. Section 4 presents a discussion and conclusions about the new approach proposed.

3. Methods and Expected Results

The new approach proposed for the creation of the framework for the identification of ADL (Figure 1) is based on [6,8,9], and it is composed of several stages: the selection of the sensors; data acquisition and processing, including data cleaning, imputation, and feature extraction; data fusion; the identification of ADL with artificial intelligence, including pattern recognition and other machine learning techniques; and, at the end, the combination of the results obtained with the data available in the users' agenda.
In order to create a new approach for a framework for the identification of ADL and their environments, the architecture presented in Figure 1 and the set of methods presented in Section 2 are proposed for obtaining results with reliable accuracy.
Following the list of sensors available in off-the-shelf mobile devices, presented in Section 2.1, the sensors used in the framework should be selected dynamically, according to those available on the mobile device. The types of sensors selected for the framework are motion sensors, magnetic/mechanical sensors, acoustic sensors, and location sensors. The accelerometer is available on all mobile devices, but the gyroscope is only available on some; therefore, to allow the framework to run on all devices, two different methods should be implemented, one considering the data from the accelerometer and the gyroscope, and another considering only the data from the accelerometer. The magnetometer is likewise only available on some devices and should be handled in the same way. Regarding acoustic sensors, the microphone is available on all mobile devices. As for location sensors, the GPS receiver is available on most mobile devices and its data should be used in the framework whenever possible.
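As an illustration only (not the authors' implementation), the following minimal Python sketch selects the richest model variant supported by the sensors a device exposes; the names MODEL_VARIANTS and select_model_variant are hypothetical.
```python
# Choose a model variant based on the sensors the host device actually exposes.
MODEL_VARIANTS = {
    frozenset({"accelerometer"}): "acc_only",
    frozenset({"accelerometer", "gyroscope"}): "acc_gyro",
    frozenset({"accelerometer", "magnetometer"}): "acc_mag",
    frozenset({"accelerometer", "gyroscope", "magnetometer"}): "acc_gyro_mag",
}

def select_model_variant(available_sensors):
    """Pick the richest variant whose required sensors are all available."""
    usable = {s for s in available_sensors
              if s in {"accelerometer", "gyroscope", "magnetometer"}}
    # Prefer variants that use more sensors; the accelerometer is assumed
    # to always be present, gyroscope and magnetometer are optional.
    for required in sorted(MODEL_VARIANTS, key=len, reverse=True):
        if required <= usable:
            return MODEL_VARIANTS[required]
    raise RuntimeError("No accelerometer available; the framework cannot run")

# Example: a device with accelerometer and magnetometer but no gyroscope
print(select_model_variant({"accelerometer", "magnetometer", "microphone"}))  # acc_mag
```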
The data acquisition methods are not directly related to the development of the framework, because the different mobile operating systems provide different methodologies to acquire the different types of sensor data. Thus, the data acquisition methods presented in Section 2.2 should take into account the limitations of the mobile devices. Based on previous research studies and preliminary experiments, acquiring only 5 s of data from the selected sensors every 5 min is sufficient for the identification of ADL and environments.
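The acquisition policy can be sketched as follows; the callables for starting and stopping the sensors and reading their buffers are hypothetical placeholders, since they depend on the mobile operating system.
```python
import time

SAMPLE_SECONDS = 5        # length of each multi-sensor window
PERIOD_SECONDS = 5 * 60   # one window every 5 min

def acquisition_loop(start_sensors, read_buffers, stop_sensors, handle_sample):
    """Collect a 5 s multi-sensor sample every 5 min and pass it on for processing."""
    while True:
        start_sensors()                      # accelerometer, gyroscope, magnetometer,
        time.sleep(SAMPLE_SECONDS)           # microphone and GPS, when available
        sample = read_buffers()              # raw data captured during the 5 s window
        stop_sensors()                       # sensors off between windows to save power
        handle_sample(sample)                # hand the window to the processing modules
        time.sleep(PERIOD_SECONDS - SAMPLE_SECONDS)
```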
Following the creation of the new approach for a framework for the identification of ADL and their environments, the selection of data processing methods, presented in Section 2.3, should cover data cleaning, data imputation, and feature extraction.
The data cleaning methods adopted in the framework depend on the types of sensors. On the one hand, for the accelerometer, gyroscope, and magnetometer, the data cleaning method that should be applied is a low-pass filter, to remove the noise and the gravity component acquired during the data acquisition process. On the other hand, for the acoustic sensors, the data cleaning method that should be applied is the FFT, in order to extract the frequencies of the audio. As the location sensors return values that are by nature already a result (e.g., GPS coordinates), data cleaning methods are not significant for them. Nevertheless, as future work, it may be necessary to implement algorithms that increase the accuracy of these sensors, so as to better contribute to a quality data fusion process.
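A minimal sketch of these two cleaning steps, assuming NumPy/SciPy; the sampling rates, cut-off frequency, and filter order are illustrative, as the paper does not fix these values.
```python
import numpy as np
from scipy.signal import butter, filtfilt

def low_pass(signal, fs_hz=100.0, cutoff_hz=10.0, order=4):
    """Remove high-frequency noise from one accelerometer/gyroscope/magnetometer axis."""
    b, a = butter(order, cutoff_hz / (fs_hz / 2.0), btype="low")
    return filtfilt(b, a, signal)

def audio_spectrum(audio, fs_hz=16000.0):
    """Return the frequencies and FFT magnitudes of a microphone window."""
    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / fs_hz)
    return freqs, spectrum
```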
Data imputation methods are not essential in the development of the new approach for a framework for the identification of ADL and their environments, assuming that the data acquired from all sensors are always complete.
Regarding feature extraction, the features needed to recognize the ADL and their environments should be selected based on the type of sensor and on the features already reported in the literature and presented in Section 2.3.3. Firstly, the features selected for the accelerometer, gyroscope, and magnetometer are the five greatest distances between the maximum peaks; the average, standard deviation, variance, and median of the maximum peaks; and the standard deviation, average, maximum, minimum, variance, and median of the raw signal. Secondly, the features selected for the microphone are the standard deviation, average, maximum, minimum, variance, and median of the raw signal, and 26 MFCC coefficients. Finally, the feature selected for the GPS receiver is the distance travelled during the acquisition time.
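The sketch below illustrates the inertial-sensor features listed above for a single axis of one window; the MFCC coefficients (available, e.g., in common audio-processing libraries) and the GPS travelled distance are not shown here, and the use of scipy.signal.find_peaks is our assumption rather than a choice prescribed by the paper.
```python
import numpy as np
from scipy.signal import find_peaks

def inertial_features(signal):
    """Compute the peak-based and raw-signal statistics for one 5 s axis (NumPy array)."""
    peaks, _ = find_peaks(signal)
    peak_values = signal[peaks] if len(peaks) else np.array([0.0])
    peak_gaps = np.diff(peaks) if len(peaks) > 1 else np.array([0])
    five_largest_gaps = np.sort(peak_gaps)[-5:]          # distances between maximum peaks
    return {
        "peak_gap_top5": np.pad(five_largest_gaps, (5 - len(five_largest_gaps), 0)),
        "peak_mean": peak_values.mean(),
        "peak_std": peak_values.std(),
        "peak_var": peak_values.var(),
        "peak_median": np.median(peak_values),
        "raw_mean": signal.mean(),
        "raw_std": signal.std(),
        "raw_max": signal.max(),
        "raw_min": signal.min(),
        "raw_var": signal.var(),
        "raw_median": np.median(signal),
    }
```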
Before the presentation of the data fusion and pattern recognition methods to be used in the framework, the ADL and environments to recognize should be defined. This process should be executed with several sensors, which will be combined as presented in Figure 2 and Table 7, in the following stages (a code sketch is given after the list):
Figure 2. Sensors used for the recognition of Activities of Daily Living (ADL) and environments for each phase of development.
Table 7. Sensors, Activities of Daily Living (ADL), and environments for recognition with the framework proposed.
  • Firstly, the ADL are recognized with motion and magnetic/mechanical sensors;
  • Secondly, the identification of the environments is performed with acoustic sensors;
  • Finally, there are two options:
    The identification of standing activities with the fusion of the data acquired from motion and magnetic/mechanical sensors, and the environment recognized, where the number of ADL recognized depends on the number of sensors available;
    The identification of standing activities with the fusion of the data acquired from motion, magnetic/mechanical and location sensors, and the environment recognized, where the number of ADL recognized depends on the number of sensors available.
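The staged combination above can be summarised in the following sketch, where the three classifiers are placeholders for the ANN models discussed below and the dictionary keys are hypothetical.
```python
def recognize(window, classify_motion, classify_environment, classify_standing):
    """Three-stage recognition of one 5 s multi-sensor window."""
    adl = classify_motion(window["motion_features"])               # stage 1: motion/magnetic sensors
    environment = classify_environment(window["audio_features"])   # stage 2: acoustic sensors
    if adl == "standing":                                          # stage 3: refine standing ADL
        fused = dict(window["motion_features"], environment=environment)
        if "gps_features" in window:                               # location data, when available
            fused.update(window["gps_features"])                   # e.g., distance travelled
        adl = classify_standing(fused)                             # standing / sleeping / driving
    return adl, environment
```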
In identifying the environments, the intention is to identify the associated activity; i.e., the sound captured in a classroom is not only the sound of the room itself, but rather the sound of a class having a lesson in that classroom. In other words, an environment is considered as a place where some activity occurs at a given time of the day or of the week, so different types of “Street” environments will need to be considered, as they will have different audio signatures at different times of the day or week and, of course, on different streets. All the proposed environments shown in Figure 2 are therefore expected to be plural.
Firstly, the ADL to be identified with the framework will be going downstairs, going upstairs, running, walking, and standing, because they are among the ADL most often recognized with reliable accuracy in previous studies [7]. Secondly, the proposed environments to identify with the framework will be bar, classroom, gym, kitchen, library, street, hall, watching TV, and bedroom; because previous studies on the recognition of environments are very limited, the proposed framework takes into account the most common environments and some of the environments previously recognized [7]. Thirdly, the proposed ADL to distinguish with the framework will be sleeping and standing, because these may be confused with standing ADL, and the inclusion of the recognized environment as an input to the classification method will help in their accurate recognition. Finally, the proposed ADL to distinguish with the framework are sleeping, standing, and driving, because driving may also be confused with standing ADL and, in order to accurately distinguish these ADL, the recognized environment and the features extracted from the GPS receiver should be included. As the data for the creation of the methods for the recognition of ADL and environments were acquired under several conditions and from different people, the method generated with ANN will be generic and sensor calibration is not needed.
Based on the data fusion and pattern recognition methods defined in Section 2.4 and Section 2.5, the method selected for implementation in the new approach for a framework for the identification of ADL and their environments will be based on ANN methods, because, according to the literature, these report some of the best accuracies. However, the selection of the best type of ANN will be done by comparing the results obtained with the three types of ANN selected. The types of ANN that will be tested on the acquired data are the following (a training sketch is given after the list):
  • MLP with Backpropagation;
  • FNN with Backpropagation;
  • DNN.
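As an example of the first candidate, a minimal sketch using scikit-learn's MLPClassifier (the library choice and hidden layer sizes are ours, purely illustrative, and not prescribed by the paper):
```python
from sklearn.neural_network import MLPClassifier

def train_mlp(train_features, train_labels):
    """Train an MLP on the extracted feature vectors using gradient-based backpropagation."""
    model = MLPClassifier(hidden_layer_sizes=(64, 32),
                          activation="relu",
                          solver="adam",
                          max_iter=500)
    model.fit(train_features, train_labels)
    return model

# Usage: predicted_adl = train_mlp(X_train, y_train).predict(X_test)
```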
Regarding the data acquired from the GPS receiver, it can be useful to increase the accuracy of the identification of the ADL and their environments, but it can also be used to identify the location where the ADL are executed, in order to improve the comparison with the users' agenda presented in Section 2.6.
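For instance, the distance travelled during a window, or the proximity of a GPS fix to a location in the user's agenda, can be approximated with the haversine great-circle distance; this is a common choice, not a formula prescribed by the paper.
```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))
```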

4. Discussion and Conclusions

This paper presents the architecture of a new approach for a framework for the identification of ADL and their environments, using methods with reported good accuracy. The development of this framework, based on the system presented in [6,8,9], is one of the steps towards the creation of a personal digital life coach [3] using mobile devices.
The framework will be composed of several modules, such as data acquisition, data processing, data fusion, and a module implementing artificial intelligence techniques for the identification of the ADL and their environments.
The sensors used in the framework will be the accelerometer, gyroscope, magnetometer, microphone, and GPS receiver, in order to recognize several ADL, including going downstairs, going upstairs, running, walking, standing, sleeping, and driving, and their environments, including bar, classroom, gym, kitchen, library, street, hall, watching TV, and bedroom.
The sensors' data should be acquired and, before feature extraction, cleaning methods such as the low-pass filter and the FFT should be applied. Afterwards, the data fusion and pattern recognition methods should be applied for the recognition of ADL and environments.
This paper presents a conceptual definition of the framework for the recognition of ADL and their environments, proposing three possible methods for this purpose, based on the use of ANN. In order to define the best method, the future implementation of the proposed methods will compare the differences between them, including accuracy, performance, and suitability for a local processing framework for mobile devices. It will include the acquisition of a large set of sensor data related to the proposed ADL and environments for the creation of training and testing sets and for further validation of the developed methods. Additionally, and also as future work, the framework will allow each user to correct the ADL identified by the framework when it does not match the activity actually performed.
Due to the absence of previous studies that review the use of all sensors available in current off-the-shelf mobile devices, our proposed framework adapts to the number of sensors available on the mobile device used, providing reliable feedback in almost real time.

Acknowledgments

This work was supported by FCT project UID/EEA/50008/2013. The authors would also like to acknowledge the contribution of the COST Action IC1303–AAPELE–Architectures, Algorithms and Protocols for Enhanced Living Environments.

Author Contributions

All the authors have contributed with the structure, content, and writing of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Salazar, L.H.A.; Lacerda, T.; Nunes, J.V.; von Wangenheim, C.G. A Systematic Literature Review on Usability Heuristics for Mobile Phones. Int. J. Mob. Hum. Comput. Interact. 2013, 5, 50–61. [Google Scholar] [CrossRef]
  2. Foti, D.; Koketsu, J.S. Activities of daily living. In Pedretti’s Occupational Therapy: Practical Skills for Physical Dysfunction; Elsevier Health Sciences: Amsterdam, Netherlands, 2013; Volume 7, pp. 157–232. [Google Scholar]
  3. Garcia, N.M. A Roadmap to the Design of a Personal Digital Life Coach. In Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2016; Volume 399. [Google Scholar]
  4. Kleinberger, T.; Becker, M.; Ras, E.; Holzinger, A.; Müller, P. Ambient intelligence in assisted living: Enable elderly people to handle future interfaces. In Lecture Notes in Computer Science; Springer: Berlin, Germany, 2007; Volume 4555. [Google Scholar]
  5. Singh, D.; Kropf, J.; Hanke, S.; Holzinger, A. Ambient Assisted Living Technologies from the Perspectives of Older People and Professionals. In Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2017; Volume 10410. [Google Scholar]
  6. Pires, I.M.; Garcia, N.M.; Pombo, N.; Flórez-Revuelta, F. From Data Acquisition to Data Fusion: A Comprehensive Review and a Roadmap for the Identification of Activities of Daily Living Using Mobile Devices. Sensors 2016, 16, 184. [Google Scholar] [CrossRef] [PubMed]
  7. Pires, I.M.; Garcia, N.M.; Pombo, N.; Flórez-Revuelta, F.; Zdravevski, E.; Spinsante, S. Machine Learning Algorithms for the Identification of Activities of Daily Living Using Mobile Devices: A Comprehensive Review. engrXiv, engrxiv.org/k6rxa. 2018. (In Review) [Google Scholar]
  8. Pires, I.M.; Garcia, N.M.; Flórez-Revuelta, F. Multi-sensor data fusion techniques for the identification of activities of daily living using mobile devices. In Proceedings of the ECMLPKDD 2015 ECML PKDD—European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, Porto, Portugal, 7–11 September 2015. [Google Scholar]
  9. Pires, I.M.; Garcia, N.M.; Pombo, N.; Flórez-Revuelta, F. Identification of Activities of Daily Living Using Sensors Available in off-the-shelf Mobile Devices: Research and Hypothesis. In Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2016; Volume 476. [Google Scholar]
  10. Smartphones: BQ Aquaris and BQ Portugal. Available online: https://www.bq.com/pt/smartphones (accessed on 2 September 2017).
  11. Banos, O.; Damas, M.; Pomares, H.; Rojas, I. On the use of sensor fusion to reduce the impact of rotational and additive noise in human activity recognition. Sensors 2012, 12, 8039–8054. [Google Scholar] [CrossRef] [PubMed]
  12. Akhoundi, M.A.A.; Valavi, E. Multi-Sensor Fuzzy Data Fusion Using Sensors with Different Characteristics. arXiv preprint, 2010; arXiv:1010.6096. [Google Scholar]
  13. Paul, P.; George, T. An Effective Approach for Human Activity Recognition on Smartphone. In Proceedings of the 2015 IEEE International Conference on Engineering and Technology (ICETECH), Coimbatore, India, 20–20 March 2015; pp. 45–47. [Google Scholar]
  14. Hsu, Y.W.; Chen, K.H.; Yang, J.J.; Jaw, F.S. Smartphone-based fall detection algorithm using feature extraction. In Proceedings of the 9th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Datong, China, 15–17 October 2016. [Google Scholar]
  15. Dernbach, S.; Das, B.; Krishnan, N.C.; Thomas, B.L.; Cook, D.J. Simple and Complex Activity Recognition through Smart Phones. In Proceedings of the 8th International Conference on Intelligent Environments (IE), Guanajuato, Mexico, 26–29 June 2012. [Google Scholar]
  16. Shen, C.; Chen, Y.F.; Yang, G.S. On Motion-Sensor Behavior Analysis for Human-Activity Recognition via Smartphones. In Proceedings of the 2016 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA), Sendai, Japan, 29 February–2 March 2016. [Google Scholar]
  17. Misra, A.; Lim, L. Optimizing Sensor Data Acquisition for Energy-Efficient Smartphone-Based Continuous Event Processing. In Proceedings of the 12th IEEE International Conference on Mobile Data Management (MDM), Lulea, Sweden, 6–9 June 2011; pp. 88–97. [Google Scholar]
  18. D’Ambrosio, A.; Aria, M.; Siciliano, R. Accurate Tree-based Missing Data Imputation and Data Fusion within the Statistical Learning Paradigm. J. Classif. 2012, 29, 227–258. [Google Scholar] [CrossRef]
  19. Dong, J.; Zhuang, D.; Huang, Y.; Fu, J. Advances in multi-sensor data fusion: algorithms and applications. Sensors 2009, 9, 7771–7784. [Google Scholar] [CrossRef] [PubMed]
  20. King, R.C.; Villeneuve, E.; White, R.J.; Sherratt, R.S.; Holderbaum, W.; Harwin, W.S. Application of data fusion techniques and technologies for wearable health monitoring. Med. Eng. Phys. 2017, 42, 1–12. [Google Scholar] [CrossRef] [PubMed]
  21. White, R.M. A Sensor Classification Scheme. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 1987, 34, 124–126. [Google Scholar] [CrossRef] [PubMed]
  22. Bojinov, H.; Michalevsky, Y.; Nakibly, G.; Boneh, D. Mobile device identification via sensor fingerprinting. arXiv preprint, 2014; arXiv:1408.1416. [Google Scholar]
  23. Katevas, K.; Haddadi, H.; Tokarchuk, L. Sensingkit: Evaluating the sensor power consumption in ios devices. In Proceedings of the 12th International Conference on Intelligent Environments (IE), London, UK, 14–16 September 2016. [Google Scholar]
  24. Bersch, S.D.; Azzi, D.; Khusainov, R.; Achumba, I.E.; Ries, J. Sensor data acquisition and processing parameters for human activity classification. Sensors 2014, 14, 4239–4270. [Google Scholar] [CrossRef] [PubMed]
  25. Kang, S.; Lee, Y.; Min, C.; Ju, Y.; Park, T.; Lee, J.; Rhee, Y.; Song, J. Orchestrator: An active resource orchestration framework for mobile context monitoring in sensor-rich mobile environments. In Proceedings of the 2010 IEEE International Conference on Pervasive Computing and Communications (PerCom), Mannheim, Germany, 29 March–2 April 2010. [Google Scholar]
  26. Vallina-Rodriguez, N.; Crowcroft, J. ErdOS: Achieving energy savings in mobile OS. In Proceedings of the sixth international workshop on MobiArch, Bethesda, MD, USA, 28 June 2011; pp. 37–42. [Google Scholar]
  27. Priyantha, B.; Lymberopoulos, D.; Jie, L. LittleRock: Enabling Energy-Efficient Continuous Sensing on Mobile Phones. IEEE Pervasive Comput. 2011, 10, 12–15. [Google Scholar] [CrossRef]
  28. Lu, H.; Yang, J.; Liu, Z.; Lane, N.D.; Choudhury, T.; Campbell, A.T. The Jigsaw continuous sensing engine for mobile phone applications. In Proceedings of the 8th ACM Conference on Embedded Networked Sensor Systems, Zürich, Switzerland, 3–5 November 2010; pp. 71–84. [Google Scholar]
  29. Rachuri, K.K.; Mascolo, C.; Musolesi, M.; Rentfrow, P.J. SociableSense: Exploring the trade-offs of adaptive sampling and computation offloading for social sensing. In Proceedings of the 17th annual international conference on Mobile computing and networking, Las Vegas, NV, USA, 19–23 September 2011; pp. 73–84. [Google Scholar]
  30. Gupta, H.P.; Chudgar, H.S.; Mukherjee, S.; Dutta, T.; Sharma, K. A continuous hand gestures recognition technique for human-machine interaction using accelerometer and gyroscope sensors. IEEE Sens. J. 2016, 16, 6425–6432. [Google Scholar] [CrossRef]
  31. Deshpande, A.; Guestrin, C.; Madden, S.R.; Hellerstein, J.M.; Hong, W. Model-driven data acquisition in sensor networks. In Proceedings of the Thirtieth International Conference on Very Large Data Bases—2004, VLDB Endowment, Toronto, Canada, 31 August–3 September 2004; Volume 30, pp. 588–599. [Google Scholar]
  32. Kubota, H.; Kyokane, M.; Imai, Y.; Ando, K.; Masuda, S.I. A Study of Data Acquisition and Analysis for Driver’s Behavior and Characteristics through Application of Smart Devices and Data Mining. In Proceedings of the Third International Conference on Computer Science, Computer Engineering, and Education Technologies, Lodz, Poland, 19–21 September 2016. [Google Scholar]
  33. Ayu, M.A.; Mantoro, T.; Matin, A.F.A.; Basamh, S.S. Recognizing user activity based on accelerometer data from a mobile phone. In Proceedings of the 2011 IEEE Symposium on Computers & Informatics (ISCI), Kuala Lumpur, Malaysia, 20–23 March 2011. [Google Scholar]
  34. Banos, O.; Garcia, R.; Holgado-Terriza, J.A.; Damas, M.; Pomares, H.; Rojas, I.; Saez, A.; Villalonga, C. mHealthDroid: A novel framework for agile development of mobile health applications. In Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2014; Volume 8868. [Google Scholar]
  35. Chavan, V.B.; Mhala, N. Development of Hand Gesture Recognition Framework Using Surface EMG and Accelerometer Sensor for Mobile Devices. 2015. Available online: https://www.irjet.net/archives/V2/i5/IRJET-V2I542.pdf (accessed on 23 December 2017).
  36. Sarkar, M.; Haider, M.Z.; Chowdhury, D.; Rabbi, G. An Android based human computer interactive system with motion recognition and voice command activation. In Proceedings of the 5th International Conference on Informatics, Electronics and Vision (ICIEV), Dhaka, Bangladesh, 13–14 May 2016. [Google Scholar]
  37. Pires, I.M.; Garcia, N.M.; Pombo, N.; Flórez-Revuelta, F.; Rodríguez, N.D. Validation Techniques for Sensor Data in Mobile Health Applications. J. Sens. 2016, 2016, 2839372. [Google Scholar] [CrossRef]
  38. Lane, N.D.; Miluzzo, E.; Lu, H.; Peebles, D.; Choudhury, T.; Campbell, A.T. A survey of mobile phone sensing. IEEE Commun. Mag. 2010, 48. [Google Scholar] [CrossRef]
  39. Banos, O.; Toth, M.A.; Damas, M.; Pomares, H.; Rojas, I. Dealing with the effects of sensor displacement in wearable activity recognition. Sensors 2014, 14, 9995–10023. [Google Scholar] [CrossRef] [PubMed]
  40. Pejovic, V.; Musolesi, M. Anticipatory Mobile Computing. ACM Comput. Surv. 2015, 47, 1–29. [Google Scholar] [CrossRef]
  41. Lin, F.X.; Rahmati, A.; Zhong, L. Dandelion: A framework for transparently programming phone-centered wireless body sensor applications for health. In Proceedings of the 10th Wireless Health, San Diego, CA, USA, 5–7 October 2010. [Google Scholar]
  42. Postolache, O.; Girão, P.S.; Ribeiro, M.; Guerra, M.; Pincho, J.; Santiago, F.; Pena, A. Enabling telecare assessment with pervasive sensing and Android OS smartphone. In Proceedings of the 2011 IEEE International Workshop on Medical Measurements and Applications Proceedings (MeMeA), Bari, Italy, 30–31 May 2011. [Google Scholar]
  43. Jeffery, S.R.; Alonso, G.; Franklin, M.J.; Hong, W.; Widom, J. Declarative Support for Sensor Data Cleaning. In Lecture Notes in Computer Science; Springer: Berlin, Germany, 2006; Volume 2006. [Google Scholar]
  44. Tomar, D.; Agarwal, S. A survey on pre-processing and post-processing techniques in data mining. Int. J. Database Theory Appl. 2014, 7, 99–128. [Google Scholar] [CrossRef]
  45. Park, K.; Becker, E.; Vinjumur, J.K.; Le, Z.; Makedon, F. Human behavioral detection and data cleaning in assisted living environment using wireless sensor networks. In Proceedings of the 2nd International Conference on PErvasive Technologies Related to Assistive Environments, Corfu, Greece, 9–13 June 2009. [Google Scholar]
  46. Zhuang, Y.; Chen, L.; Wang, X.S.; Lian, J. A weighted moving average-based approach for cleaning sensor data. In Proceedings of the 27th International Conference on Distributed Computing Systems (ICDCS'07), Toronto, ON, Canada, 25–27 June 2007. [Google Scholar]
  47. Li, Z.; Wang, J.; Gao, J.; Li, B.; Zhou, F. A vondrak low pass filter for IMU sensor initial alignment on a disturbed base. Sensors 2014, 14, 23803–23821. [Google Scholar] [CrossRef] [PubMed]
  48. Graizer, V. Effect of low-pass filtering and re-sampling on spectral and peak ground acceleration in strong-motion records. In Proceedings of the 15th World Conference of Earthquake Engineering, Lisbon, Portugal, 24–28 September 2012. [Google Scholar]
  49. UiO: Fourier Analysis and Applications to Sound Processing. Available online: http://www.uio.no/studier/emner/matnat/math/MAT.../v12/part1.pdf (accessed on 27 August 2017).
  50. Ninness, B. Spectral Analysis Using the FFT. Available online: https://pdfs.semanticscholar.org/dd74/4c224d569bd9ae907b7527e7f2a92fafa19c.pdf (accessed on 27 August 2017).
  51. Vateekul, P.; Sarinnapakorn, K. Tree-Based Approach to Missing Data Imputation. In Proceedings of the IEEE International Conference on 2009 Data Mining Workshops (ICDMW '09), Miami, FL, USA, 6 December 2009; pp. 70–75. [Google Scholar]
  52. Ling, W.; Dong, M. Estimation of Missing Values Using a Weighted K-Nearest Neighbors Algorithm. In Proceedings of the International Conference on 2009 Environmental Science and Information Application Technology (ESIAT 2009), Wuhan, China; pp. 660–663.
  53. García-Laencina, P.J.; Sancho-Gómez, J.L.; Figueiras-Vidal, A.R.; Verleysen, M. K nearest neighbours with mutual information for simultaneous classification and missing data imputation. Neurocomputing 2009, 72, 1483–1493. [Google Scholar] [CrossRef]
  54. Rahman, S.A.; Rahman, S.A.; Huang, Y.; Claassen, J.; Kleinberg, S. Imputation of Missing Values in Time Series with Lagged Correlations. In Proceedings of the 2014 IEEE International Conference on Data Mining Workshop (ICDMW), Shenzhen, China, 14 December 2014. [Google Scholar]
  55. Batista, G.E.; Monard, M.C. A Study of K-Nearest Neighbour as an Imputation Method. HIS 2002, 87, 251–260. [Google Scholar]
  56. Hruschka, E.R.; Hruschka, E.R.; Ebecken, N.F.F. Towards Efficient Imputation by Nearest-Neighbors: A Clustering-Based Approach. In AI 2004: Advances in Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 2004; pp. 513–525. [Google Scholar]
  57. Luo, J.W.; Yang, T.; Wang, Y. Missing value estimation for microarray data based on fuzzy C-means clustering. In Proceedings of the Eighth International Conference on High-Performance Computing in Asia-Pacific Region, Beijing, China, 30 November–3 December 2005. [Google Scholar]
  58. Ni, D.; Leonard, J.D.; Guin, A.; Feng, C. Multiple Imputation Scheme for Overcoming the Missing Values and Variability Issues in ITS Data. J. Trans. Eng. 2005, 131, 931–938. [Google Scholar] [CrossRef]
  59. Smith, B.; Scherer, W.; Conklin, J. Exploring Imputation Techniques for Missing Data in Transportation Management Systems. Trans. Res. Rec. 2003, 1836, 132–142. [Google Scholar] [CrossRef]
  60. Qu, L.; Zhang, Y.; Hu, J.; Jia, L.; Li, L. A BPCA based missing value imputing method for traffic flow volume data. In Proceedings of the 2008 IEEE Intelligent Vehicles Symposium, Eindhoven, The Netherlands, 4–6 June 2008; pp. 985–990. [Google Scholar]
  61. Jiang, N.; Gruenwald, L. Estimating Missing Data in Data Streams. In Proceedings of the 12th International Conference on Database Systems for Advanced Applications (DASFAA'07), Bangkok, Thailand, 9–12 April 2007; pp. 981–987. [Google Scholar]
  62. Rahman, S.A.; Huang, Y.; Claassen, J.; Heintzman, N.; Kleinberg, S. Combining Fourier and lagged k-nearest neighbor imputation for biomedical time series data. J. Biomed. Inform. 2015, 58, 198–207. [Google Scholar] [CrossRef] [PubMed]
  63. Huang, X.-Y.; Li, W.; Chen, K.; Xiang, X.-H.; Pan, R.; Li, L.; Cai, W.-X. Multi-matrices factorization with application to missing sensor data imputation. Sensors 2013, 13, 15172–15186. [Google Scholar] [CrossRef] [PubMed]
  64. Rahman, S.A.; Huang, Y.; Claassen, J.; Kleinberg, S. Imputation of missing values in time series with lagged correlations. In Proceedings of the 2014 IEEE International Conference on Data Mining Workshop (ICDMW), Shenzhen, China, 14 December 2014; pp. 753–762. [Google Scholar]
  65. Smaragdis, P.; Raj, B.; Shashanka, M. Missing Data Imputation for Time-Frequency Representations of Audio Signals. J. Signal Process. Syst. 2010, 65, 361–370. [Google Scholar] [CrossRef]
  66. Bayat, A.; Pomplun, M.; Tran, D.A. A Study on Human Activity Recognition Using Accelerometer Data from Smartphones. In Proceedings of the 9th International Conference on Future Networks and Communications (Fnc'14)/the 11th International Conference on Mobile Systems and Pervasive Computing (Mobispc'14)/Affiliated Workshops, Ontario, Canada, 17–20 August 2014; pp. 450–457. [Google Scholar]
  67. Khalifa, S.; Hassan, M.; Seneviratne, A. Feature selection for floor-changing activity recognition in multi-floor pedestrian navigation. In Proceedings of the 2014 Seventh International Conference on Mobile Computing and Ubiquitous Networking (ICMU), Singapore, 6–8 January 2014. [Google Scholar]
  68. Zhao, K.L.; Du, J.; Li, C.; Zhang, C.; Liu, H.; Xu, C. Healthy: A Diary System Based on Activity Recognition Using Smartphone. In Proceedings of the 2013 IEEE 10th International Conference on Mobile Ad-Hoc and Sensor Systems (Mass 2013), Hangzhou, China, 14–16 October 2013; pp. 290–294. [Google Scholar]
  69. Zainudin, M.N.S.; Sulaiman, M.N.; Mustapha, N.; Perumal, T. Activity Recognition based on Accelerometer Sensor using Combinational Classifiers. In Proceedings of the 2015 IEEE Conference on Open Systems (ICOS), Bandar Melaka, Malaysia, 24–26 August 2015; pp. 68–73. [Google Scholar]
  70. Fan, L.; Wang, Z.M.; Wang, H. Human activity recognition model based on decision tree. In Proceedings of the 2013 International Conference on Advanced Cloud and Big Data (CBD), Nanjing, China, 13–15 December 2013; pp. 64–68. [Google Scholar]
  71. Liu, Y.Y.; Fang, Z.; Wenhua, S.; Haiyong, Z. An Hidden Markov Model based Complex Walking Pattern Recognition Algorithm. In Proceedings of the 2016 Fourth International Conference on Ubiquitous Positioning, Indoor Navigation and Location Based Services (IEEE UPINLBS 2016), Shanghai, China, 2–4 November 2016; pp. 223–229. [Google Scholar]
  72. Piyare, R.; Lee, S.R. Mobile Sensing Platform for Personal Health Management. In Proceedings of the 18th IEEE International Symposium on Consumer Electronics (ISCE 2014), JeJu Island, South Korea, 22–25 June 2014; pp. 1–2. [Google Scholar]
  73. Chen, Y.F.; Shen, C. Performance Analysis of Smartphone-Sensor Behavior for Human Activity Recognition. IEEE Access 2017, 5, 3095–3110. [Google Scholar] [CrossRef]
  74. Vavoulas, G.; Chatzaki, C.; Malliotakis, T.; Pediaditis, M.; Tsiknakis, M. The MobiAct Dataset: Recognition of Activities of Daily Living using Smartphones. In Proceedings of the International Conference on Information and Communication Technologies for Ageing Well and E-Health (ICT4AWE), Rome, Italy, 21–22 April 2016; pp. 143–151. [Google Scholar]
  75. Torres-Huitzil, C.; Nuno-Maganda, M. Robust smartphone-based human activity recognition using a tri-axial accelerometer. In Proceedings of the 2015 IEEE 6th Latin American Symposium on Circuits & Systems (Lascas), Montevideo, Uruguay, 24–27 February 2015; pp. 1–4. [Google Scholar]
  76. Anjum, A.; Ilyas, M.U. Activity Recognition Using Smartphone Sensors. In Proceedings of the 2013 IEEE Consumer Communications and Networking Conference (CCNC), Las Vegas, USA, 11–14 January 2013; pp. 914–919. [Google Scholar]
  77. Kumar, A.; Gupta, S. Human Activity Recognition through Smartphone’s Tri-Axial Accelerometer using Time Domain Wave Analysis and Machine Learning. Int. Comput. Appl. 2015, 127, 22–26. [Google Scholar]
  78. Hon, T.K.; Wang, L.; Reiss, J.D.; Cavallaro, A. Audio Fingerprinting for Multi-Device Self-Localization. IEEE/ACM Trans. Audio, Speech Lang. Process. 2015, 23, 1623–1636. [Google Scholar] [CrossRef]
  79. Sert, M.; Baykal, B.; Yazici, A. A Robust and Time-Efficient Fingerprinting Model for Musical Audio. In Proceedings of the 2006 IEEE International Symposium on Consumer Electronics, St. Petersburg, Russia, 28 June–1 July 2007. [Google Scholar]
  80. Ramalingam, A.; Krishnan, S. Gaussian Mixture Modeling of Short-Time Fourier Transform Features for Audio Fingerprinting. IEEE Trans. Inform. Forens. Secur. 2006, 1, 457–463. [Google Scholar] [CrossRef]
  81. Vincenty, T. Direct and Inverse Solutions of Geodesics on the Ellipsoid with Application of Nested equations. Surv. Rev. 1975, 22, 88–93. [Google Scholar] [CrossRef]
  82. Karney, C.F.F. Algorithms for Geodesics. J. Geodesy 2013, 87, 43–55. [Google Scholar] [CrossRef]
  83. Karney, C.F.F.; Deakin, R.E. The calculation of longitude and latitude from geodesic measurements. Astron. Nachr. 2010, 331, 852–861. [Google Scholar] [CrossRef]
  84. Khaleghi, B.; Khamisa, A.; Karraya, F.O.; Razavib, S.N. Multisensor data fusion: A review of the state-of-the-art. Inf. Fusion 2013, 14, 28–44. [Google Scholar] [CrossRef]
  85. Pombo, N.; Bousson, K.; Araújo, P.; Viana, J. Medical decision-making inspired from aerospace multisensor data fusion concepts. Inform Health Soc. Care 2015, 40, 185–197. [Google Scholar] [CrossRef] [PubMed]
  86. Durrant-Whyte, H.; Stevens, M.; Nettleton, E. Data fusion in decentralised sensing networks. In Proceedings of the 4th International Conference on Information Fusion, Montreal, Canada, 7–10 August 2001. [Google Scholar]
  87. Tanveer, F.; Waheed, O.T.; Atiq-ur-Rehman. Design and Development of a Sensor Fusion based Low Cost Attitude Estimator. J. Space Technol. 2011, 1, 45–50. [Google Scholar]
  88. Ko, M.H.; Westa, G.; Venkatesha, S.; Kumarb, M. Using dynamic time warping for online temporal fusion in multisensor systems. Inf. Fusion 2008, 9, 370–388. [Google Scholar] [CrossRef]
  89. Singh, D.; Merdivan, E.; Psychoula, I.; Kropf, J.; Hanke, S.; Geist, M.; Holzinger, A. Human activity recognition using recurrent neural networks. In Proceedings of the International Cross-Domain Conference for Machine Learning and Knowledge Extraction, Reggio, Italy, 29 August–1 September 2017. [Google Scholar]
  90. Zhao, L.; Wu, P.; Cao, H. RBUKF Sensor Data Fusion for Localization of Unmanned Mobile Platform. Res. J. Appl. Sci. Eng. Technol. 2013, 6, 3462–3468. [Google Scholar] [CrossRef]
  91. Walter, O.; Schmalenstroeer, J.; Engler, A.; Haeb-Umbach, R. Smartphone-based sensor fusion for improved vehicular navigation. In Proceedings of the 2013 10th Workshop on Positioning Navigation and Communication (WPNC), Dresden, Germany, 20–21 March 2013. [Google Scholar]
  92. Grunerbl, A.; Muaremi, A.; Osmani, V.; Bahle, G.; Ohler, S.; Tröster, G.; Mayora, O.; Haring, C.; Lukowicz, P. Smart-Phone Based Recognition of States and State Changes in Bipolar Disorder Patients. IEEE J. Biomed. Health Inform. 2015, 15, 140–148. [Google Scholar] [CrossRef] [PubMed]
  93. Thatte, G.; Li, M.; Lee, S.; Emken, B.A.; Annavaram, M.; Narayanan, S.; Narayanan, D.; Mitra, U. Optimal Time-Resource Allocation for Energy-Efficient Physical Activity Detection. IEEE Trans. Signal Process 2011, 59, 1843–1857. [Google Scholar] [CrossRef] [PubMed]
  94. Bhuiyan, M.Z.H.; Kuusniemi, H.; Chen, L.; Pei, L.; Ruotsalainen, L.; Guinness, R.; Chen, R. Performance Evaluation of Multi-Sensor Fusion Models in Indoor Navigation. Eur. J. Navig. 2013, 11, 21–28. [Google Scholar]
  95. Bellos, C.; Papadopoulos, A.; Rosso, R.; Fotiadis, D.I. Heterogeneous data fusion and intelligent techniques embedded in a mobile application for real-time chronic disease management. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2011, 2011, 8303–8306. [Google Scholar] [PubMed]
  96. Ayachi, F.S.; Nguyen, H.; Goubault, E.; Boissy, P.; Duval, C. The use of empirical mode decomposition-based algorithm and inertial measurement units to auto-detect daily living activities of healthy adults. IEEE Trans. Neural Syst. Rehabilit. Eng. 2016, 24, 1060–1070. [Google Scholar] [CrossRef] [PubMed]
  97. Debes, C.; Merentitis, A.; Sukhanov, S.; Niessen, M.; Frangiadakis, N.; Bauer, A. Monitoring activities of daily living in smart homes: Understanding human behavior. IEEE Signal Process. Mag. 2016, 33, 81–94. [Google Scholar] [CrossRef]
  98. Koza, J.R.; Bennett, F.H.; Andre, D.; Keane, M.A. Automated design of both the topology and sizing of analog electrical circuits using genetic programming. In Artificial Intelligence in Design’96; Springer: Berlin, Germany, 1996; pp. 151–170. [Google Scholar]
  99. Russell, S.; Norvig, P.; Canny, C.F.; Malik, J.M.A. Artificial Intelligence: A Modern Approach; Prentice Hall: Upper Saddle River, NJ, USA, 1995. [Google Scholar]
  100. Du, K.-L.; Swamy, M.N.S. Fundamentals of Machine Learning. In Neural Networks and Statistical Learning; Springer: Berlin, Germany, 2014; pp. 15–65. [Google Scholar]
  101. Zhang, Y.; Rajapakse, J.C. Machine Learning in Bioinformatics; John Wiley & Sons: Hoboken, NJ, USA, 2009. [Google Scholar]
  102. Witten, I.H.; Frank, E.; Hall, M.A. Data Mining: Practical Machine Learning Tools and Techniques; Morgan Kaufmann: Burlington, MA, USA, 2016. [Google Scholar]
  103. Schapire, R.E. The boosting approach to machine learning: An overview. In Nonlinear Estimation and Classification; Springer: Berlin, Germany, 2003; pp. 149–171. [Google Scholar]
  104. Michalski, R.S.; Carbonell, J.G.; Mitchell, T.M.X. Machine Learning: An Artificial Intelligence Approach; Springer Science & Business Media: Berlin, Germany, 2013. [Google Scholar]
  105. Bishop, C.M. Pattern Recognition and Machine Learning; Springer: Berlin, Germany, 2006. [Google Scholar]
  106. Lorenzi, P.; Rao, R.; Romano, G.; Kita, A.; Irrera, F. Mobile Devices for the Real-Time Detection of Specific Human Motion Disorders. IEEE Sens. J. 2016, 16, 8220–8227. [Google Scholar]
  107. Lau, S.L.; König, I.; David, K.; Parandian, B.; Carius-Düssel, C.; Schultz, M. Supporting patient monitoring using activity recognition with a smartphone. In Proceedings of the 2010 7th International Symposium on Wireless Communication Systems (ISWCS), York, UK, 19–22 September 2010. [Google Scholar]
  108. Lau, S.L. Comparison of orientation-independent-based-independent-based movement recognition system using classification algorithms. In Proceedings of the 2013 IEEE Symposium on Wireless Technology and Applications (ISWTA), Kuching, Malaysia, 22–25 September 2013. [Google Scholar]
  109. Duarte, F.; Lourenco, A.; Abrantes, A. Activity classification using a smartphone. In Proceedings of the 2013 IEEE 15th International Conference on e-Health Networking, Applications & Services (Healthcom), Lisbon, Portugal, 9–12 October 2013. [Google Scholar]
  110. Fahim, M.; Lee, S.; Yoon, Y. SUPAR: Smartphone as a ubiquitous physical activity recognizer for u-healthcare services. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2014, 2014, 3666–3669. [Google Scholar] [PubMed]
  111. Bajpai, A.; Jilla, V.; Tiwari, V.N.; Venkatesan, S.M.; Narayanan, R. Quantifiable fitness tracking using wearable devices. In Proceedings of the 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015. [Google Scholar]
  112. Nguyen, P.; Akiyama, T.; Ohashi, H.; Nakahara, G.; Yamasaki, K.; Hikaru, S. User-friendly Activity Recognition Using SVM Classifier and Informative Features. In Proceedings of the 2015 International Conference on Indoor Positioning and Indoor Navigation (Ipin), Banff, AB, Canada, 13–16 October 2015; pp. 1–8. [Google Scholar]
  113. Wang, C.; Xu, Y.; Zhang, J.; Yu, W. SW-HMM: A Method for Evaluating Confidence of Smartphone-Based Activity Recognition. In Proceedings of the 2016 IEEE Trustcom/BigDataSE/ISPA, Tianjin, China, 23–26 August 2016. [Google Scholar]
  114. Lau, S.L.; David, K. Movement recognition using the accelerometer in smartphones. In Proceedings of the Future Network and Mobile Summit, Florence, Italy, 16–18 June 2010. [Google Scholar]
  115. Zhang, L.; Wu, X.; Luo, D. Real-Time Activity Recognition on Smartphones Using Deep Neural Networks. In Proceedings of the Ubiquitous Intelligence and Computing and 2015 IEEE 12th Intl Conf on Autonomic and Trusted Computing and 2015 IEEE 15th Intl Conf on Scalable Computing and Communications and Its Associated Workshops (UIC-ATC-ScalCom), Beijing, China, 10–14 August 2015. [Google Scholar]
  116. Cardoso, N.; Madureira, J.; Pereira, N. Smartphone-based Transport Mode Detection for Elderly Care. In Proceedings of the IEEE 18th International Conference on E-Health Networking, Applications and Services (Healthcom), Munich, Germany, 14–16 September 2016; pp. 261–266. [Google Scholar]
  117. Vallabh, P.; Malekian, R.; Ye, N.; Bogatinoska, D.C. Fall Detection Using Machine Learning Algorithms. In Proceedings of the 24th International Conference on Software, Telecommunications and Computer Networks (SoftCOM), Split, Croatia, 22–24 September 2016; pp. 51–59. [Google Scholar]
  118. Filios, G.; Nikoletseas, S.; Pavlopoulou, C.; Rapti, M.; Ziegler, S. Hierarchical Algorithm for Daily Activity Recognition via Smartphone Sensors. In Proceedings of the 2015 IEEE 2nd World Forum on Internet of Things (WF-IOT), Milan, Italy, 14–16 December 2015; pp. 381–386. [Google Scholar]
  119. Tang, C.X.; Phoha, V.V. An Empirical Evaluation of Activities and Classifiers for User Identification on Smartphones. In Proceedings of the 2016 IEEE 8th International Conference on Biometrics Theory, Applications and Systems (BTAS), Niagara Falls, NY, USA, 6–9 September 2016; pp. 1–8. [Google Scholar]
  120. Li, P.; Wang, Y.; Tian, Y.; Zhou, T.S.; Li, J.S. An Automatic User-Adapted Physical Activity Classification Method Using Smartphones. IEEE Trans. Biomed. Eng. 2017, 64, 706–714. [Google Scholar] [CrossRef] [PubMed]
  121. Kim, Y.J.; Kang, B.N.; Kim, D. Hidden Markov Model Ensemble for Activity Recognition using Tri-axis Accelerometer. In Proceedings of the 2015 IEEE International Conference on Systems, Man, and Cybernetics (Smc 2015): Big Data Analytics for Human-Centric Systems, Kowloon, China, 9–12 October 2015; pp. 3036–3041. [Google Scholar]
  122. Brdiczka, O.; Bellotti, V. Identifying routine and telltale activity patterns in knowledge work. In Proceedings of the Fifth IEEE International Conference on Semantic Computing (ICSC), Palo Alto, CA, USA, 18–21 September 2011. [Google Scholar]
  123. Costa, Â.; Castillo, J.C.; Novais, P.; Fernández-Caballero, A.; Simoes, R. Sensor-driven agenda for intelligent home care of the elderly. Exp. Syst. Appl. 2012, 39, 12192–12204. [Google Scholar] [CrossRef]
