An Ambient Intelligence-Based Human Behavior Monitoring Framework for Ubiquitous Environments
Abstract
1. Introduction
- A rise in the cost of healthcare: at present, the treatment of older adults accounts for 40% of the total healthcare costs in the United States, even though older adults make up only around 13% of the total population.
- Diseases affecting a greater percentage of the population: as the elderly population grows, an increasing number of people will live with diseases such as Parkinson’s and Alzheimer’s, for which there is as yet no definitive cure.
- Decreased caregiver population: the caregiver population is not growing as fast as the elderly population.
- Quality of caregiving: caregivers would be required to look after multiple older adults, and quite often they might not have the time, patience, or energy to meet the expectations of caregiving or to address the emotional needs of the elderly.
- Dependency needs: with multiple physical, emotional, and cognitive issues associated with aging, a significant percentage of the elderly population would be unable to live independently.
- Societal impact: the need to develop assisted living and nursing facilities to address these healthcare-related needs.
- It provides a novel approach to perform the semantic analysis of user interactions with diverse contextual parameters during activities of daily living (ADLs) in order to identify a list of distinct behavioral patterns associated with different complex activities performed in an Internet of Things (IoT)-based environment. These behavioral patterns include walking, sleeping, sitting, and lying. This functionality was developed and implemented by using a k-nearest neighbor (k-NN) classifier, and its performance accuracy was found to be 76.71% when it was evaluated on a dataset of ADLs (a minimal illustrative sketch of this classification step follows this list).
- It provides a novel intelligent decision-making algorithm that can analyze such distinct behavioral patterns associated with different complex activities, together with their relationships with the dynamic contextual and spatial features of the environment, in order to detect any anomalies in user behavior that could constitute an emergency, such as a fall or unconsciousness. This algorithm was also developed and implemented by using a k-NN classifier, and it achieved an overall performance accuracy of 83.87% when tested on a dataset of ADLs.
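As a minimal illustrative sketch of the first contribution, the snippet below trains a k-NN classifier [36] with the Euclidean distance metric [37] to label observation windows with one of the four behavioral patterns. The feature matrix, the feature names, and the scikit-learn pipeline are assumptions made for illustration only; the framework itself was implemented in RapidMiner [35], not in Python.

```python
# Illustrative sketch only (the paper's implementation used RapidMiner):
# classifying behavioral patterns (walking, sleeping, sitting, lying)
# with a k-NN classifier. The synthetic features below are hypothetical
# stand-ins for the sensor-derived user interaction data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 6))  # e.g., joint point movement features
y = rng.choice(["walking", "sleeping", "sitting", "lying"], size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# k-NN with Euclidean distance, as referenced in [36,37].
clf = KNeighborsClassifier(n_neighbors=5, metric="euclidean")
clf.fit(X_train, y_train)
print(f"Accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2%}")
```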
2. Literature Review
3. Proposed Work
The development of the proposed framework involves the following steps:
(i) Deploy both wireless and wearable sensors to develop an IoT-based interconnected environment.
(ii) Set up a data collection framework to collect the big data from these sensors during different ADLs performed in the confines of a given IoT-based space.
(iii) Use the context-based user interaction data obtained from the wireless sensors to spatially map a given environment into distinct ‘zones’ in terms of the context attributes associated with distinct complex activities. Here, we define a ‘zone’ as a region in the user’s spatial orientation where distinct complex activities take place. For instance, the complex activity of cooking could take place in the cooking zone, but other complex activities, such as sleeping or taking a shower, could not.
(iv) Analyze the atomic activities performed on different context attributes for a given complex activity, along with their characteristic features.
(v) Track user behavior in terms of joint point movements and joint point characteristics [31] for each atomic activity associated with any given complex activity.
(vi) Analyze the user behavior, atomic activities, and context attributes to form a general definition of a complex activity in each context-based spatial ‘zone.’
(vii) Repeat (vi) for all the complex activities with respect to the context attributes obtained from (iii) for a given IoT-based environment.
(viii) Analyze the activity definitions to find the atomic activities and their characteristic features for all the complex activities associated with the different ‘zones.’
(ix) Study the activity definitions to record the human behavior for all the atomic activities obtained from (viii).
(x) Analyze the behavior definitions in terms of joint point movements and characteristics to develop a knowledge base of common behaviors associated with all the complex activities in the different ‘zones.’
(xi) Develop a dataset that consists of all these behavioral patterns and the big data from user interactions for each of these ‘zones’ in a given IoT-based environment.
(xii) Preprocess the data to detect and eliminate outliers and any noise prior to developing a machine learning model.
(xiii) Split the data into training and test sets and then test the machine learning model on the test set to evaluate its performance characteristics (a minimal sketch of steps (xii) and (xiii) follows this list).
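A hedged sketch of steps (xii) and (xiii): a simple z-score rule for outlier elimination followed by a train/test split. The paper performed these steps in RapidMiner, so the 3-sigma threshold and the 75/25 split ratio here are illustrative assumptions, not the authors’ settings.

```python
# Sketch of steps (xii)-(xiii): z-score outlier filtering followed by a
# train/test split. The threshold and split ratio are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))  # stand-in for the collected big data
y = rng.choice(["walking", "sleeping", "sitting", "lying"], size=1000)

def remove_outliers(X, y, z_threshold=3.0):
    """Drop rows where any feature lies more than z_threshold standard
    deviations from its column mean."""
    z = np.abs((X - X.mean(axis=0)) / X.std(axis=0))
    mask = (z < z_threshold).all(axis=1)
    return X[mask], y[mask]

X_clean, y_clean = remove_outliers(X, y)
X_train, X_test, y_train, y_test = train_test_split(
    X_clean, y_clean, test_size=0.25, random_state=42)
print(f"Removed {len(X) - len(X_clean)} outlier rows")
```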
Next, the following steps are performed:
(i) Classify the complex activities from this dataset as per their relationships with atomic activities, context attributes, other atomic activities, other context attributes, core atomic activities, core context attributes, start atomic activities, end atomic activities, start context attributes, and end context attributes to develop the semantic characteristics of complex activities.
(ii) Track user movements to detect start atomic activities and start context attributes.
(iii) If these detected start atomic activities and start context attributes match the semantic characteristics of complex activities in the knowledge base, run the following algorithm: emergency detection from semantic characteristics of complex activities (EDSCCA).
(iv) If these detected start atomic activities and start context attributes do not match the semantic characteristics of complex activities in the knowledge base, then track the atomic activities, context attributes, other atomic activities, other context attributes, core atomic activities, core context attributes, start atomic activities, end atomic activities, start context attributes, and end context attributes to develop a semantic definition for a complex activity (SDCA); one possible representation of an SDCA is sketched after this list.
(v) If the SDCA is already present in the knowledge base, go to (vi); else, update the knowledge base with the SDCA.
(vi) Develop a dataset that consists of all these semantic definitions for complex activities and the big data from the user interactions associated with them.
(vii) Preprocess the data to detect and eliminate outliers and any noise prior to developing a machine learning model.
(viii) Split the data into training and test sets and then test the machine learning model on the test set to evaluate its performance characteristics.
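One possible in-memory representation of an SDCA, mirroring the complex activity characteristics later listed in Table 2; the dataclass and its field names are illustrative assumptions, not the authors’ data model. The example instance encodes the eating activity from Tables 1 and 2.

```python
# Hypothetical container for a semantic definition of a complex activity
# (SDCA); fields mirror the notation used in Table 2.
from dataclasses import dataclass

@dataclass
class SDCA:
    name: str
    atomic_activities: list[str]   # Ati
    context_attributes: list[str]  # Cti
    start_atomic: list[str]        # AtS
    start_context: list[str]       # CtS
    end_atomic: list[str]          # AtE
    end_context: list[str]         # CtE
    core_atomic: list[str]         # core atomic activities
    core_context: list[str]        # core context attributes

# Example instance: the eating activity described in Tables 1 and 2.
eating = SDCA(
    name="eating",
    atomic_activities=["At1", "At2", "At3", "At4", "At5", "At6"],
    context_attributes=["Ct1", "Ct2", "Ct3", "Ct4", "Ct5", "Ct6"],
    start_atomic=["At1", "At2"], start_context=["Ct1", "Ct2"],
    end_atomic=["At5", "At6"], end_context=["Ct5", "Ct6"],
    core_atomic=["At2", "At3", "At4", "At6"],
    core_context=["Ct2", "Ct3", "Ct4", "Ct6"],
)
```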
The EDSCCA algorithm works as follows:
(i) Track whether the start atomic activity was performed on the start context attribute.
(ii) Track whether the end atomic activity was performed on the end context attribute.
- If (i) is true and (ii) is false:
  - Track all the atomic activities, context attributes, other atomic activities, other context attributes, core atomic activities, and core context attributes.
  - For any atomic activity or other atomic activity that does not match its associated context attribute, track the features of the user behavior.
  - If the user behavior features indicate lying and no other atomic activities are performed, the inference is an emergency.
- If (i) is true and (ii) is true:
  - The user successfully completed the activity without any emergency detected, so the inference is no emergency.
- If (i) is false and (ii) is true:
  - Track all the atomic activities, context attributes, other atomic activities, other context attributes, core atomic activities, and core context attributes.
  - For any atomic activity or other atomic activity that does not match its associated context attribute, track the features of the user behavior.
  - If the user behavior features indicate lying and no other atomic activities are performed, the inference is an emergency.
- If (i) is false and (ii) is false:
  - No features of human behavior were associated with the observed activities; in other words, the user did not perform any activity, so the inference is no emergency (the four branches are sketched in code below).
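The four branches above reduce to the following decision function, a minimal sketch assuming boolean inputs for checks (i) and (ii) and for the tracked behavior features; the function and parameter names are hypothetical.

```python
# Hedged sketch of the EDSCCA decision logic. `start_matched` and
# `end_matched` correspond to checks (i) and (ii); `lying_detected` and
# `other_atomic_performed` stand in for the tracked behavior features of
# any atomic activity that does not match its context attribute.
def edscca_inference(start_matched: bool, end_matched: bool,
                     lying_detected: bool,
                     other_atomic_performed: bool) -> str:
    """Return 'emergency' or 'no emergency' per the four EDSCCA branches."""
    if start_matched and end_matched:
        # The user completed the activity from start to end: no emergency.
        return "no emergency"
    if not start_matched and not end_matched:
        # No activity was actually performed: no emergency.
        return "no emergency"
    # Exactly one of (i)/(ii) holds: inspect the tracked behavior features.
    if lying_detected and not other_atomic_performed:
        return "emergency"
    return "no emergency"

# A fall mid-activity: started but never finished, found lying, inactive.
assert edscca_inference(True, False, lying_detected=True,
                        other_atomic_performed=False) == "emergency"
```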
4. Results
5. Comparative Discussion
- Several researchers in this field have focused only on activity recognition, and only at a superficial level. Various methodologies have been proposed, including sensor technology-driven [9], RGB frame-based [10], hidden Markov model-based [11], and computer vision-based [12] approaches, but the main limitation of such systems is their inability to analyze complex activities at a fine-grained level to interpret the associated dynamics of user interactions and their characteristic features. Our framework addresses this challenge by being able to perform the semantic analysis of user interactions with diverse contextual parameters during ADLs. By semantic analysis, we refer to the abilities of our framework to (1) analyze complex activities in terms of the associated postures and gestures, which are interpreted in terms of the skeletal joint point characteristics (Figure 1); (2) interpret the interrelated and interdependent relationships between atomic activities, context attributes, core atomic activities, core context attributes, other atomic activities, other context attributes, start atomic activities, end atomic activities, start context attributes, and end context attributes associated with any complex activity (Table 1); (3) detect all possible dynamics of user interactions and user behavior that could be associated with any complex activity (Table 2); (4) identify a list of distinct fine-grained behavioral patterns (walking, sleeping, sitting, and lying) associated with different complex activities (Figure 9 and Figure 11), which achieved a performance accuracy of 76.71% when tested on a dataset of ADLs (Figure 12 and Figure 13); and (5) use an intelligent decision-making algorithm that can analyze these distinct behavioral patterns and their relationships with the dynamic contextual and spatial features of the environment to detect any anomalies in user behavior that could constitute an emergency (Figure 14 and Figure 16), which achieved an overall performance accuracy of 83.87% when tested on a dataset of ADLs (Figure 17 and Figure 18).
- Some of the recent works that have focused on activity analysis were limited to certain tasks and could not be generalized to different activities. For instance, the work in [17] focused on eating activity recognition and analysis, and in [13], the activity analysis was done to detect only enter and exit motions in a given IoT-based space. In [18], the methodology focused on the detection of simple and less complicated activities, such as cooking, and [22] presented a system that could remind its users to take their routine medications. The analysis of such small tasks and actions is important, but the challenge in this context is the fact that these systems are specific to such tasks and cannot be deployed or implemented in the context of other activities. With its functionalities to perform complex activity recognition and to analyze skeletal joint point characteristics, our framework can analyze and interpret any complex activity and its associated tasks and actions, thereby addressing this challenge. When tested on a dataset, our framework was able to recognize and analyze all nine complex activities (sleeping, changing cloth, relaxing, moving around, cooking, eating, emergency, working, and defecating) that were associated with this dataset. It is worth mentioning here that our framework is not limited to these specific nine complex activities: its characteristics allow it to recognize and analyze any set of complex activities represented by the big data associated with user interactions in a given IoT-based context, which could come from a dataset or from a real-time sensor-based implementation of the IoT framework.
- A number of these methodologies have focused on activities in specific settings and cannot be seamlessly deployed in other settings consisting of different context parameters and environment variables. For instance, the systems presented in [14,16] are specific to hospital environments, the methodology presented in [21] is only applicable to a kitchen environment, and the approach in [28] is only applicable to a workplace environment. While such systems are important for safe and assisted living experiences in these local spatial contexts, their main drawback is that these tools depend on the specific environmental settings for which they were designed. Our framework develops an SDCA by analyzing the multimodal components of user interactions on the context parameters from an object-centered perspective, as outlined in Section 3. This functionality allows our framework to detect and interpret human activities, their associated behavioral patterns, and the user interaction features in any given setting consisting of any kind of context attributes and environment variables.
- Video-based systems for activity recognition and analysis, such as [12,19], may have several drawbacks associated with their development, functionalities, and performance metrics. According to [39], video ‘presents challenges at almost every stage of the research process.’ These include the categorization and transcription of data, the selection of relevant fragments, the selection of the camera angle, and the determination of the number of frames. By using object-centered data directly from the sensors rather than viewer-centered image analysis, our proposed framework bypasses all these challenges.
- Some of the frameworks that have focused on fall detection are dependent on a specific operating system, platform, or device. These include the smartphone-based fall detection approach proposed in [23] that uses the Android operating system, the work presented in [29] that uses the iOS operating system, the methodology proposed in [26] that requires a smart cane, and the approach in [15] that requires a handheld device. To address universal diversity and ensure the wide-scale user acceptance of such technologies, it is important that fall detection systems be platform-independent and run seamlessly on any device with any operating system. Our framework does not have this drawback because it does not need an Android or iOS operating system or any specific device to run. Even though it uses RapidMiner as a software tool to develop its characteristic features, RapidMiner is written in Java, which is platform-independent, and it allows any process to be exported as the associated Java code. Java applications are ‘write once, run anywhere’ (WORA): when a Java application is developed and compiled on any system, the Java compiler generates a bytecode (class) file that is platform-independent and can be run seamlessly on any other system, without recompilation, by a Java virtual machine (JVM). Additionally, RapidMiner offers multiple extensions that can be added to a RapidMiner process to seamlessly integrate it with other applications or software based on the requirements.
- Several fall detection systems depend on external parameters that cannot be controlled and that could affect their performance characteristics. For instance, Shao et al. [25] proposed a fall detection methodology based on measuring the vibrations of the floor. Several factors, such as the weight of the user, the material and condition of the floor, and other objects placed on the floor, can impact the intensity of these vibrations and thus the performance of the system. Kong et al.’s [24] system used the distance between the user’s neck and the ground to detect falls; its performance could be affected by the height and posture of the user and by any elevations on the ground, such as high objects or stairs. The work proposed in [27] by Keaton et al., which used WiFi channel state data to detect falls, could be affected by external factors that influence this data. Similarly, the methodology developed in [20] relied on an air pressure sensor, the readings of which could be affected by environmental factors and external phenomena. Such effects of external conditions can degrade the operational and performance characteristics of a system and even lead to false alarms, causing alert fatigue [40] in caregivers and medical personnel. Such false alarms and alert fatigue can decrease the quality of care, increase response time, and make caregivers and medical personnel insensitive to the warnings of such fall detection systems. The challenge is therefore to ensure that fall detection systems can function seamlessly without depending on external factors that could affect their operation or performance metrics. Our framework builds on concepts of complex activity recognition [30] and two related works [31,32], together with the context-driven approach outlined in Section 3, to analyze the diverse components of user interactions performed on context parameters, interpret the dynamics of human behavior and their relationships with the contextual and spatial features of an environment, and detect any anomalies that could constitute an emergency. The performance, operation, and functionality of such an approach are independent of any external factors or conditions, such as floor vibrations, WiFi channel state data, or the distance between the user and the ground.
6. Conclusions and Scope for Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. World Population Ageing: 1950–2050. ISBN-10: 9210510925. Available online: http://globalag.igc.org/ruralaging/world/ageingo.htm (accessed on 17 October 2020).
2. Esnaola, U.; Smithers, T. Whistling to Machines. In Ambient Intelligence in Everyday Life, Lecture Notes in Computer Science Series; Springer: Berlin/Heidelberg, Germany, 2006; Volume 3864, pp. 198–226.
3. Rashidi, P.; Mihailidis, A. A Survey on Ambient-Assisted Living Tools for Older Adults. IEEE J. Biomed. Health Inform. 2013, 17, 579–590.
4. Sadri, F. Ambient intelligence: A survey. ACM Comput. Surv. 2011, 36, 1–66.
5. Thakur, N. Framework for a Context Aware Adaptive Intelligent Assistant for Activities of Daily Living. Master’s Thesis, University of Cincinnati, Cincinnati, OH, USA, 2019. Available online: http://rave.ohiolink.edu/etdc/view?acc_num=ucin1553528536685873 (accessed on 10 December 2020).
6. Thakur, N.; Han, C.Y. A Review of Assistive Technologies for Activities of Daily Living of Elderly. In Assisted Living: Current Issues and Challenges; Nova Science Publishers: Hauppauge, NY, USA, 2020; pp. 61–84. ISBN 978-1-53618-446-4.
7. Katz, S.; Downs, T.D.; Cash, H.R.; Grotz, R.C. Progress in development of the index of ADL. Gerontologist 1970, 10, 20–30.
8. Debes, C.; Merentitis, A.; Sukhanov, S.; Niessen, M.; Frangiadakis, N.; Bauer, A. Monitoring Activities of Daily Living in Smart Homes: Understanding human behavior. IEEE Signal Process. Mag. 2016, 33, 81–94.
9. Azkune, G.; Almeida, A.; López-de-Ipiña, D.; Liming, C. Extending knowledge-driven activity models through data-driven learning techniques. Expert Syst. Appl. 2016, 42, 3115–3128.
10. Neili Boualia, S.; Essoukri Ben Amara, N. Deep Full-Body HPE for Activity Recognition from RGB Frames Only. Informatics 2021, 8, 2.
11. Van Kasteren, T.; Noulas, A.; Englebienne, G.; Kröse, B. Accurate activity recognition in a home setting. In Proceedings of the 10th International Conference on Ubiquitous Computing, Seoul, Korea, 21–24 September 2008; pp. 1–9.
12. Cheng, Z.; Qin, L.; Huang, Q.; Jiang, S.; Yan, S.; Tian, Q. Human Group Activity Analysis with Fusion of Motion and Appearance Information. In Proceedings of the 19th ACM International Conference on Multimedia, Scottsdale, AZ, USA, 28 November–1 December 2011; pp. 1401–1404.
13. Skocir, P.; Krivic, P.; Tomeljak, M.; Kusek, M.; Jezic, G. Activity detection in smart home environment. Procedia Comput. Sci. 2016, 96, 672–681.
14. Doryab, A.; Bardram, J.E. Designing Activity-aware Recommender Systems for Operating Rooms. In Proceedings of the 2011 Workshop on Context-Awareness in Retrieval and Recommendation, New York, NY, USA, 13 February 2011; pp. 43–46.
15. Abascal, J.; Bonail, B.; Marco, A.; Sevillano, J.L. AmbienNet: An Intelligent Environment to Support People with Disabilities and Elderly People. In Proceedings of the 10th International ACM SIGACCESS Conference on Computers and Accessibility, Halifax, NS, Canada, 13–15 October 2008; pp. 293–294.
16. Chan, M.; Campo, E.; Bourennane, W.; Bettahar, F.; Charlon, Y. Mobility Behavior Assessment Using a Smart-Monitoring System to Care for the Elderly in a Hospital Environment. In Proceedings of the 7th International Conference on Pervasive Technologies Related to Assistive Environments, Island of Rhodes, Greece, 27–30 May 2014; Article 51, pp. 1–5.
17. Rashid, N.; Dautta, M.; Tseng, P.; Al Faruque, M.A. HEAR: Fog-Enabled Energy-Aware Online Human Eating Activity Recognition. IEEE Internet Things J. 2021, 8, 860–868.
18. Siraj, M.S.; Shahid, O.; Ahad, M.A.R. Cooking Activity Recognition with Varying Sampling Rates Using Deep Convolutional GRU Framework. In Human Activity Recognition Challenge; Springer: Berlin/Heidelberg, Germany, 2021; pp. 115–126. ISBN 978-981-15-8269-1.
19. Mishra, O.; Kavimandan, P.S.; Tripathi, M.M.; Kapoor, R.; Yadav, K. Human Action Recognition Using a New Hybrid Descriptor. In Advances in VLSI, Communication, and Signal Processing; Springer: Berlin/Heidelberg, Germany, 2021; pp. 527–536. ISBN 978-981-15-6840-4.
20. Fu, Z.; He, X.; Wang, E.; Huo, J.; Huang, J.; Wu, D. Personalized Human Activity Recognition Based on Integrated Wearable Sensor and Transfer Learning. Sensors 2021, 21, 885.
21. Yared, R.; Abdulrazak, B.; Tessier, T.; Mabilleau, P. Cooking risk analysis to enhance safety of elderly people in smart kitchen. In Proceedings of the 8th ACM International Conference on Pervasive Technologies Related to Assistive Environments, Corfu, Greece, 1–3 July 2015; Article 12, pp. 1–4.
22. Angelini, L.; Nyffeler, N.; Caon, M.; Jean-Mairet, M.; Carrino, S.; Mugellini, E.; Bergeron, L. Designing a Desirable Smart Bracelet for Older Adults. In Proceedings of the 2013 ACM Conference on Pervasive and Ubiquitous Computing, Zurich, Switzerland, 8–12 September 2013; pp. 425–433.
23. Dai, J.; Bai, X.; Yang, Z.; Shen, Z.; Xuan, D. PerFallD: A Pervasive Fall Detection System Using Mobile Phones. In Proceedings of the 8th IEEE International Conference on Pervasive Computing and Communications Workshops, Mannheim, Germany, 29 March–2 April 2010; pp. 292–297.
24. Kong, X.; Meng, Z.; Meng, L.; Tomiyama, H. A Neck-Floor Distance Analysis-Based Fall Detection System Using Deep Camera. In Advances in Artificial Intelligence and Data Engineering; Springer: Berlin/Heidelberg, Germany, 2021; pp. 1113–1120. ISBN 978-981-15-3514-7.
25. Shao, Y.; Wang, X.; Song, W.; Ilyas, S.; Guo, H.; Chang, W.-S. Feasibility of Using Floor Vibration to Detect Human Falls. Int. J. Environ. Res. Public Health 2021, 18, 200.
26. Chou, H.; Han, K. Developing a smart walking cane with remote electrocardiogram and fall detection. J. Intell. Fuzzy Syst. 2021, 1–14 (pre-press).
27. Keaton, M.; Nordstrom, A.; Sherwood, M.; Meck, B.; Henry, G.; Alwazzan, A.; Reddy, R. WiFi-based In-home Fall-detection Utility: Application of WiFi Channel State Information as a Fall Detection Service. In Proceedings of the 2020 IEEE International Conference on Engineering, Technology and Innovation (ICE/ITMC), Cardiff, UK, 15–17 June 2020; pp. 1–6.
28. Anceschi, E.; Bonifazi, G.; De Donato, M.C.; Corradini, E.; Ursino, D.; Virgili, L. SaveMeNow.AI: A Machine Learning Based Wearable Device for Fall Detection in a Workplace. In Enabling AI Applications in Data Science; Springer: Berlin/Heidelberg, Germany, 2021; pp. 493–514. ISBN 978-3-030-52067-0.
29. Mousavi, S.A.; Heidari, F.; Tahami, E.; Azarnoosh, M. Fall detection system via smart phone and send people location. In Proceedings of the 2020 28th European Signal Processing Conference (EUSIPCO), Amsterdam, The Netherlands, 18–22 January 2021; pp. 1605–1607.
30. Saguna, S.; Zaslavsky, A.; Chakraborty, D. Complex Activity Recognition Using Context-Driven Activity Theory and Activity Signatures. ACM Trans. Comput.-Hum. Interact. 2013, 20, 32.
31. Thakur, N.; Han, C.Y. Towards A Language for Defining Human Behavior for Complex Activities. In Proceedings of the 3rd International Conference on Human Interaction and Emerging Technologies, Paris, France, 27–29 August 2020; pp. 309–315.
32. Thakur, N.; Han, C.Y. Towards a Knowledge Base for Activity Recognition of Diverse Users. In Proceedings of the 3rd International Conference on Human Interaction and Emerging Technologies, Paris, France, 27–29 August 2020; pp. 303–308.
33. Biggs, N.L. The roots of combinatorics. Hist. Math. 1979, 6, 109–136.
34. Tabbakha, N.E.; Ooi, C.P.; Tan, W.H. A Dataset for Elderly Action Recognition Using Indoor Location and Activity Tracking Data. Available online: https://doi.org/10.17632/sy3kcttdtx.1 (accessed on 10 August 2020).
35. Mierswa, I.; Wurst, M.; Klinkenberg, R.; Scholz, M.; Euler, T. YALE: Rapid prototyping for complex data mining tasks. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’06), Philadelphia, PA, USA, 20–23 August 2006; pp. 935–940.
36. K-Nearest Neighbors Algorithm. Available online: https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm (accessed on 1 February 2021).
37. Euclidean Distance. Available online: https://en.wikipedia.org/wiki/Euclidean_distance (accessed on 1 February 2021).
38. Confusion Matrix. Available online: https://en.wikipedia.org/wiki/Confusion_matrix (accessed on 1 February 2021).
39. Luff, P.; Heath, C. Some ‘technical challenges’ of video analysis: Social actions, objects, material realities and the problems of perspective. Qual. Res. 2012, 12, 255–279.
40. Alarm Fatigue. Available online: https://en.wikipedia.org/wiki/Alarm_fatigue (accessed on 2 February 2021).
Atomic Activities | Context Attributes | Joint Point Pairs That Experience Change |
---|---|---|
At1: Standing (0.08) | Ct1: Lights on (0.08) | No change |
At2: Walking towards dining table (0.20) | Ct2: Dining area (0.20) | (13,17), (14,18), (15,19), and (16,20) |
At3: Serving food on a plate (0.25) | Ct3: Food present (0.25) | (7,11) and (8,12)
At4: Washing hands/using hand sanitizer (0.20) | Ct4: Plate present (0.20) | (7,11) and (8,12)
At5: Sitting down (0.08) | Ct5: Sitting options available (0.08) | No change |
At6: Starting to eat (0.19) | Ct6: Food quality and taste (0.19) | (6,3), (7,3), (8,3), (6,4), (7,4), (8,4) or (10,3), (11,3), (12,3), (10,4), (11,4), and (12,4) |
Complex Activity Characteristics | Value(s) |
---|---|
Ati, all the atomic activities related to the complex activity | At1, At2, At3, At4, At5, and At6 |
Cti, all the context attributes related to the complex activity | Ct1, Ct2, Ct3, Ct4, Ct5, and Ct6 |
AtS, list of all the Ati that are start atomic activities | At1 and At2 |
CtS, list of all the Cti that are start context attributes | Ct1 and Ct2 |
AtE, list of all the Ati that are end atomic activities | At5 and At6 |
CtE, list of all the Cti that are end context attributes | Ct5 and Ct6 |
γAt, list of all the Ati that are core atomic activities | At2, At3, At4, and At6 |
ρCt, list of all the Cti that are core context attributes | Ct2, Ct3, Ct4, and Ct6 |
at, number of Ati related to the complex activity | 6 |
bt, number of Cti related to the complex activity | 6 |
ct, number of γAt related to the complex activity | 4 |
dt, number of ρCt related to the complex activity | 4 |
α, all possible ways by which any complex activity can be performed including false starts | 64 |
β, all the ways of performing any complex activity where the user always reaches the end goal | 4 |
γ, all the ways of performing any complex activity where the user never reaches the end goal | 60 |
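The values of α, β, and γ in the table above admit one straightforward combinatorial reading (an interpretive assumption on our part; the paper cites [33] for the underlying combinatorics): each of the at = 6 atomic activities is either performed or not, giving 64 possibilities including false starts; reaching the end goal requires all ct = 4 core atomic activities, leaving the two non-core activities free, giving 4 successful ways; the remaining ways never reach the end goal.

```latex
% One reading consistent with the tabulated values (an interpretive
% assumption, not an equation quoted from the paper):
\alpha = 2^{a_t} = 2^{6} = 64, \qquad
\beta  = 2^{a_t - c_t} = 2^{2} = 4, \qquad
\gamma = \alpha - \beta = 64 - 4 = 60.
```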
Attribute Name | Description |
---|---|
Row No | The row number in the output table |
Activity | The actual behavioral pattern associated with a given ADL |
Prediction (Activity) | The predicted behavioral pattern associated with a given ADL |
Confidence (lying) | The degree of certainty that the user was lying during this ADL |
Confidence (standing) | The degree of certainty that the user was standing during this ADL |
Confidence (sitting) | The degree of certainty that the user was sitting during this ADL |
Confidence (walking) | The degree of certainty that the user was walking during this ADL
Attribute Name | Description |
---|---|
Row No | The row number in the output table |
Complex Activity | The actual user behavior (either emergency or non-emergency) associated with a given complex activity (ADL) |
Prediction (Complex Activity) | The predicted user behavior (either emergency or non-emergency) associated with a given complex activity (ADL) |
Confidence (Non-Emergency) | The degree of certainty that the user behavior associated with a given complex activity did not constitute an emergency |
Confidence (Emergency) | The degree of certainty that the user behavior associated with a given complex activity constituted an emergency |
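Given an output table shaped like the one above, the overall performance accuracy and the confusion matrix [38] can be computed directly from the actual and predicted columns. The following sketch assumes a pandas DataFrame whose column names follow the table; the three example rows are fabricated placeholders for illustration, not results from the paper.

```python
# Sketch: scoring an output table like the one described above. Column
# names follow the table; the example rows are placeholders only.
import pandas as pd
from sklearn.metrics import accuracy_score, confusion_matrix

results = pd.DataFrame({
    "Complex Activity": ["Emergency", "Non-Emergency", "Emergency"],
    "Prediction (Complex Activity)": ["Emergency", "Non-Emergency",
                                      "Non-Emergency"],
})

y_true = results["Complex Activity"]
y_pred = results["Prediction (Complex Activity)"]
print("Accuracy:", accuracy_score(y_true, y_pred))
print(confusion_matrix(y_true, y_pred,
                       labels=["Emergency", "Non-Emergency"]))
```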