Proceeding Paper

Real-time Recognition of Interleaved Activities Based on Ensemble Classifier of Long Short-Term Memory with Fuzzy Temporal Windows †

1 Department of Computer Science, University of Jaén, Campus Las Lagunillas, 23071 Jaén, Spain
2 School of Computing, Ulster University, Newtownabbey, Co. Antrim, Northern Ireland BT15 1ED, UK
3 Department of Computer Science, University of Cádiz, Calle Ancha 16, 11001 Cádiz, Spain
* Author to whom correspondence should be addressed.
Presented at the 12th International Conference on Ubiquitous Computing and Ambient Intelligence (UCAmI 2018), Punta Cana, Dominican Republic, 4–7 December 2018.
Proceedings 2018, 2(19), 1225; https://doi.org/10.3390/proceedings2191225
Published: 26 October 2018
(This article belongs to the Proceedings of UCAmI 2018)

Abstract

In this paper, we present a methodology for Real-Time Activity Recognition of Interleaved Activities based on Fuzzy Logic and Recurrent Neural Networks. Firstly, we propose a representation of binary-sensor activations based on multiple Fuzzy Temporal Windows. Secondly, an ensemble of activity-based classifiers for balanced training and selection of relevant sensors is proposed. Each classifier is configured as a Long Short-Term Memory with self-reliant detection of interleaved activities. The proposed approach was evaluated using well-known interleaved binary-sensor datasets comprised of activities of daily living.

1. Introduction

Globally, population demographics are gradually shifting from younger to older age groups, meaning elderly care is becoming unsustainable [1]. In order to relieve some of the workload from carers, whilst encouraging older adults to remain independent at home, assistive systems have been developed [2]. Some of these systems are capable of identifying simple to complex human activities, as well as the context in which they occur, from sensor data; this is currently a core aspect of smart assistive technologies [3]. Activity recognition assists with identifying the tasks being carried out and can determine whether the occupant has any difficulties completing tasks or daily activities [4]. When tasks and scenarios become more complex, they are referred to as interleaved activities [5].
In this paper, we address the real-time recognition of interleaved activities. Real time refers to the recognition of activities while they are taking place: new sensor events are processed as they stream in, without explicit information on the evaluation's labelled time interval [6]. In this way, the methodology presented in this work faces two key problems: (i) learning from activities which are developed in any order, interweaving and performing tasks in parallel if desired [7]; and (ii) recognizing activities in real time, without including explicit information on future sensor events [8].
Activity recognition is the process of retrieving high level knowledge about activities and occurrences taking place in an environment such as a smart home; whilst also learning about the behaviour of those present in the environment [9].
Real-time activity recognition is a challenging area of smart home technology [8]. The main difficulty with real-time activity recognition approaches is the ability to correctly define the size of the temporal window to allow effective recognition of activities [6,10]. The main concern of including a single sliding window is that the more sensor events from the past are included, the more noise the data representation includes on the model [11]. In this work, the use of multiple temporal windows and fuzzy aggregation methods are proposed to enable the long and middle-term evaluation of sensors.
Furthermore, daily activity datasets suffer from a severe class imbalance problem [12,13], which arises when classes are not equally represented [14]. We balance the training dataset for each activity classifier in order to solve this imbalance problem.
In the context of interleaved activities, few approaches address this complex problem. In [15], the authors proposed a multi-layer model for activity recognition, using RFID technology and appliance signatures to identify errors related to cognitive decline in daily activities. Focusing on morning routine activities, they carried out activity recognition in real time through RFID-based localization and electrical sensors. Using multiple sensors was found to increase recognition accuracy; however, interleaved activities were not considered. In [16], the authors addressed this challenge using a dataset collected from an elderly person living alone, by means of an event-driven approach [6]. Although they were able to effectively distinguish concurrent activities, more event factors could have been used for better accuracy, as only the timing and the sensor were taken into account when the location could have been added.
In [17], the authors present an approach based on ontological and probabilistic reasoning, which requires a substantial knowledge engineering effort to define a comprehensive ontology of activities, the home environment, and sensor events. This work does not include real-time evaluation; it reports results of 81% based on sensor events.
In this work, we address the novel challenge of real-time recognition of interleaved activities, a hard problem due to: (i) evaluation in real time while the activities are being performed, without including future sensor information; and (ii) the concurrence and disorder of the activities performed by participants.
The remainder of the paper is structured as follows: Section 2 details an overview of the proposed methodology. Section 3 describes the experiments performed, which are discussed in Section 4. Finally, the conclusions and ongoing work are presented in Section 5, providing an overall critique of the study and proposing plans for future work.

2. Methodology

In this section, we detail the proposed methodology for recognizing interleaved human activities in real time, using an ensemble classifier of Long Short-Term Memory with Fuzzy Temporal Windows. It builds on a previous methodology for sequential learning of activity recognition [18], around two key ideas: (i) the concurrent, parallel activation of the ensemble of activity-based classifiers to provide a suitable interleaved response; and (ii) the computation of relevant sensors, which are filtered to improve learning.
In summary, the proposed methodology for real-time recognition of interleaved activities is focused on three key points:
  • A fuzzy temporal representation of long-term and short-term activations, which defines temporal sequences.
  • An ensemble of activity-based classifiers, each built on a sequence classifier well suited to the task: Long Short-Term Memory (LSTM) networks [19].
  • Balanced learning for each activity-based classifier, to avoid the class imbalance problem from which daily activity datasets suffer [12,13]. It is optimized by a similarity relation between activities which: (i) determines the adequate samples within the training dataset, based on similarity with the activity to learn; and (ii) filters the relevant sensors to take into account in the learning process.
In Figure 1, we show the scheme of the methodology proposed.

2.1. Representation of Binary Sensors and Activities

A set of binary sensors is represented by $S = \{S_1, \ldots, S_{|S|}\}$ and a set of daily activities by $A = \{A_1, \ldots, A_{|A|}\}$, where $|S|$ and $|A|$ are the number of sensors and daily activities, respectively. They are described by sets of binary activations within time ranges, each defined by a starting and an ending point of time (Equation (1)):

$$S_i = \{S_i^0, \ldots, S_i^{|S_i|}\}, \quad S_i^j = [S_i^{j0}, S_i^{j+}] \qquad A_i = \{A_i^0, \ldots, A_i^{|A_i|}\}, \quad A_i^j = [A_i^{j0}, A_i^{j+}] \tag{1}$$

where (i) $|S_i|$ and $|A_i|$ are the total number of activations for a given binary sensor $S_i$ and daily activity $A_i$, respectively; and (ii) $S_i^{j0}$ and $S_i^{j+}$ are the starting and ending points of a given activation.

2.2. Segmentation of Dataset in Time-Slots

We generated a segmented timeline defined by time-slots (also known as time-steps), which indicate the activation of activities and sensors within a time interval of fixed duration Δt. The range for evaluating each time-slot t_i is defined by a sliding window [t_i, t_i + Δt].
For each time-slot and a given sensor, we determine its activation based on whether the sensor has been activated within it:

$$S(t_i, s) = \begin{cases} 1 & \exists\, j : [S_s^{j0}, S_s^{j+}] \cap [t_i, t_i + \Delta t] \neq \emptyset \\ 0 & \text{otherwise} \end{cases} \tag{2}$$
Similarly, the activation of an activity $a$ in a time-slot $t_i$ is defined as:

$$S(t_i, a) = \begin{cases} 1 & \exists\, j : [A_a^{j0}, A_a^{j+}] \cap [t_i, t_i + \Delta t] \neq \emptyset \\ 0 & \text{otherwise} \end{cases} \tag{3}$$
Each sensor or activity is represented as a set of activations over ordered time-slots S(s) = {S(t_0, s), …, S(t_n, s)}. For the sake of simplicity, we write t+ for a time-slot t_i in the timeline T.
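The segmentation above can be sketched in a few lines of Python; the function name and the interval representation are illustrative, not from the paper:

```python
def slot_activation(intervals, t_start, dt, n_slots):
    """Mark a sensor (or activity) as active (1) in each fixed-duration
    time-slot whose range [t, t + dt] overlaps any activation interval."""
    activations = []
    for i in range(n_slots):
        lo = t_start + i * dt
        hi = lo + dt
        active = any(s < hi and e > lo for s, e in intervals)
        activations.append(1 if active else 0)
    return activations
```

For example, a single activation spanning seconds 5 to 12 is active in the first two 10-second slots but not in the third.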

2.3. Sensor Features Defined by Fuzzy Temporal Windows

In this section, a binary-sensor representation approach based on fuzzy temporal windows (FTWs) is detailed. FTWs are described from a given current time t* back to a past time t_i as a function of the temporal distance Δt_i* = t* − t_i, with t* > t_i [20]. A given FTW T_k relates the sensor activation S(t_i, s) at the current time t* to a fuzzy set T_k(Δt_i*), characterized by a membership function μ_T̃k(Δt_i*). For brevity, we write T_k(Δt_i*) instead of μ_T̃k(Δt_i*).
Firstly, for a given FTW T_k and the current time t*, each past sensor activation S(t_i, s) is weighted by calculating its degree of time-activation within the fuzzy temporal window T_k, according to Equation (4):

$$T_k(s, t^*, t_i) = S(t_i, s) \otimes T_k(\Delta t_i^*), \quad t_i \le t^* \tag{4}$$
Secondly, the degrees of time-activation are aggregated over past time-slots in order to obtain a single activation degree of both fuzzy sets S(s) ⊗ T_k, by Equation (5):

$$T_k(s, t^*) = S(s) \otimes T_k(\Delta t^*) = \bigoplus_{t_i \in T} S(t_i, s) \otimes T_k(\Delta t_i^*), \quad t_i \le t^* \tag{5}$$
We propose using the maximum as the aggregation operator and the minimum as the t-norm, a combination recommended for representing binary sensors [21]:

$$T_k(s, t^*) = S(s) \otimes T_k(\Delta t^*) = \max_{t_i \in T,\; t_i \le t^*} \big( \min( S(t_i, s),\, T_k(\Delta t_i^*) ) \big) \tag{6}$$
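For one FTW, Equation (6) reduces to a max over the element-wise min of the binary activations and the window memberships. A minimal sketch with illustrative names:

```python
def ftw_degree(activations, memberships):
    """Max-min aggregation of past binary activations (0/1) with the
    membership degrees of one fuzzy temporal window (Equation (6))."""
    return max(min(s, m) for s, m in zip(activations, memberships))
```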

2.4. Sequence Features of FTW

The representation of sensor activation based on FTWs is used to define a sequence for classification purposes. FTWs of incremental temporal size are defined so as to collect temporal activations from the long term to the short term.
Each FTW T_k is described by a trapezoidal membership function over the time interval from a previous time t_i to the current time t*: T_k(Δt_i*)[l1, l2, l3, l4]. Trapezoidal membership functions are defined by a lower limit l1, an upper limit l4, a lower support limit l2, and an upper support limit l3 (Equation (7)):
$$TS(x)[l_1, l_2, l_3, l_4] = \begin{cases} 0 & x \le l_1 \\ (x - l_1)/(l_2 - l_1) & l_1 < x < l_2 \\ 1 & l_2 \le x \le l_3 \\ (l_4 - x)/(l_4 - l_3) & l_3 < x < l_4 \\ 0 & l_4 \le x \end{cases} \tag{7}$$
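The trapezoidal membership function of Equation (7) is straightforward to implement; `trapezoid` is an illustrative name:

```python
def trapezoid(x, l1, l2, l3, l4):
    """Trapezoidal membership function with limits l1 <= l2 <= l3 <= l4."""
    if x <= l1 or x >= l4:
        return 0.0
    if x < l2:
        return (x - l1) / (l2 - l1)   # rising edge
    if x <= l3:
        return 1.0                    # plateau
    return (l4 - x) / (l4 - l3)       # falling edge
```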
To generate FTWs in a simple manner, we propose to define them from a set of incrementally ordered evaluation times L = {L_1, …, L_|L|}, L_{i−1} < L_i, from which the limits of the trapezoidal functions are calculated according to the index of the temporal window T_k:

$$T_k = T_k(\Delta t_i^*)[L_k, L_{k+1}, L_{k+2}, L_{k+3}] \tag{8}$$
To define the FTWs, we propose incremental FTWs straightforwardly defined by the Fibonacci sequence [22], L = {1, 2, 3, 5, 8, …}·Δt; an example is shown in Figure 2.
Thus, L generates a feature vector: (i) whose components are the T_k(s, t+) for each time-slot t+ in the timeline and a given sensor s; and (ii) whose size equals the number of FTWs times the number of sensors, |T| × |S|:

$$T(s, t^+) = \{\, T_0(s, t^+), \ldots, T_k(s, t^+), \ldots, T_{|T|}(s, t^+) \,\} \tag{9}$$
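Putting Equations (6)–(9) together, the features for one sensor can be sketched as follows. The limit indexing [L_k, L_{k+1}, L_{k+2}, L_{k+3}] is an assumption drawn from Equation (8), and all names are illustrative:

```python
def trapezoid(x, l1, l2, l3, l4):
    """Trapezoidal membership function (Equation (7))."""
    if x <= l1 or x >= l4:
        return 0.0
    if x < l2:
        return (x - l1) / (l2 - l1)
    if x <= l3:
        return 1.0
    return (l4 - x) / (l4 - l3)

def ftw_features(activations, dt, limits=(1, 2, 3, 5, 8, 13)):
    """Max-min aggregated degree per FTW for one sensor (Equations (6)-(9)).
    activations: 0/1 per past time-slot, most recent last; limits: the
    incremental evaluation times L (Fibonacci here), scaled by dt."""
    n = len(activations)
    feats = []
    for k in range(len(limits) - 3):
        l1, l2, l3, l4 = (limits[k] * dt, limits[k + 1] * dt,
                          limits[k + 2] * dt, limits[k + 3] * dt)
        degree = 0.0
        for i, s in enumerate(activations):
            dist = (n - 1 - i) * dt   # temporal distance to the current slot
            degree = max(degree, min(s, trapezoid(dist, l1, l2, l3, l4)))
        feats.append(degree)
    return feats
```

A single activation four slots in the past falls on the edges of the shortest and longest of three windows and in the core of the middle one.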

2.5. Ensemble of Classifiers for Activities

Each LSTM activity-based classifier is focused on learning a given activity A_i by means of a balanced training dataset. Therefore, each classifier learns a two-class problem: the target activity A_i and not being the target activity Ā_i, which comprises the other classes and the idle class.
For each time-slot t+ and a given classifier for activity A_i, the target class O(t+) is defined by:

$$O(t^+) = \begin{cases} 1 & S(A_i, t^+) = 1 \\ 0 & S(A_i, t^+) \neq 1 \end{cases} \tag{10}$$
So, O(t+) represents the target class to be learned by each classifier, whose activation can be concurrent with several activities: ∃ A_i, A_j, t+ : S(A_i, t+) = S(A_j, t+) = 1, with A_j ≠ A_i.
The feature vector for this time-slot t + is formed by the sequence of aggregated activation degrees T k ( s , t + ) from the FTWs T k for each sensor s for a given time-slot t + , as we described in Section 2.4.
Once the learning process is complete, the target activity A_i is detected when the prediction for the target activity p_{A_i} exceeds the prediction of not being the target activity p_{Ā_i}. We note that several classifiers within the ensemble can (and, for interleaved activities, must) be activated in the same time-slot t+.
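The ensemble's per-slot decision can be sketched as below, assuming each activity-based classifier emits a score for the target and not-target classes; the names are illustrative:

```python
def active_activities(scores):
    """scores: activity -> (p_target, p_not_target) for one time-slot.
    An activity is reported when its target score exceeds the not-target
    score; several activities may be reported for the same slot."""
    return sorted(a for a, (p_t, p_nt) in scores.items() if p_t > p_nt)
```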

2.5.1. Balancing Learning With Similarity Relation Between Activities and Filtering of Relevant Sensors

In this section, we describe how to build an ad-hoc balanced training dataset for each activity-based classifier from the similarity relation between activities, and how to filter the relevant sensors during the learning process.
For a given activity A_i and another activity A_j, we define a similarity relation R_a as a function R_a : A × A → [0, 1], which determines the similarity degree between both activities. To compute it, we first calculate a similarity relation R_s : A × S → [0, 1] between activities and sensors, using the relative frequency of sensor activation within each activity:
$$R_s(A_i, S_j) = \frac{|S_j \cap A_i|}{\sum_{S_k \in S} |S_k \cap A_i|} \tag{11}$$
where |S_j ∩ A_i| represents the number of time-slots in which the sensor S_j is activated together with the activity A_i. This measure is also called Mutual Information [6].
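Equation (11) is a normalized co-activation count; a minimal sketch with illustrative names:

```python
def sensor_similarity(co_counts):
    """co_counts: sensor -> number of time-slots in which the sensor is
    activated together with the activity A_i. Returns R_s(A_i, .) as a
    relative frequency over all sensors (Equation (11))."""
    total = sum(co_counts.values())
    return {sensor: count / total for sensor, count in co_counts.items()}
```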
Firstly, the similarity R_s(A_i, S_j) is used to compute the relevant sensors S_j+ for a given activity A_i, based on a relevance factor s_α:

$$S_j^+ : R_s(A_i, S_j) > s_\alpha \tag{12}$$
Secondly, we evaluate the similarity relation between activities R_a by aggregating the similarity relations of their sensors:

$$R_a(A_i, A_j) = \sum_{S_k \in S} R_s(A_i, S_k) \times R_s(A_j, S_k), \quad A_i \neq A_j \tag{13}$$
Thirdly, we propose to build a balanced-activity training dataset, which contains a weight or percentage of samples for each activity A i based on the similarity relation:
  • w_{A_i} defines a fixed percentage of samples corresponding to the activity to learn.
  • w_{A_0} defines a fixed percentage of samples not corresponding to any activity (idle).
  • w_{Ā_i} configures a dynamic percentage drawn from all the other activities, with w_{A_i} + w_{A_0} + w_{Ā_i} = 1; it is distributed among the other activities by weighting the normalized similarity degree:

$$w_{A_j} = w_{\bar{A}_i} \times \tilde{R}_a(A_i, A_j)$$
To re-sample the time-slots for each balanced-activity training dataset, a straightforward random process selects time-slots, accepting or rejecting them according to the activity percentages.
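The re-sampling step can be sketched as a weighted random draw per activity; the function and its arguments are illustrative, and the per-activity weights are assumed to sum to 1:

```python
import random

def balanced_resample(slots, weights, n_samples, seed=42):
    """slots: list of (features, label) time-slots; weights: label -> target
    share of the balanced training dataset. Draws n_samples with replacement
    so that each label reaches its target share."""
    rng = random.Random(seed)
    by_label = {}
    for feats, label in slots:
        by_label.setdefault(label, []).append((feats, label))
    out = []
    for label, share in weights.items():
        pool = by_label.get(label)
        if pool:
            out.extend(rng.choice(pool) for _ in range(round(n_samples * share)))
    rng.shuffle(out)
    return out
```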

3. Experimental Setup

In this section, the proposed methodology is evaluated using the interleaved dataset [7], which provides data from 20 participants who performed eight activities in any order, interweaving and performing tasks in parallel if desired. Up to three activities could be performed concurrently by a participant. We note the complexity of learning in this extreme setting.
The activities developed were (1) Filling a medication dispenser; (2) Watching a DVD; (3) Watering plants; (4) Answering the phone; (5) Preparing a birthday card; (6) Preparing soup; (7) Cleaning; and (8) Choosing an outfit. The activities are described by 41 sensors, including motion, item and water sensors, within the WSU smart apartment testbed.
The methodology proposed in this work uses the following parameters:
  • Number of FTWs: |T| = 10.
  • Incremental FTWs defined by the Fibonacci sequence [22]: L = {720, 540, 360, 180, 60, 30, 8, 5, 3, 2, 1}·Δt.
  • For balancing the training dataset for each activity:
    Number of training samples = 5000.
    Percentage of samples from target activity w A i = 0.4 .
    Percentage of idle activity w A 0 = 0.1 .
    Percentage of samples corresponding to the non-target activity w A i ¯ = 0.6 .
  • For each LSTM activity-based classifier: learning rate = 0.003, number of neurons = 64, number of layers = 3.
We evaluated three time intervals of fixed duration to define the time-slots, Δt = {30 s, 60 s, 90 s}, and three values of the relevance factor used to select the sensors for learning each activity, s_α = {0%, 3%, 6%}. Two metrics are introduced:
  • F1-coverage (F1-sc), which provides insight into the balance between precision (precision = TP/(TP+FP)) and recall (recall = TP/(TP+FN)) over predicted and ground-truth time-slots. Although well known in activity recognition [23], this metric has a key issue for time-interval analysis: false positives of an activity far from any activation interval are counted equally to false positives close to the end of an activity, which are common at activity endings and even more so in interleaved activities.
  • F1-interval-intersection (F1-ii), which evaluates the time intervals of each activity based on: (i) the precision of predicted time intervals that intersect a ground-truth time interval; and (ii) the recall of ground-truth time intervals that intersect a predicted time interval.
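Both metrics reduce to the standard F1 computation from true/false positive and false negative counts; a minimal helper with an illustrative name:

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```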
For evaluation purposes, we developed a leave-one-participant-out cross-validation, where, for each participant, the test set comprises the activities performed by that participant and the training set comprises the activities performed by the other participants.

4. Results

In this section we describe the results of F1-sc and F1-ii on the interleaved dataset [7]. Table 1 reports both metrics for the time-slot durations Δt = {30 s, 60 s, 90 s} and the relevance factors s_α = {0%, 3%, 6%}. Because recognized activities are frequently adjacent to the ground truth, we also include: (i) a strict comparison with no error margin (0 time-slots); and (ii) a one-time-slot margin, where prediction and ground truth are evaluated as correct if they match in an adjacent time-slot. Finally, Table 2 and Table 3 detail the metric values for each activity and each configuration of the relevance factor s_α.

4.1. Discussion

The results described above highlight the suitable performance achieved on the challenging problem presented by the interleaved dataset [7]. The evaluation based on leave-one-participant-out cross-validation is a hard test, since each participant may carry out activities in any order, introducing unseen and unlearned habits into the activity learning process.
It is noted that relevant detection of activity intervals is achieved, with F1-ii close to or above 90% for the three time-slot durations Δt = {30 s, 60 s, 90 s}. A higher aggregation of time-slots with duration Δt = 90 s increases performance, but at the cost of a lower evaluation rate, producing three times fewer responses than Δt = 30 s. For the same reason, the difference between the 0-time-slot and 1-time-slot error margins is more relevant for Δt = 90 s.
Furthermore, the use of a relevance factor s_α, which identifies and selects the key sensors for each activity during learning, has increased the accuracy rate. For the longer time-slot durations Δt = {60 s, 90 s}, filtering by the relevance factor is noteworthy because a greater number of sensors are activated concurrently in the same time-slot, which increases noise in the sensor feature representation; filtering the relevant sensors reduces conflicting activations within time-slots.
In [17], the approach reports results of 81% based on sensor events without real-time capabilities, with classification evaluated only when a change in a sensor was detected. Because our work evaluates time-slots along the whole timeline, a direct comparison is not possible. The coverage of time-slots F1-sc presents an excellent real-time prediction close to 75%, which is remarkable given that no external information or prior modelling is required, and that each time-slot is evaluated in real time without introducing future sensor information.

5. Conclusions and Ongoing Works

The use of a fuzzy temporal representation of binary sensors, learned by an ensemble of Long Short-Term Memory networks, has been demonstrated to be an encouraging methodology for recognizing interleaved activities in real time. The use of multiple FTWs enables flexible temporal evaluation of interleaved activities, whose durations vary strongly. Moreover, the Fibonacci sequence provides a suitable shape of incremental FTWs, avoiding a hard selection of the temporal segmentation.
The results show encouraging real-time recognition of activity intervals, measured as the intersection of recognized intervals with ground-truth intervals, with F1-ii = 90%. The coverage of predicted time-slots in real time within the activity intervals is F1-sc = 75%.
In ongoing work, we will translate the proposed methodology to multi-occupancy and interleaved activities captured by recent devices, such as wearable and vision sensors, which pose a challenging problem. To evaluate these non-binary sensors in the long and middle term, it will be necessary to extract several temporal features from their signals, including a filter to remove non-relevant ones. The performance of human-defined features versus deep learning approaches will also be compared.

Author Contributions

Conceptualization, J.M.; Methodology and Investigation, J.M., C.O., S.Z., C.N., A.S. and M.E.; Funding Acquisition, C.N. and M.E.

Acknowledgments

This research has received funding under the REMIND project Marie Sklodowska-Curie EU Framework for Research and Innovation Horizon 2020, under Grant Agreement No. 734355. The authors would also like to thank the Connected Health Innovation Centre (CHIC). Also, this contribution has been supported by the project PI-0203- 2016 from the Council of Health for the Andalucian Health Service, Spain together the research project TIN2015-66524-P from the Spanish government.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
FTW    Fuzzy Temporal Window
LSTM   Long Short-Term Memory

References

  1. Kon, B.; Lam, A.; Chan, J. Evolution of Smart Homes for the Elderly. In Proceedings of the 26th International Conference on World Wide Web Companion, Perth, Australia, 3–7 April 2017; pp. 1095–1101. [Google Scholar]
  2. Bilodeau, J.; Fortin-Simard, D.; Gaboury, S.; Bouchard, B.; Bouzouane, A. Assistance in Smart Homes: Combining Passive RFID Localization and Load Signatures of Electrical Devices. In Proceedings of the 2014 IEEE International Conference on Bioinformatics and Biomedicine, Belfast, UK, 2–5 November 2014; pp. 19–26. [Google Scholar]
  3. Ordonez, F.; Roggen, D. Deep convolutional and LSTM recurrent neural networks for multimodal wearable activity recognition. Sensors 2016, 16, 115–139. [Google Scholar] [CrossRef] [PubMed]
  4. Okeyo, G.; Chen, L.; Wang, H.; Sterritt, R. Dynamic sensor data segmentation for real-time knowledge-driven activity recognition. Pervasive Mob. Comput. 2014, 10, 115–172. [Google Scholar] [CrossRef]
  5. Orr, C.; Nugent, C.; Wang, H.; Zheng, H. A Multi-Agent Approach to Facilitate the Identification of Interleaved Activities. DH’18. In Proceedings of the 2018 International Conference on Digital Health, Dublin, Ireland, 14–15 June 2018; pp. 126–130. [Google Scholar]
  6. Krishnan, N.C.; Cook, D.J. Activity recognition on streaming sensor data. Pervasive Mob. Comput. 2014, 10, 138–154. [Google Scholar] [CrossRef] [PubMed]
  7. Singla, G.; Cook, D.J.; Schmitter-Edgecombe, M. Tracking activities in complex settings using smart environment technologies. Int. J. Biosci. Psychiatry Technol. 2009, 1, 25. [Google Scholar]
  8. Yan, S.; Liao, Y.; Feng, X.; Liu, Y. Real time activity recognition on streaming sensor data for smart environments. In Proceedings of the International Conference on Progress in Informatics and Computing (PIC), Shanghai, China, 23–25 December 2016; pp. 51–55. [Google Scholar]
  9. Cicirelli, F.; Fortino, G.; Giordano, A.; Guerrieri, A.; Spezzano, G.; Vinci, A. On the Design of Smart Homes: A Framework for Activity Recognition in Home Environment. J. Med. Syst. 2016, 40, 200. [Google Scholar] [CrossRef] [PubMed]
  10. Banos, O.; Galvez, J.M.; Damas, M.; Pomares, H.; Rojas, I. Window size impact in human activity recognition. Sensors 2014, 14, 6474–6499. [Google Scholar] [CrossRef] [PubMed]
  11. Espinilla, M.; Medina, J.; Hallberg, J.; Nugent, C. A new approach based on temporal sub-windows for online sensor-based activity recognition. J. Ambient Intell. Hum. Comput. 2018, 1–13. [Google Scholar] [CrossRef]
  12. Ordonez, F.J.; de Toledo, P.; Sanchis, A. Activity recognition using hybrid generative discriminative models on home environments using binary sensors. Sensors 2013, 13, 5460–5477. [Google Scholar] [CrossRef] [PubMed]
  13. Van Kasteren, T.; Noulas, A.; Englebienne, G.; Krose, B. Accurate activity recognition in a home setting. In Proceedings of the 10th International Conference on Ubiquitous Computing, Seoul, Korea, 21–24 September 2008; pp. 1–9. [Google Scholar]
  14. Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic minority over-sampling technique. J. Artif. Intel. Res. 2002, 16, 321–357. [Google Scholar] [CrossRef]
  15. Fortin-Simard, D.; Bilodeau, J.; Gaboury, S.; Bouchard, B.; Bouzouane, A. Human Activity Recognition in Smart Homes: Combining Passive RFID and Load Signatures of Electrical Devices. In Proceedings of the IEEE Symposium on Intelligent Agents (IA), Orlando, FL, USA, 9–12 December 2014; pp. 22–29. [Google Scholar]
  16. Ye, C.; Sun, Y.; Wang, S.; Yan, H.; Mehmood, R. ERER: An event-driven approach for real-time activity recognition. In Proceedings of the 2015 International Conference on Identification, Information, and Knowledge in the Internet of Things, Beijing, China, 22–23 October 2015; pp. 288–293. [Google Scholar]
  17. Riboni, D.; Sztyler, T.; Civitarese, G.; Stuckenschmidt, H. Unsupervised recognition of interleaved activities of daily living through ontological and probabilistic reasoning. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Heidelberg, Germany, 12–16 September 2016; pp. 1–12. [Google Scholar]
  18. Medina, J.; Zhang, S.; Nugent, C.; Espinilla, M. Ensemble classifier of Long Short-Term Memory with Fuzzy Temporal Windows on binary sensors for Activity Recognition. Expert Syst. Appl. 2018, 114, 441–453. [Google Scholar] [CrossRef]
  19. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
  20. Medina, J.; Espinilla, M.; Nugent, C. Real-time fuzzy linguistic analysis of anomalies from medical monitoring devices on data streams. In Proceedings of the 10th EAI International Conference on Pervasive Computing Technologies for Healthcare, Cancun, Mexico, 16–19 May 2016; ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering): Gent, Belgium, 2016; pp. 300–303. [Google Scholar]
  21. Medina, J.; Espinilla, M.; Zafra, D.; Martínez, L.; Nugent, C. Fuzzy Fog Computing: A Linguistic Approach for Knowledge Inference in Wearable Devices. In Proceedings of the International Conference on Ubiquitous Computing and Ambient Intelligence, Philadelphia, PA, USA, 7–10 November 2017; Springer: Cham, Switzerland, 2017; pp. 473–485. [Google Scholar]
  22. Stakhov, A. The golden section and modern harmony mathematics. In Applications of Fibonacci Numbers; Springer: Dordrecht, The Netherlands, 1998; pp. 393–399. [Google Scholar]
  23. Van Kasteren, T.L.M.; Englebienne, G.; Krose, B.J. An activity monitoring system for elderly care using generative and discriminative models. Pers. Ubiquitous Comput. 2010, 14, 489–498. [Google Scholar] [CrossRef]
Figure 1. Scheme of ensemble of classifiers and balanced training for interleaved activity recognition. A i and A i + 1 represent different activities which can be activated in an interleaved way.
Figure 2. An example of incremental size for FTWs is straightforwardly defined by Fibonacci sequence.
Table 1. Global results.

                            F1-ii                       F1-sc
                  Δt    sα=0%   sα=3%   sα=6%     sα=0%   sα=3%   sα=6%
0-slot margin   30 s    84.54   80.75   79.13     70.95   70.78   66.98
                60 s    85.96   85.87   86.35     73.03   73.89   72.52
                90 s    81.50   87.60   90.47     68.52   75.99   75.48
1-slot margin   30 s    88.30   85.46   87.34     73.97   72.91   69.56
                60 s    88.09   89.26   91.05     74.31   77.16   74.77
                90 s    84.23   91.96   95.61     70.86   77.94   77.58
Table 2. Detailed values of metric F1-ii for each activity, duration of time-slot Δt and relevance factor sα.

            Δt = 30 s                   Δt = 60 s                   Δt = 90 s
Ai    sα=0%   sα=3%   sα=6%     sα=0%   sα=3%   sα=6%     sα=0%   sα=3%   sα=6%
t1    90.06   86.17   92.83     91.16   94.87   98.29     91.19   92.43   100.00
t2    91.35   84.26   90.40     85.69   87.37   90.41     85.61   93.39    93.37
t3    82.58   85.05   88.95     86.21   90.87   90.23     78.29   93.74    91.04
t4    82.00   71.05   71.67     81.28   77.74   75.65     75.65   86.04    86.72
t5    91.78   89.07   89.92     95.08   95.85   95.07     87.07   91.84    96.58
t6    88.91   92.32   83.02     90.84   96.27   95.62     90.18   93.93   100.00
t7    93.30   85.91   89.26     85.28   87.64   89.50     85.15   90.36    98.15
t8    86.45   89.83   92.72     89.19   83.44   93.59     80.68   94.00    98.99
t9    88.30   85.46   87.34     88.09   89.26   91.05     84.23   91.96    95.61
Table 3. Detailed values of metric F1-sc for each activity, duration of time-slot Δt and relevance factor sα.

            Δt = 30 s                   Δt = 60 s                   Δt = 90 s
Ai    sα=0%   sα=3%   sα=6%     sα=0%   sα=3%   sα=6%     sα=0%   sα=3%   sα=6%
t1    81.50   76.64   77.82     80.87   84.60   80.94     75.17   81.12    81.40
t2    76.74   67.88   64.55     73.99   75.72   74.56     75.85   76.89    72.87
t3    65.43   68.03   66.28     67.85   77.21   70.75     68.94   80.11    79.23
t4    63.07   50.49   50.20     60.47   61.56   46.15     58.42   61.73    46.15
t5    85.96   88.73   85.97     85.51   86.01   84.23     76.99   86.90    87.48
t6    70.67   75.63   58.03     75.28   80.41   74.60     67.11   73.99    76.96
t7    77.40   72.31   68.66     75.49   75.84   76.55     76.21   75.67    83.51
t8    70.99   83.56   84.94     75.03   75.93   90.39     68.15   87.13    92.99
t9    73.97   72.91   69.56     74.31   77.16   74.77     70.86   77.94    77.58
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

