Article

Development of Invisible Sensors and a Machine-Learning-Based Recognition System Used for Early Prediction of Discontinuous Bed-Leaving Behavior Patterns †

1 Faculty of Systems Science and Technology, Akita Prefectural University, Yurihonjo City, Akita 015-0055, Japan
2 Faculty of Engineering, Yamaguchi University, Ube City, Yamaguchi 755-8611, Japan
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in Madokoro, H.; Nakasho, K.; Shimoi, N.; Woo, H.; Sato, K. Invisible Sensors for Early Prediction of Discontinuous Bed-Leaving Behavior Patterns. In Proceedings of the 5th International Conference on Sensors Engineering and Electronics Instrumentation Advances, Tenerife, Spain, 25–27 September 2019.
These authors contributed equally to this work.
Sensors 2020, 20(5), 1415; https://doi.org/10.3390/s20051415
Submission received: 18 January 2020 / Revised: 23 February 2020 / Accepted: 3 March 2020 / Published: 5 March 2020

Abstract

This paper presents a novel bed-leaving sensor system for real-time recognition of bed-leaving behavior patterns. The proposed system comprises five pad sensors installed on a bed, a rail sensor inserted in a safety rail, and a behavior pattern recognizer based on machine learning. A load test to evaluate the sensor output characteristics revealed a linear relation between load and output. Moreover, the output values change linearly with the speed at which an equivalent load is applied to the sensor. We obtained benchmark datasets of continuous and discontinuous behavior patterns from ten subjects. Recognition targets for our sensor prototype and its monitoring system comprise five behavior patterns: sleeping, longitudinal sitting, lateral sitting, terminal sitting, and leaving the bed. We compared machine learning algorithms of five types for recognizing these five behavior patterns. The experimentally obtained results revealed that the proposed sensor system improved recognition accuracy for both datasets. Moreover, we achieved further improved recognition accuracy after integrating the learning datasets into a general discriminator.

1. Introduction

Aging in Japan has been progressing rapidly, not only because of an increasing number of elderly people and their longevity, but also because of a decreasing number of young people caused by a declining birthrate. Although the demand for nursing-care services has been growing along with the continuously aging society, the supply is insufficient because of changing demographics [1]. As an occupational characteristic of caregivers, both the entry and turnover rates are high compared with those of other industries [2]. Caregivers must not only provide various care services that are physically and mentally demanding, but must also work night shifts to provide 24-hour nursing services and support. Especially at night, a severe caregiver shortage leads to insufficient nursing care, which risks inducing accidents in the daily lives of care recipients.
Mitadera et al. [3] reported that falls of elderly people accounted for more than 50% of all accidents at nursing-care facilities. Situational details reveal that most accidents occurred when elderly people left their beds and the surrounding area. Moreover, 85.5% of fall-related accidents occurred under circumstances without assistance or supervision. Therefore, preventive measures using bed-leaving sensors are indispensable for detecting bed-leaving behavior at an early stage, particularly because facility administrators are charged with management responsibility if an accident occurs.
Recently, various bed-leaving sensors have become commercially available. For example, clip sensors, mat sensors, and infrared (IR) sensors are widely used at hospitals and nursing-care facilities. Although clip sensors are easy to use because they are the most affordable means available, care recipients are restrained by the sensor wires, which are attached directly to the patient's nightwear. Moreover, a risk exists that a sensor wire might wrap around the neck of a care recipient. Therefore, the use of clip sensors has been discouraged recently. Mat sensors, which are inexpensive even compared with clip sensors, are used widely at clinical sites because they entail no restraint. One shortcoming of mat sensors is their slow detection and response, which occur only once a care recipient sits at the bed terminal with their feet on the mat. Another shortcoming is excessive reaction: the sensor responds even when a caregiver or a family member steps on it in passing. Furthermore, a care recipient might attempt to leave while consciously avoiding stepping on a mat sensor because it is a visible sensor. IR sensors present shortcomings similar to those of mat sensors. Moreover, caregivers must check the sensor installation status because care recipients occasionally touch the sensors.
To prevent fall accidents, sensor systems that predict bed-leaving behavior at an early stage have been studied. Asano et al. [4] proposed a detection system using a depth camera. They employed support vector machines (SVMs) to recognize bed-leaving behavior patterns after optimizing parameters and motion features combined with the body size, position, and orientation of the respective subjects. Their experimentally obtained result achieved 92.65% recall after 68 iterations. However, the precision was insufficient for practical application because false detection occurred within 24 iterations. Moreover, they used a depth camera for capturing images. Although it is difficult to identify profiles solely from depth images, a challenging task remains: eliminating the unpleasantness felt by a patient who is monitored by a camera.
Kawamura et al. [5] proposed a wearable sensor system using a three-axis accelerometer. In their system, unrestrained measurements are actualized using a lightweight sensor module of 13 g. With consideration of clinical applications, they obtained not only metaparameters that contributed to recognition, but also experimental results for bed-up and wheelchair locomotion. However, they provided neither recognition accuracy nor detailed sensor characteristics. Existing bed-leaving sensors thus entail persistent difficulties related to quality of life (QoL), detection speed, convenience, and cost. No sensors that satisfy these requirements have been put to practical use. In modern society, the declining birthrate and aging population are progressing rapidly. We therefore regard the development of sensor systems that overcome these problems as an urgent task.
This study was conducted to develop a bed-leaving recognition sensor system that is inexpensive, convenient, and maintainable with advanced QoL for care recipients. To improve recognition accuracy and reliability compared with our earlier bed-leaving sensor system [6], our novel sensor prototype comprises pad sensors and a rail sensor installed respectively on a bed frame and a bed-side safety rail. To evaluate our sensor system, we obtained original benchmark datasets of two types from 10 subjects: continuous datasets with behavior transitions from sleeping to bed-leaving at predefined intervals and discontinuous datasets with free and random movements. We compared machine learning algorithms of five types for recognizing five behavior patterns. Our earlier study [6] provided a classifier for each subject because bed-leaving behavior patterns have characteristics unique to each subject. Nevertheless, with this learning strategy, the recognition accuracy was insufficient to ensure reliability for the discontinuous datasets. Therefore, we developed a single classifier using all continuous datasets. The experimentally obtained results revealed that the proposed sensor system improved recognition accuracy for both datasets.
The rest of the paper is structured as follows. Section 2 presents our originally developed sensors of two types and their measurement system. Section 3 and Section 4 respectively present our proposed method based on machine-learning algorithms of five types and our original datasets obtained from ten subjects. Subsequently, Section 5 presents the evaluation results with recognition accuracies and confusion matrixes for the respective datasets. Finally, Section 6 concludes and highlights future work. We previously proposed the basic method with our originally developed sensors of two types in the proceedings paper [7] and presented the basic characteristics of the pad sensors in the proceedings paper [8]. This paper adds detailed results and discussion in Section 5.

2. Sensor System

2.1. System Structure

Figure 1 depicts the whole structure of our novel sensor prototype system, which comprises pad sensors, a rail sensor, sensor boards, a wireless router, and a monitoring computer. Output signals are collected by the sensor boards over a wired connection. The sensor boards convert the analog signals to digital signals in real time. The digital signals are sent to the monitoring computer over a wireless connection. We used ZigBee, a short-distance wireless communication protocol, for cost-effective implementation and low power consumption. The transmitted measurement signals are displayed on the monitoring computer in real time. Behavior recognition algorithms based on machine learning are incorporated in the monitoring computer.

2.2. Pad Sensor

Figure 2 depicts our originally developed pad sensor prototype. The pad sensors are non-restrictive, invisible, cost-effective, and require no driving power. We used a piezoelectric film, which generates a potential difference when distorted in an arbitrary direction by external forces such as vibration. A commercial piezoelectric film (DT2-028K/L; Tokyo Sensor Co., Ltd.) is sandwiched between 1-mm-thick urethane sheets of 50 degree hardness. The urethane sheets improve the durability and elasticity of the piezoelectric film. Moreover, polyethylene terephthalate (PET) boards larger than the urethane sheets provide extended sensing ranges. The sensor core is protected using PET plates of 200 mm diameter and 0.5 mm thickness. We used ultraviolet-cured resin to bond the piezoelectric film and the urethane sheets.

2.3. Rail Sensor

Care recipients whose legs and body are weak sometimes grip the safety rail beside the bed when they try to stand up. Motegi et al. [9] reported that approximately 82% of care recipients gripped a safety rail when they left their beds. Therefore, we specifically examined the bed-side safety rail for fall prevention. We consider that recognition accuracy improves if the gripping of a safety rail is detected using a dedicated sensor.
Figure 3 depicts our originally developed rail sensor prototype. We inserted a piezoelectric film (DT2-028K/L) into a silicon tube of 50 mm length, 10 mm outer diameter, and 5 mm inner diameter. As a stopper and protector, a metal cap is fitted to the top of the sensor.

2.4. Basic Characteristics

To evaluate the characteristics of our developed film-load sensors, we conducted preliminary experiments using a load test machine (Multi Force Analyzer FWT-1000; DigiTech Co. Ltd.), as depicted in Figure 4a. The major specifications of the machine are a 1 kN rated weight, 100 mN resolution, 600 mm/min maximum test speed, and ±0.2% weight precision. Because the sensors are installed on the bed frame, most loads are applied vertically as surface loads. For this load test, we developed a fixture made of A2017 duralumin, as depicted in Figure 4b. The fixture has a 100 × 100 mm base of 15 mm thickness and a 70 × 50 mm top of 5 mm thickness.
The load reaches its maximum for longitudinal sitting, in which a person raises the upper body to start leaving the bed. According to a report on the human sciences of nursing [10], the body weight applied to the hip during longitudinal sitting is approximately equal to the total weight of the upper body, which is 87% of the total body weight. According to the National Health and Nutrition Survey Report in Japan [11], the mean weights of people older than 65 years are 61.9 kg for men and 50.8 kg for women. Based on both mean weights, we set test loads from 340 N to 680 N at five sampling points.
We evaluated the output characteristics of five sets of our developed sensors at the default test speed of 5 mm/min. Figure 4c depicts a schematic diagram: the sensor output arises from the sensing area excluding the rivet parts. For applying a load, the sensor is fixed with a 10 mm offset from the boundaries. We measured the output voltages of the respective sensors using a data logger (LR8431; Hioki Corp.) concomitantly with the test load.
Figure 5 depicts the output characteristics of the five sensors. The vertical and horizontal axes respectively show the output voltage and the test load. The output voltage increases with the load until it peaks at the maximum load. Subsequently, a reverse voltage appears briefly, followed by a steady state, as the load cell is removed from the device. We calculated the output characteristics of our prototype sensors using the peak voltage obtained from this test. The results demonstrate a linear relation between sensor output and the test load patterns, although the gradients differ among sensors. We therefore consider that the output voltage increases with the weight of a person.
Figure 6 presents the results of changing the test speed from 1 mm/min to 8 mm/min in steps of 1 mm/min. The output voltage increases with speed, a characteristic resembling the load-test results presented above. Figure 7 depicts the characteristics of the sides and orientations of the sensors. We evaluated four patterns: top/longitudinal, bottom/longitudinal, top/lateral, and bottom/lateral. The top and bottom sides are defined by the rivets of the piezoelectric film. The output voltage of the longitudinal side is 3.12 times higher than that of the lateral side. This directivity is reflected in the sensor installation relative to the bed-leaving direction.

2.5. Sensor Installation

To sense the distributed weight of a sleeping body on a bed, pad pressure sensors are installed at five areas between the mattress and the bed frame. Figure 8 depicts the sensor configuration on the bed. The approximate measurement ranges of the sensors are the upper body for channels 1 and 2 (CH1 and CH2), the legs for channels 3 and 4 (CH3 and CH4), and the hip for channel 5 (CH5). A rail sensor is installed on the safety rail on one side of the bed. Figure 9 depicts photographs of the installed sensors. Unlike a pre-installed sensor bed [12], our sensors can be installed on various beds as a post-installation system. Figure 10 depicts a sensor measurement board with a ZigBee module for wireless communication. Sensor signals are transmitted to the monitoring computer via the board. Our system provides easy and simple monitoring using only six sensors of two types and two measurement boards.

3. Bed-Leaving Behavior Pattern Recognition Based on Machine-Learning Algorithms

3.1. Feature Calculation

Sensor signals are captured at 50 Hz as the default sampling rate of the measurement board, as depicted in Figure 10. Using all features calculated from all obtained sensor signals would significantly increase calculation costs. To reduce the total data size, the sensor signals are downsampled to 10 Hz. Moreover, signal changes are summarized at 1 s intervals to enhance features. Let $D(t)$ be the summation of signal changes at time $t$. The absolute difference $\Delta y_t$ of momentary output values is calculated between $t-1$ and $t$. Summarizing $\Delta y$ over 1 s with $n = 10$ samples at 10 Hz, $D(t)$ is calculated as shown below.

$$ D(t) = \sum_{k=1}^{n} \Delta y_k $$
Figure 11 depicts the outline procedure of the feature calculation. Herein, we used the summation of signal changes over 1 s because of the output property of piezoelectric elements: they produce no output voltage without an input force. An output voltage occurs when a dynamic load is applied; after bending, the output voltage returns to 0 V under a stationary load. For pattern recognition based on machine learning, features that persist for sufficient intervals are desirable for correct recognition. Therefore, we infer that the summation of signal changes reduces false recognition.
Subsequently, we normalized the features to unify their scale. Let $X_i$, $\bar{X}$, and $s$ respectively represent the input features, the mean of the features, and the standard deviation of the features. The normalized feature $Z_i$ of each $D_i$ is calculated as

$$ Z_i = \frac{X_i - \bar{X}}{s}. $$
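To make the pipeline concrete, the following is a minimal Python sketch of this feature calculation under the definitions above; the function names and the simple every-fifth-sample decimation are illustrative assumptions, not the authors' implementation.

import numpy as np

def features_from_raw(raw_50hz):
    """Compute D(t) from one sensor channel sampled at 50 Hz."""
    y = np.asarray(raw_50hz, dtype=float)[::5]   # downsample 50 Hz -> 10 Hz (naive decimation)
    dy = np.abs(np.diff(y))                      # |y_t - y_{t-1}| between consecutive samples
    n = 10                                       # samples per 1 s window at 10 Hz
    usable = (len(dy) // n) * n
    return dy[:usable].reshape(-1, n).sum(axis=1)   # one D(t) value per second

def zscore(x):
    """Normalize features: Z_i = (X_i - mean) / std."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()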

3.2. Recognition Algorithms

The aim of this study is to provide a sensor system that requires no body parameters to be set in advance as a subject's profile. We use machine-learning algorithms as a robust approach for absorbing individual differences. Various machine-learning algorithms are available and easy to implement. In our earlier study [6], we used counter propagation networks (CPNs) because of their advantage of visualizing the input feature topology on a category map. For this study, we compared various machine-learning algorithms for recognizing behavior patterns to develop a reliable sensor system.
As comparison targets, we selected four machine-learning algorithms: the naive Bayes classifier (NB) [13], k-nearest neighbor (kNN) [14], SVMs [15], and random forests (RF) [16]. These algorithms achieve highly precise recognition from small amounts of data. In our earlier study [6], we constructed a recognizer for each subject to learn the individual differences of bed-leaving behavior patterns. Each recognizer is optimized individually with a limited dataset. However, an amount of data insufficient to cover diverse behavior patterns lowers recognition accuracy. Therefore, we also attempt to construct a recognizer using the datasets of all subjects. Herein, we used the scikit-learn machine-learning library [17] for implementation, as sketched below. The following are outlines of the respective algorithms.
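As an illustration, the compared classifiers can be instantiated with scikit-learn [17] roughly as follows. The paper does not specify hyperparameters, so the defaults shown here are assumptions; CPNs are not included in scikit-learn and were implemented separately (see Section 3.2.5).

from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

# One entry per compared algorithm of Section 3.2.
classifiers = {
    "NB": GaussianNB(),
    "kNN": KNeighborsClassifier(),      # Euclidean distance by default
    "LSVM": SVC(kernel="linear"),
    "RBF-SVM": SVC(kernel="rbf"),
    "RF": RandomForestClassifier(),
}

Each classifier is then trained with classifiers[name].fit(X_train, y_train) and evaluated with classifiers[name].score(X_test, y_test).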

3.2.1. NB

Based on Bayesian theory [18], NB employs supervised learning under the assumption that the feature vector components are mutually independent. Let $y$ and $\mathbf{x} = (x_1, \dots, x_n)$ respectively represent a class label and a feature vector. The following probability is derived from Bayes' theorem.

$$ P(y \mid \mathbf{x}) = \frac{P(y) \, P(\mathbf{x} \mid y)}{P(\mathbf{x})} $$

Herein, assuming independence, the joint probability of the feature vector is expressed by the product of the respective conditional probabilities.

$$ P(\mathbf{x} \mid y) = \prod_{i=1}^{n} P(x_i \mid y) $$

Therefore, $P(y \mid \mathbf{x})$ is rewritten as shown below.

$$ P(y \mid \mathbf{x}) = \frac{P(y) \prod_{i=1}^{n} P(x_i \mid y)}{P(\mathbf{x})} \propto P(y) \prod_{i=1}^{n} P(x_i \mid y) $$

Let $\hat{y}$ be the final estimated class label. $\hat{y}$ is derived from the maximum probability as

$$ \hat{y} = \operatorname*{argmax}_{y} \; P(y) \prod_{i=1}^{n} P(x_i \mid y). $$
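A minimal sketch of this decision rule follows, assuming Gaussian class-conditional densities for P(x_i | y) and computing the argmax in log space for numerical stability; both choices are our illustrative assumptions.

import numpy as np

def nb_predict(x, priors, means, stds):
    """Return argmax_y of log P(y) + sum_i log P(x_i | y) for one sample x.

    priors: shape (C,) class priors P(y); means, stds: shape (C, n) Gaussian
    parameters per class for the n features (assumed independent given y).
    """
    # Gaussian log density: -0.5*((x - mu)/sigma)^2 - log(sigma*sqrt(2*pi))
    log_lik = -0.5 * ((x - means) / stds) ** 2 - np.log(stds * np.sqrt(2.0 * np.pi))
    scores = np.log(priors) + log_lik.sum(axis=1)    # one score per class
    return int(np.argmax(scores))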

3.2.2. kNN

As a simple supervised learning algorithm, kNN initially plots all features of the learning signals in a vector space. Subsequently, the k learning signal sets nearest to an unknown signal set are acquired in order of distance. Finally, the class label of the unknown signal set is estimated using a majority-voting strategy. Herein, Euclidean distance is used instead of Manhattan distance.

3.2.3. SVM

As a classifier based on supervised learning, an SVM extracts the boundary for which the margin between two classes has the maximum distance from the input signals. A distribution of signals that cannot be separated linearly is mapped to a high-dimensional space using a kernel function to actualize linear separation. Multi-class features are classified using multiple SVMs based on the basic two-class classification. For this study, we used SVMs with two kernel types: linear SVMs (LSVMs) and radial basis function SVMs (RBF-SVMs).

3.2.4. RF

RF is an ensemble learning algorithm intended to improve generalization capability. Initially, weak classifiers are generated as decision trees. Subsequently, each decision tree estimates a class label by tracing conditional branches in order from the root node. Finally, the class label is decided by majority voting over the results estimated by the respective decision trees.

3.2.5. CPN

CPNs are supervised neural networks extended from self-organizing maps (SOMs) [19], which are unsupervised neural networks. Data topologies are preserved through a competitive and neighborhood learning strategy. The CPN network architecture comprises three layers: an input layer, a mapping (Kohonen) layer, and a Grossberg layer.
Let $w_{r,s}(t)$ and $w_{s,k}(t)$ respectively denote the weights between input layer unit $r$ ($1 \le r \le R$) and Kohonen layer unit $s$ ($1 \le s \le S$), and the weights between Kohonen layer unit $s$ and Grossberg layer unit $k$ ($1 \le k \le K$) at time $t$. Before learning, the weights are initialized randomly. Using the Euclidean distance between the input $y_r(t)$ and $w_{r,s}(t)$, a winner unit $c_s(t)$ is sought as follows.

$$ c_s(t) = \operatorname*{argmin}_{1 \le s \le S} \sqrt{\sum_{r=1}^{R} \left( y_r(t) - w_{r,s}(t) \right)^2} $$

A neighborhood region $\psi_{cpn}(t)$ is set around the winner unit $c_s$ as

$$ \psi_{cpn}(t) = \psi_{cpn}(0) \cdot S \cdot \left( 1 - \frac{t}{Z_{cpn}} \right) + 0.5, $$

where $Z_{cpn}$ stands for the maximum number of learning iterations. Subsequently, $w_{r,s}$ and $w_{s,k}$ inside $\psi_{cpn}(t)$ are updated with the teaching signal $z_k(t)$ as shown below.

$$ w_{r,s}(t+1) = w_{r,s}(t) + \beta(t) \left( y_r(t) - w_{r,s}(t) \right) $$
$$ w_{s,k}(t+1) = w_{s,k}(t) + \gamma(t) \left( z_k(t) - w_{s,k}(t) \right) $$

Herein, $\beta(t)$ and $\gamma(t)$ are learning coefficients that decrease as learning progresses. This process is repeated until the maximum number of learning iterations is reached. Finally, the unit label $L_k$ of each Kohonen layer unit is decided from the maximized $w_{s,k}$ over the Grossberg layer units $k$. After learning, CPNs provide a recognition result based on winner-take-all competition for a set of input signals.
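Because CPNs are not provided by common machine-learning libraries, the following is a highly simplified NumPy sketch of the learning loop described above; the one-dimensional Kohonen layer, linearly decaying coefficients, and rectangular neighborhood are simplifying assumptions for illustration, not the authors' implementation.

import numpy as np

def train_cpn(Y, Z, S=64, Z_cpn=1000, beta0=0.5, gamma0=0.5, psi0=0.5, seed=0):
    """Y: (N, R) input vectors; Z: (N, K) one-hot teaching signals."""
    rng = np.random.default_rng(seed)
    N, R = Y.shape
    K = Z.shape[1]
    w_in = rng.random((S, R))        # input -> Kohonen weights w_{r,s}
    w_out = rng.random((S, K))       # Kohonen -> Grossberg weights w_{s,k}
    for t in range(Z_cpn):
        i = rng.integers(N)
        y, z = Y[i], Z[i]
        c = int(np.argmin(((y - w_in) ** 2).sum(axis=1)))    # winner unit
        psi = max(int(psi0 * S * (1 - t / Z_cpn) + 0.5), 1)  # shrinking neighborhood
        lo, hi = max(c - psi, 0), min(c + psi + 1, S)
        beta = beta0 * (1 - t / Z_cpn)                       # decaying coefficients
        gamma = gamma0 * (1 - t / Z_cpn)
        w_in[lo:hi] += beta * (y - w_in[lo:hi])              # competitive update
        w_out[lo:hi] += gamma * (z - w_out[lo:hi])           # supervised update
    labels = w_out.argmax(axis=1)    # unit label L_k per Kohonen unit
    return w_in, labels

def cpn_predict(x, w_in, labels):
    """Winner-take-all recognition for one input vector x."""
    return int(labels[np.argmin(((x - w_in) ** 2).sum(axis=1))])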

4. Datasets

4.1. Target Behavior Patterns

Figure 12 depicts photographs of each pose of the target behavior patterns. The following are the features and expected sensor responses for the respective patterns.
SLP
Sleeping: a subject is sleeping on a bed normally.
LOS
Longitudinal sitting: a subject is sitting longitudinally on the bed after rising.
LAS
Lateral sitting: a subject is sitting laterally on the bed after turning the body from longitudinal sitting.
TES
Terminal sitting: a subject is sitting at the terminal (edge) position of the bed, trying to leave it. Rapid and correct detection is necessary because this is the final situation before leaving the bed.
LEB
Left a bed: a subject has left the bed. Herein, sensor responses also disappear if a subject loses consciousness or faces a life crisis. For such circumstances, monitoring devices such as electrocardiographs are used; they are beyond our prediction targets.

4.2. Datasets Obtained for Conditions

We obtained the bed-leaving behavior pattern datasets in a simulated experimental room that resembled a clinical site. We used an electro-actuated bed (KA-36121R; Paramount Bed Co., Ltd.) equipped with three actuators for reclining the back and feet panels and for adjusting its height. We obtained the datasets without using the back panel reclining function to avoid load pattern changes on the bed. The route by which a subject leaves the bed is restricted to one side by two attached safety rails.
The subjects were 10 persons: nine men and one woman. Table 1 presents the profiles of all subjects. We set two protocols to obtain datasets with different characteristics. The first protocol comprises the same procedures as those of our earlier study [6]: each subject switched among the five behavior patterns at 20 s intervals. For this study, we call these continuous datasets (CDS). We obtained 10 sets of CDS from each subject. Herein, the data sampling rate was set to 50 Hz.
The second protocol comprises behavior patterns as discontinuous datasets. The order and duration of bed-leaving behavior patterns vary with the body parameters and health condition of each subject. For example, transitions from LOS back to SLP without changing to LAS occurred frequently. We consider that employing our sensor system at a clinical site would give rise to dramatically lower recognition accuracy if only fixed sequences were learned. As a basic consideration aimed at practical application, we therefore obtained datasets in which subjects performed their behavior pattern transitions without fixed sequences or time intervals. We designate these as discontinuous datasets (DDS). The data acquisition period was set to 15 min per person.
For calculating recognition accuracy, ground truth (GT) labels are indispensable for DDS. However, the burden of allocating GT labels is excessively high because each subject moved freely for 15 min. Therefore, we used a depth camera to record video images for annotation. We allocated GT labels to DDS manually through observation of the video images.

4.3. Sensor Output Signals

Figure 13 depicts output signals from the pad sensors. The vertical and horizontal axes respectively represent the output voltage and the elapsed time in seconds. The voltage range is up to ±1.2 V. Along the time axis, the subject changes behavior patterns in the order depicted in Figure 12. During the first 60 s, the subject was sleeping on the bed while turning their body; the output signals of the respective channels changed only slightly.
The output signals from CH5, which corresponds to the bed center, became high in LOS. This tendency demonstrates that the upper body weight was concentrated on this channel. The output signals from CH4 are salient in LAS because the body turns toward the lateral bed direction for the transition to LEB. In TES, the salient output signals shift from CH4 to CH3. The output signals disappear completely in LEB.
Figure 14 depicts the output signals from the rail sensor. No salient signals appear in SLP, LOS, or LAS. The output signals are noticeable in TES. After the boundary between TES and LEB, no output signals appear. The output signal tendency of the rail sensor indicates a selective feature for TES compared with the other behavior patterns.

5. Evaluation Experiment

5.1. Evaluation Criteria

Let T_num be the number of test signals recognized correctly (i.e., matching their GT labels) and G_num be the total number of GT labels. As the evaluation criterion, the recognition accuracy R for a test dataset is defined as

$$ R = \frac{T_{num}}{G_{num}} \times 100 \; [\%]. $$

Herein, we define the mean R as R_mean. Moreover, we define R for SLP, LOS, LAS, TES, and LEB as R_SLP, R_LOS, R_LAS, R_TES, and R_LEB, respectively.
We used K-fold cross-validation to evaluate the results of the machine-learning approaches. Herein, we set K = 5 based on the results of earlier studies [20,21], as sketched below.
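A minimal sketch of this evaluation under the definitions above, assuming a feature matrix X and a GT label vector y as NumPy arrays; the choice of RF here follows the best result in Section 5.2, and everything else uses scikit-learn defaults.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict

def evaluate(X, y, labels=("SLP", "LOS", "LAS", "TES", "LEB")):
    # Assumes y is encoded 0-4 in the order of `labels`.
    pred = cross_val_predict(RandomForestClassifier(), X, y, cv=5)  # K = 5 folds
    cm = confusion_matrix(y, pred)                     # rows: GT; columns: recognized
    r_class = 100.0 * cm.diagonal() / cm.sum(axis=1)   # R for each behavior pattern
    r_mean = 100.0 * (pred == y).mean()                # overall R
    return dict(zip(labels, r_class)), r_mean, cm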
We conducted four evaluation experiments using behavior pattern datasets of two types. Table 2 summarizes experimental details.

5.2. Comparison Results of Learning Algorithms

We evaluated the recognition accuracy of the machine-learning algorithms using CDS. Table 3 presents the comparison results. We used CPNs as the discriminator in our earlier study [6]. As a comparison result, R_mean using CPNs was 75.4%, the second lowest of the six algorithms. In contrast, R_mean using RF was 91.1%, the highest. For all behavior patterns except LEB, the recognition accuracies using RF were higher than those of the other algorithms. For LEB, LSVMs provided the highest recognition accuracy.
As a common tendency across all algorithms, R_SLP was the highest. In contrast, R_LOS, R_LAS, and R_TES were below 90.0%. We consider that the recognition accuracies for these three behavior patterns must be high because our system is intended to predict bed-leaving behavior. Therefore, we examine measures to improve the recognition accuracies for these three behavior patterns from the viewpoints of datasets and discriminators.

5.3. Experimental Results for CDS

Using CDS, we evaluated the capabilities of our originally developed sensors of two types. Figure 15 depicts the comparison results of R_mean for the pad sensors alone and for the pad sensors combined with the rail sensor. The comparison shows that R_mean of the combined sensors is 4.0 percentage points higher than that of the pad sensors alone. The recognition accuracies of the respective behavior patterns improved by 0.4 up to 12.3 percentage points. Particularly, R_TES exhibits the maximum improvement. Moreover, the experimental result demonstrates that the rail sensor, which intensively detects the grasping of the safety rail in TES, contributes to the improvement of the overall recognition accuracy.
As shown in Figure 14, the output voltage from the piezoelectric film in the rail sensor occasionally exceeded 0.1 V when a subject was in TES. We infer that sufficient recognition accuracy would be obtainable using the rail sensor alone if TES were the only target. However, false recognition occurred between TES and LEB, especially for the immediate transition from TES to LEB. Although the sensor signals should be approximately 0 V in LEB, sharp signals were output. This tendency occurs when a subject tries to leave the bed while holding or shaking the safety rail, which enhances vibration. Therefore, we infer that the combination of the rail sensor and the pad sensors is best for practical use because it avoids the problem that occurs when the rail sensor is used alone. Moreover, false recognition of LEB is avoidable because the recognition accuracy of the combined sensors showed relatively superior improvement for LAS, TES, and LEB.
We examined detailed recognition results using confusion matrixes. Table 4 and Table 5 respectively present the confusion matrixes for the pad sensors and the combined sensors. Specifically examining TES, which showed the greatest improvement in recognition accuracy, the number of false recognition instances was reduced for all behavior patterns after appending the rail sensor. Particularly, the number of false recognitions of TES as LEB was reduced from 30 signals to 4 signals. We infer that the correct discrimination between TES and LEB engenders the improved recognition accuracy.
Although the use of the rail sensor was aimed at improving R_TES, R_LAS also improved from 81.8% to 90.6% as a subsidiary contribution. The number of false recognition instances was reduced, except for those of SLP. Particularly, the number of false recognition instances of LAS as TES decreased by 26. The improved R_TES after appending the rail sensor produced fewer false recognition instances of LAS. As a result, the increase in correct recognition instances beyond TES alone produced an improved R_mean. We demonstrated that the addition of a sensor that can reliably recognize a single posture engenders improvement of R_mean. The combined sensors have clear benefits for bed-leaving behavior recognition compared with other configurations.

5.4. Experiment Results for DDS

Table 6 presents the recognition accuracies for each subject for DDS. Compared with those of CDS depicted in Figure 15, the recognition accuracies were lower for all five behavior patterns. Especially, R_LAS was drastically lower. This experimentally obtained result revealed that the recognition accuracy, except for R_SLP, which was the highest among the five behavior patterns, was strongly affected by the randomness in DDS.
The recognition accuracies of the respective subjects ranged from 43.3% at the lowest to 95.3% at the highest. Compared with the mean accuracy of 75.2%, the recognition accuracies were above the mean for seven subjects and below it for the remaining three subjects. Therefore, significantly lower accuracy for specific subjects decreased the overall recognition accuracy. We infer that this tendency is influenced by individual differences in behavior patterns. Each subject performed predetermined behavior patterns in CDS and free behavior patterns in DDS. We consider that the recognition accuracy was significantly lower because the behavior patterns varied among the subjects in DDS.
Table 7 presents the confusion matrix for all subjects. Table 8 presents the confusion matrix for Subject C, who had the lowest recognition accuracy. Numerous signals were falsely recognized as SLP. Particularly, correct recognition of LEB occurred for merely 2 of 179 signals; the other signals were falsely recognized as SLP. This trend demonstrates that false recognition occurred in a state in which SLP and the other behavior patterns were not distinguished. In DDS, R_mean was dramatically lower for particular subjects. We consider that this is attributable to variations of subject behavior between the learning datasets and the test datasets. Therefore, we consider that R_mean would improve if the learning datasets contained more diversity.

5.5. Integration of Learning Datasets

For applying our system at a clinical site, preparing a learning dataset for each subject might be troublesome and time-consuming. In addition, preserving accuracy becomes a problem of system reliability because generalization dropped for DDS. Therefore, we attempted to construct a generic classifier by combining the learning datasets of all subjects presented in Table 1. For this experiment, we used CDS for learning and DDS for validation.
Table 9 presents the recognition accuracies obtained before and after the integration of the learning datasets. The recognition accuracies of six of the ten subjects improved. Particularly, the recognition accuracies of Subjects C and G improved notably. Although the recognition accuracies of four subjects dropped, for three of them the drop was at most 2.0 percentage points. However, for Subject E, R_mean decreased by 5.6 percentage points, chiefly because of steep drops in R_LOS and R_LAS.
Figure 16 depicts the comparison results of recognition accuracy for each behavior pattern. All recognition accuracies improved with the integrated datasets. Particularly, R_LEB improved by 15.8 percentage points. Table 10 presents the confusion matrix for all subjects after the integration of the learning datasets. Compared with the results presented in Table 7, the numbers of false recognition instances were lower for all behavior patterns. Although the false recognition instances as SLP were numerous in Table 7, these results improved in Table 10. Moreover, false recognition instances appeared frequently for behavior patterns that were close to the GT labels, except for LAS. We consider it a challenging task to recognize the intermediate states between two neighboring behavior patterns, during which the body changes from one pattern to the other. To reduce false recognition instances, we infer that it is necessary to maintain the previous status until the recognition becomes stable whenever a present recognition result differs from the previous one. In contrast, false recognition instances of LAS were divided evenly among the other four behavior patterns. We infer that this tendency indicates the difficulty of distinguishing LAS from the other behavior patterns.
Through the integration of learning datasets among subjects, we achieved not only maintenance of generalization performance for DDS, but also marked prevention of false recognition. Although the recognition accuracy improved overall, several subjects showed low recognition accuracy. We conclude that this learning strategy does not ensure improved recognition accuracy for every subject. To improve this system, the following important tasks remain: improving generalization capability by collecting numerous datasets from numerous subjects; changing learning datasets according to subject profiles such as height or weight; performing incremental learning without stopping the system until sufficient accuracy is obtained; and temporarily constructing learning datasets specialized to each subject.

5.6. Discussion

For this study, we developed a sensor system that is inexpensive, convenient, and maintainable with advanced QoL for care recipients. As described in the Introduction, using a camera as a bed monitoring sensor can provide a low-cost system that obtains much information from subjects. However, it remains a challenging task to predict behavior patterns from images, even when state-of-the-art computer vision technologies are used. For example, OpenPose [22], a deep-learning-based approach, does not handle sleeping or lying positions; medical staff members must therefore observe the images directly. Moreover, aspects of human rights and QoL must be considered. In particular, it is impossible to monitor numerous subjects simultaneously with a few operators, and it is also difficult to recognize bed-leaving behavior patterns using only sensor responses, even when detailed analyses are conducted, because behavior patterns differ among people. Furthermore, monitoring with a camera imposes a mental load on patients because they feel as though they are under surveillance day and night. However, we have not yet evaluated this sensor system at hospitals or care facilities. We would like to conduct subjective and objective evaluations to validate our sensor system in a clinical environment without the use of cameras.

6. Conclusions

This paper presented a bed-leaving behavior recognition system comprising pad sensors installed on a bed, a rail sensor inserted in a safety rail, and a behavior pattern recognizer based on machine-learning algorithms. We obtained benchmark datasets of continuous and discontinuous behavior patterns from 10 subjects. The experimentally obtained results revealed that RF achieved the highest recognition accuracy on our benchmark datasets. Compared with the results of our earlier study using CPNs, the recognition accuracies improved by 20.7 percentage points for LOS and 21.9 percentage points for TES. After appending the rail sensor to the pad sensors, the mean recognition accuracy improved by 4.0 percentage points, including a 12.3 percentage point improvement for TES. Regarding the difference in behavior pattern transitions, the mean recognition accuracy decreased by 22.9 percentage points for the discontinuous datasets. To improve the generalization of our system, the datasets of all subjects were combined for learning. The mean recognition accuracy improved by 4.8 percentage points, with considerable improvement for two subjects.
In future work, we aim to apply our proposed sensor system to clinical sites such as care facilities or single seniors' homes for security and safety observation that simultaneously maintains QoL and privacy. We will work toward steady detection to expand the application range of our method while increasing the number of subjects. Additionally, we must demonstrate the system's reliability through long-term monitoring.

Author Contributions

Conceptualization, H.M. and N.S.; methodology, H.M. and K.N.; prototype sensor development, N.S.; embedded software, H.M. and K.N.; validation, N.S. and K.S.; formal analysis, H.M.; investigation, N.S.; resources, H.M.; data curation, K.N. and K.S.; writing—original draft preparation, H.M. and H.W.; writing—review and editing, H.M. and H.W.; visualization, K.N. and K.S.; supervision, H.W.; project administration, H.M.; funding acquisition, N.S. and H.M. All authors have read and agreed to the published version of the manuscript.

Funding

This study was conducted with the support of the Ministry of Internal Affairs and Communications, Strategic Information and Communications R&D Promotion Program (SCOPE 152302001) in Japan.

Acknowledgments

We are grateful to Katsumi Wasaki and Masaaki Niimura of Shinshu University for helpful discussions related to this study as our joint study project. Moreover, we would like to thank ten people who cooperated as subjects to produce datasets for this study.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CDS       Continuous DataSets
CPNs      Counter Propagation Networks
DDS       Discontinuous DataSets
GT        Ground Truth
IR        InfraRed
kNN       k-Nearest Neighbor
LAS       LAteral Sitting
LEB       LEft a Bed
LOS       LOngitudinal Sitting
LSVMs     Linear Support Vector Machines
NB        Naive Bayes
PET       Polyethylene Terephthalate
QoL       Quality of Life
RBF-SVMs  Radial Basis Function Support Vector Machines
RF        Random Forests
SLP       SLeePing
SOMs      Self-Organizing Maps
SVMs      Support Vector Machines
TES       TErminal Sitting

References

  1. Takahashi, Y. Research on the Human Resources Problems due to Occupational Characteristics of the Nursing Profession. Ph.D. Thesis, Shobi University, Kawagoe City, Japan, June 2016.
  2. Matsumoto, K. A Study on Working Environment and Job Satisfaction of Professional Caregivers and Their Turnover; Bulletin of Kumamoto University: Kumamoto City, Japan, 2011.
  3. Mitadera, Y.; Akazawa, K. Analysis of Incidents Occurring in Long-Term Care Insurance Facilities. Bull. Soc. Med. 2013, 30, 123–134.
  4. Asano, H.; Suzuki, T.; Okamoto, J.; Muragaki, Y.; Iseki, H. Bed Exit Detection Using Depth Image Sensor. J. TWMU 2014, 84, 45–53.
  5. Kawamura, K.; Okuno, Y.; Hirose, Y.; Ozone, K.; Tomita, K. Detection of Various Postures and Gait Using a Wearable Triaxial Accelerometer. J. Phys. Ther. Sci. 2017, 32, 435–438.
  6. Madokoro, H.; Shimoi, N.; Sato, K. Unrestrained Multiple-Sensor System for Bed-Leaving Detection and Prediction. Nurs. Health 2015, 3, 58–68.
  7. Madokoro, H.; Nakasho, K.; Shimoi, N.; Woo, H.; Sato, K. Invisible Sensors for Early Prediction of Discontinuous Bed-Leaving Behavior Patterns. In Proceedings of the 5th International Conference on Sensors Engineering and Electronics Instrumentation Advances, Tenerife, Spain, 25–27 September 2019; pp. 74–80.
  8. Madokoro, H.; Shimoi, N.; Sato, K.; Xu, L. Development of Unrestrained and Hidden Sensors Using Piezoelectric Films for Recognition and Prediction of Bed-Leaving Behaviors. In Proceedings of the International Symposium on Stability, Vibration, and Control of Machines and Structures, Budapest, Hungary, 16–18 June 2016; pp. 133–144.
  9. Motegi, M.; Matsumura, N.; Yamada, T.; Muto, N.; Kanamaru, N.; Shimokura, K.; Abe, K.; Morita, Y.; Katsunishi, K. Analyzing Rising Patterns of Patients to Prevent Bed-related Falls (Second Report). Trans. Jpn. Soc. Health Care Manag. 2011, 12, 25–29.
  10. Ogawa, K. Evidence-Based Nursing Ergonomics and Body-Mechanics; Tokyo Denki University Press: Tokyo, Japan, 2008.
  11. Ministry of Health, Labour and Welfare in Japan. National Health and Nutrition Survey Report 2012. Available online: https://www.mhlw.go.jp/bunya/kenkou/eiyou/h24-houkoku.html (accessed on 17 January 2020).
  12. Hatsukari, T.; Shiino, T.; Murai, S. The Reduction of Tumbling and Falling Accidents Based on a Built-in Patient Alert System in the Hospital Bed. J. Sci. Lab. 2012, 88, 94–102.
  13. Frank, E.; Trigg, L.; Holmes, G.; Witten, I.H. Naive Bayes for Regression. Mach. Learn. 2000, 41, 5–15.
  14. Altman, N.S. An Introduction to Kernel and Nearest-Neighbor Nonparametric Regression. Am. Stat. 1992, 46, 175–185.
  15. Vapnik, V.; Lerner, A. Pattern Recognition Using Generalized Portrait Method. Autom. Remote Control 1963, 24, 774–780.
  16. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32.
  17. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
  18. Bayes, T. An Essay towards Solving a Problem in the Doctrine of Chances. Philos. Trans. R. Soc. B 1763, 53, 370–418.
  19. Kohonen, T. Self-Organizing Maps; Springer Series in Information Sciences; Springer: Berlin/Heidelberg, Germany, 1995.
  20. Kohavi, R. A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection. In Proceedings of the 14th International Joint Conference on Artificial Intelligence, Montreal, QC, Canada, 20–25 August 1995; Volume 2, pp. 1137–1143.
  21. Arlot, S.; Celisse, A. A Survey of Cross-Validation Procedures for Model Selection. Stat. Surv. 2010, 4, 40–79.
  22. Cao, Z.; Simon, T.; Wei, S.E.; Sheikh, Y. Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7291–7299.
Figure 1. Whole system structure.
Figure 2. Interior architecture and pad sensor appearance.
Figure 3. Interior architecture and appearance of the rail sensor.
Figure 4. Load test: (a) fixture, (b) schematic diagram, (c) load test, and (d) the load test machine (Multi Force Analyzer FWT-1000; DigiTech Co. Ltd., Osaka City, Japan).
Figure 5. Relation between output voltage and load of sensors.
Figure 6. Relation between output voltage and load speed.
Figure 7. Relation between output voltage and load of side and orientation.
Figure 8. Sensor configuration.
Figure 9. Sensor installation.
Figure 10. Sensor measurement board with ZigBee module.
Figure 11. Calculation of signal features.
Figure 12. Target behavior patterns.
Figure 13. Output signals from pad sensors.
Figure 14. Output signals from rail sensor.
Figure 15. Comparison results of recognition accuracies.
Figure 16. Comparison result of recognition accuracy in each behavior pattern.
Table 1. Profile of subjects.
Subject       A    B    C    D    E    F    G    H    I    J
Height [cm]   161  169  177  168  170  167  177  178  170  165
Weight [kg]   51   66   91   51   60   61   84   80   78   70
Sex           F    M    M    M    M    M    M    M    M    M
Table 2. Experimental conditions.
Section       Learning Dataset  Test Dataset  Discriminator
Section 5.2   CDS               CDS           Each subject
Section 5.3   CDS               CDS           Each subject
Section 5.4   CDS               DDS           Each subject
Section 5.5   CDS               DDS           All subjects
Table 3. Comparison results of learning algorithms [%].
Algorithm   R_SLP  R_LOS  R_LAS  R_TES  R_LEB  R_mean
NB          98.7   32.2   27.4   25.8   0.9    53.2
kNN         98.5   85.0   79.5   80.8   86.1   89.0
LSVMs       96.2   63.8   64.4   51.1   89.4   78.2
RBF-SVMs    98.0   62.4   64.6   67.9   88.8   81.0
RF          98.8   88.6   81.8   84.5   88.5   91.1
CPNs        88.1   67.9   59.7   62.5   75.9   75.4
Table 4. Confusion matrix for the results of the pad sensors (rows: GT; columns: recognized).
      SLP   LOS   LAS   TES   LEB
SLP   1047  5     3     3     2
LOS   21    443   14    3     19
LAS   12    9     319   36    14
TES   3     8     18    321   30
LEB   2     6     12    18    292
Table 5. Confusion matrix for the results of the pad sensors and the rail sensor (rows: GT; columns: recognized).
      SLP   LOS   LAS   TES   LEB
SLP   1052  3     3     0     2
LOS   19    449   14    1     17
LAS   13    3     354   10    10
TES   1     2     8     365   4
LEB   2     10    12    3     303
Table 6. Recognition accuracies for each subject for DDS [%].
Subject   R_SLP  R_LOS  R_LAS  R_TES  R_LEB  R_mean
A         67.1   65.8   65.6   69.4   84.5   68.9
B         97.8   51.4   22.8   79.6   78.8   82.4
C         95.2   66.0   8.8    10.9   1.1    43.3
D         98.9   84.8   42.2   89.6   94.3   84.6
E         93.9   78.1   60.3   88.1   82.9   83.7
F         99.8   89.2   95.1   90.3   95.2   95.3
G         51.3   43.9   47.0   89.1   15.0   51.6
H         98.1   64.0   51.1   78.3   79.7   73.5
I         95.0   71.0   57.7   61.7   88.9   86.5
J         95.9   55.7   64.6   74.0   30.3   82.1
Average   89.3   67.0   51.5   73.1   65.1   75.2
Table 7. Confusion matrix of all subjects (rows: GT; columns: recognized).
      SLP   LOS   LAS   TES   LEB
SLP   4097  356   142   5     26
LOS   312   1283  162   11    96
LAS   179   201   795   116   164
TES   295   21    72    974   135
LEB   183   55    26    58    641
Table 8. Confusion matrix of Subject C (rows: GT; columns: recognized).
      SLP   LOS   LAS   TES   LEB
SLP   335   17    0     0     0
LOS   52    171   1     1     34
LAS   141   4     16    9     11
TES   286   0     0     35    0
LEB   177   0     0     0     2
Table 9. Recognition accuracies before and after the integration of learning datasets [%].
Subject   Before  After  Difference
A         68.9    67.1   −1.8
B         82.4    86.9   4.5
C         43.3    77.9   34.6
D         84.6    84.7   0.1
E         83.7    78.1   −5.6
F         95.3    93.5   −1.8
G         51.6    66.4   14.8
H         73.5    77.4   3.9
I         86.5    87.0   0.5
J         82.1    80.8   −1.3
Average   75.2    80.0   4.8
Table 10. Confusion matrix of all subjects after the integration of learning datasets (rows: GT; columns: recognized).
      SLP   LOS   LAS   TES   LEB
SLP   4190  310   53    66    7
LOS   315   1284  162   13    90
LAS   150   131   909   117   148
TES   48    45    141   1166  97
LEB   21    30    37    74    801
