Machine Learning-Based Classification of Human Behaviors and Falls in Restroom via Dual Doppler Radar Measurements

This study presents a radar-based remote measurement system for classification of human behaviors and falls in restrooms without privacy invasion. Our system uses a dual Doppler radar mounted onto a restroom ceiling and wall. Machine learning methods, including the convolutional neural network (CNN), long short-term memory, support vector machine, and random forest methods, are applied to the Doppler radar data to verify the model’s efficiency and features. Experimental results from 21 participants demonstrated the accurate classification of eight realistic behaviors, including falling. Using the Doppler spectrograms (time–velocity distribution) as the inputs, CNN showed the best results with an overall classification accuracy of 95.6% and 100% fall classification accuracy. We confirmed that these accuracies were better than those achieved by conventional restroom monitoring techniques using thermal sensors and radars. Furthermore, the comparison results of various machine learning methods and cases using each radar’s data show that the higher-order derivative parameters of acceleration and jerk, and the motion information in the horizontal direction are the efficient features for behavior classification in a restroom. These findings indicate that daily restroom monitoring using the proposed radar system accurately recognizes human behaviors and allows early detection of fall accidents.


Department of Electronic and Computer Engineering
Ritsumeikan University
525-8577 Kusatsu, Japan

Masao Masugi masugi@fc.ritsumei.ac.jp 
Department of Electronic and Computer Engineering
Ritsumeikan University
525-8577 Kusatsu, Japan

Received: 2 February 2022; Accepted: 20 February 2022; Published: 22 February 2022. DOI: 10.3390/s22051721. Saho, K.; Hayashi, S.; Tsuyama, M.; Meng, L.; Masugi, M.
Keywords: Doppler radar application; restroom; fall detection; human behavior classification; remote monitoring

Introduction

Aging is a global phenomenon responsible for various problems related to shortened health expectancy and the sudden death of elderly people [1]. Therefore, monitoring systems for the early detection of accidents and abnormal behaviors in elderly adults, such as falling, have recently been developed based on sensors and Internet-of-Things technologies [2]. However, such systems are not used inside restrooms due to privacy concerns. The early detection of falls and abnormal behaviors in restrooms is important because the restroom is one of the dangerous spaces in the home where elderly people are likely to fall [3]. Even though accelerometry-based approaches for fall detection in restrooms have been proposed [4,5], they require the subjects to wear the sensor devices.

Few studies have investigated camera-based approaches for the remote monitoring of restrooms [6] based on image-processing-based fall detection techniques [7,8]. However, their measurement accuracy depends on lighting conditions and the subjects' clothing. In addition, installing cameras in restrooms is challenging because of privacy issues. Hence, investigations of restroom monitoring have been limited. Even though infrared thermal-sensor-based posture estimation in restrooms has been proposed [9,10], its measurement accuracy depends on the temperature of the toilet and of the subject's clothing.

Radar technology is a promising candidate for solving the above-described problems because it does not invade privacy during measurement and is not affected by temperature conditions or clothing. Therefore, radar-based motion recognition and fall detection are active research areas. Various approaches based on machine learning of radar images, such as time-Doppler, time-range, or range-Doppler images, have been proposed and demonstrated in realistic situations [11][12][13]. However, research on the use of radar in restrooms is quite limited. Though the detection of falls in restrooms using radar systems and the classification of normal and abnormal behaviors by range (distance) information [14,15] have been studied based on classical signal detection or discriminant analysis methods, it is difficult to determine what is happening in a restroom when only abnormal behaviors, such as falls, are detected. Moreover, their accuracy and practicality were both insufficient.

To solve these problems, our recent paper reported the long short-term memory (LSTM)-based classification of eight types of behaviors and falls in a restroom with approximately 80% accuracy using the Doppler radar-measured velocity time series [16].

As a significant extension of our previous study [16], this study presents a more accurate classification using the convolutional neural network (CNN)-based approach and investigates the accurate classification of human behavior in restrooms via various machine learning methods. The contributions of this study are as follows:

• The efficient implementation of Doppler radars and experimental examples for privacy-protected restroom monitoring are provided for a realistic environment; this is a significant contribution because there are only a few reports on realistic restroom monitoring.

• Efficient classification models and the types of their input data for the radar-based restroom-monitoring system are clarified via a thorough comparison of radar-based motion recognition approaches.

• The classification of human behaviors and falls in a restroom with above 95% accuracy is demonstrated. This result shows a significant improvement over other conventional studies on radar-based restroom monitoring [14][15][16].

This paper is an extended version of our conference paper [17], which simply presented the results for the CNN-based approach. In this study, we added the results of the comparison with various other machine learning-based classification methods, the details of the implementation of the classification methods, a comparison with other conventional studies, and an investigation to elucidate the efficient features.


Related Work

Machine learning-based human motion classification has been widely investigated with various sensing technologies such as cameras, depth sensors, and accelerometers [18][19][20][21]. For various applications, the practicality and versatility of various machine-learning models [22][23][24] have been demonstrated.

In the field of radar technology, the machine learning methods established in the abovementioned studies have also been applied to human motion recognition using the Doppler and/or range information obtained via radar sensing [11,25,26]. For example, radar-based fall detection has been widely studied [27,28], and various efficient methods using machine learning techniques have been proposed [29,30]. In recent years, accurate fall detection using radars has been achieved with machine learning methods such as CNN [12,[31][32][33][34], LSTM [13,34], random forest (RF) [35], and support vector machine (SVM) [36]. These classification techniques are properly selected based on the features of the problem and the objectives of the radar data analysis. The CNN-based classification technique has achieved accurate classification of human motion using Doppler radar spectrograms [11,12,35,37]. Additionally, the LSTM-based technique has achieved better accuracy for continuous classification problems [13,37,38]. Although these techniques can achieve relatively accurate classifications, the mechanisms and/or factors behind the classifications are generally unclear. Thus, classical motion parameter-based approaches are still important in contemporary radar technology for developing motion recognition systems whose mechanisms and performance are guaranteed [29,37,39,40]. The methodology for designing radar-based motion recognition systems using these various machine learning approaches is being established for various types of experimental data.

However, as stated in the Introduction, most studies on motion recognition and fall detection using radar have not considered application to the restroom. Thus, there are only a few studies on radar-based restroom monitoring [14][15][16]. For example, in [15], the classification accuracy for seven types of behaviors and falls was approximately 60%. The low accuracy of conventional radar systems may be due either to the use of range information or to the use of classical detection/classification methods. Thus, the efficient machine learning methods that are suitable for radar-based restroom monitoring and their appropriate input data have not been investigated at all. For this purpose, our previous study [16] achieved accurate classification of eight human behaviors and falls in the restroom. This study aimed to extend that previous study with respect to classification accuracy and to clarify the efficient models and features for behavior classification in radar-based restroom monitoring.


Experiments for Dataset Generation


Doppler Radar Experiments

Figure 1 shows the experimental site and an outline of the measurement system. Twenty-one healthy young men (age: 22.4 ± 1.1 years; height: 173.8 ± 5.1 cm) consented to participate in this study and were instructed regarding the testing procedures prior to the experiments. Informed consent was obtained from all participants. Each participant performed the following eight types of behaviors three times: (a) opening the toilet lid, (b) pulling down the pants, (c) sitting, (d) taking the toilet paper, (e) standing, (f) pulling up the pants, (g) closing the toilet lid, and (h) falling. Falling was defined as the motion of falling forward from a seated position, which is one of the realistic falling motions in restrooms [6]; for example, a person seated on the toilet falls when leaning forward slowly.




We used 24 GHz continuous-wave radars (ILT office, BSS-110) with ±14° plane directivity, mounted as shown in Figure 1. The radars were installed above (ceiling radar) and behind (wall radar) the participant; the ceiling and wall radars measured motion along the vertical and horizontal directions, respectively. The Doppler radar transmitted a 24 GHz sinusoidal wave with an effective isotropic radiated power of [...], a sampling frequency of [...] Hz, and a measurement velocity range of −1.875 to 1.875 m/s.
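As a quick consistency check, the stated velocity range maps to Doppler frequency through the continuous-wave radar relation f_d = 2v/λ, with λ = c/f_c. The short computation below uses only the 24 GHz carrier from the text; everything else is standard physics, not a value from the paper.

```python
# Doppler relation for a continuous-wave radar: f_d = 2 * v / wavelength.
C = 3.0e8          # speed of light [m/s]
F_CARRIER = 24e9   # carrier frequency from the text [Hz]

wavelength = C / F_CARRIER              # 12.5 mm at 24 GHz
v_max = 1.875                           # stated velocity limit [m/s]
f_doppler_max = 2 * v_max / wavelength  # corresponding Doppler shift [Hz]
```

The ±1.875 m/s range thus corresponds to a ±300 Hz Doppler band; a fall velocity of, say, 1 m/s appears as a 160 Hz shift, comfortably inside this band.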


Generation of Spectrogram Dataset

Similar to our previous study [41], the short-time Fourier transforms (STFT) of the received signals were calculated to generate the spectrogram images (time-velocity-power distribution images) as follows. First, we removed the zero-Doppler frequency components from the received signals using a one-dimensional Butterworth high-pass filter with a cutoff frequency of 30 Hz to eliminate echoes from static objects such as walls and toilet seats. Then, the STFT was applied to the filtered signals to generate the spectrograms. Finally, we removed the components with a received power density of less than 0 dB/Hz, assuming that these components corresponded to random noise.
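The preprocessing steps above can be sketched as follows. The 30 Hz cutoff and the 0 dB/Hz noise floor come from the text; the sampling rate, filter order, and STFT window sizes are illustrative assumptions, since the paper does not state them.

```python
import numpy as np
from scipy.signal import butter, filtfilt, stft

def make_spectrogram(iq, fs=2000.0, cutoff_hz=30.0, floor_db=0.0):
    """Turn a complex baseband radar signal into a time-velocity power map.

    Steps mirror the text: (1) Butterworth high-pass filtering to suppress
    static clutter, (2) STFT, (3) flooring of bins below the power-density
    threshold. fs and window lengths are illustrative assumptions.
    """
    # 1) High-pass filter (30 Hz cutoff) removes echoes from static
    #    objects such as walls and the toilet seat.
    b, a = butter(4, cutoff_hz, btype="highpass", fs=fs)
    filtered = filtfilt(b, a, iq)

    # 2) Two-sided STFT so that positive and negative Doppler shifts
    #    (motion toward/away from the radar) are both retained.
    f, t, Z = stft(filtered, fs=fs, nperseg=256, noverlap=192,
                   return_onesided=False)

    # 3) Power density in dB/Hz; bins under the floor are treated as noise.
    power_db = 10.0 * np.log10(np.abs(Z) ** 2 + 1e-12)
    power_db[power_db < floor_db] = floor_db

    # Map Doppler frequency to radial velocity: v = wavelength * f_d / 2.
    wavelength = 3.0e8 / 24e9
    velocity = np.fft.fftshift(f) * wavelength / 2.0
    return t, velocity, np.fft.fftshift(power_db, axes=0)
```

Feeding a synthetic 150 Hz Doppler tone through this pipeline produces a spectrogram whose peak sits near v = 150 × 0.0125 / 2 ≈ 0.94 m/s, as expected from the Doppler relation.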

Figures 2 and 3 show examples of the generated spectrograms for all behaviors of the ceiling and wall radars, respectively. Motion toward the ceiling and toward the back of the subject is positive for the ceiling and wall radars, which measure the head and torso, respectively. For example, we confirmed the significant negative velocity components in Figures 2h and 3h that correspond to the fall forward motion of behavior (h). Similarly, the motion characteristics of each of the other behaviors could also be confirmed. For behaviors (d) and (f), no characteristic velocity components were obtained because they did not involve large motions compared with the other behaviors. Although the different features of each behavioral spectrogram were confirmed to some extent, some behaviors were difficult to classify. For example, the spectrograms of behaviors (b) and (e) in Figure 2 have relatively similar characteristics. Therefore, this study aimed to classify the behaviors using various machine learning methods.


Implementation of the Machine Learning-Based Classification Methods

This study implemented three types of classification methods and compared their accuracy to determine the most efficient method and investigate the features for efficiently classifying behaviors and falls in a restroom.


Spectrogram Image-Based Method Using CNN

This method (the CNN method) directly uses the generated spectrogram images as the CNN input. Figure 4 shows the process and structure of the network. The spectrogram PNG images of size 168 × 218 generated from the two radars were input into the CNN. The CNN has a structure similar to AlexNet [22]; however, to avoid overfitting, we used a batch normalization layer instead of the dropout layer. To fuse the two images obtained from the dual radars, we constructed two similar CNNs and combined their outputs using a concatenate layer, which was then used in the fully-connected layer to determine the output class. A stochastic gradient descent with momentum optimization algorithm was used for the network optimization. We trained for 100 epochs with a learning rate of 0.01 and a batch size of 8. In addition, to compare the classification accuracy of the single and dual radars, the input from a single spectrogram image was fed into the CNN structure without the concatenate layer. This classification method using spectrogram images and CNN is known as the most efficient method for various applications of radar motion classification [11]. Although the efficient input data and CNN structure depend on the application, many studies demonstrated the best accuracy with the CNN-based method compared to the use of other classifiers. However, existing studies fail to explain the classification mechanism or to determine the efficient features. Therefore, this study compared the proposed CNN method with other methods to determine the efficient features and the reasons for classification.
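A minimal sketch of such a dual-branch network, assuming PyTorch, is given below. The AlexNet-like branches, batch normalization instead of dropout, concatenation-based fusion, SGD with momentum, and the learning rate of 0.01 follow the description above; the branch layer sizes and the momentum value are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RadarBranch(nn.Module):
    """One CNN branch per radar; BatchNorm replaces dropout as in the text.
    Channel counts and kernel sizes are illustrative, not the authors' exact net."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),   # fixed-size output per branch
        )
    def forward(self, x):
        return torch.flatten(self.features(x), 1)   # (batch, 32*4*4)

class DualRadarCNN(nn.Module):
    """Two branches (ceiling/wall spectrograms) fused by concatenation,
    then a fully-connected head over the eight behavior classes."""
    def __init__(self, n_classes=8):
        super().__init__()
        self.ceiling = RadarBranch()
        self.wall = RadarBranch()
        self.head = nn.Sequential(nn.Linear(2 * 32 * 4 * 4, 128), nn.ReLU(),
                                  nn.Linear(128, n_classes))
    def forward(self, x_ceiling, x_wall):
        fused = torch.cat([self.ceiling(x_ceiling), self.wall(x_wall)], dim=1)
        return self.head(fused)

# Optimizer from the text: SGD with momentum, lr = 0.01, batch size 8;
# the momentum value 0.9 is an assumed default.
model = DualRadarCNN()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
```

Dropping one branch and halving the head's input width gives the single-radar variant described above.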


Spectrogram Envelope-Based Method Using LSTM

The LSTM method uses the velocity time-series spectrogram envelopes, as shown in Figure 5, for classification [16]. We extracted three types of envelopes from the spectrograms with the same process as in [41]: the upper envelope v_u(t), the lower envelope v_l(t), and the power-weighted mean velocity v_m(t). These extracted envelopes were then input into the LSTM as outlined in Figure 6. The data length of each envelope was 102 points, and the dimensions of the input data for the single and dual radar fusion were 102 × 3 and 102 × 6, respectively. We empirically optimized the hyperparameters using an Adam optimizer [23]. Thus, the number of hidden units was 100, the batch size was 64, the learning rate was 0.001, and the number of epochs for training was 300.
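A minimal numpy sketch of how the three envelopes can be read off a time-velocity spectrogram is shown below; the exact extraction procedure is that of [41], so the simple thresholding rule used here is an assumption.

```python
import numpy as np

def extract_envelopes(spec_db, velocity, noise_db=0.0):
    """Extract v_u(t), v_l(t), v_m(t) from a time-velocity spectrogram.

    spec_db: (n_velocity, n_time) power map in dB; velocity: (n_velocity,)
    axis. Bins above the noise floor are treated as signal (an assumed rule).
    """
    n_time = spec_db.shape[1]
    v_u = np.zeros(n_time); v_l = np.zeros(n_time); v_m = np.zeros(n_time)
    for k in range(n_time):
        above = spec_db[:, k] > noise_db           # bins carrying signal power
        if not above.any():
            continue                               # silent frame -> 0 m/s
        v_u[k] = velocity[above].max()             # upper envelope
        v_l[k] = velocity[above].min()             # lower envelope
        p = 10.0 ** (spec_db[above, k] / 10.0)     # linear power weights
        v_m[k] = (p * velocity[above]).sum() / p.sum()  # power-weighted mean
    return v_u, v_l, v_m
```

For the LSTM input, the three 102-point envelopes are stacked column-wise, giving the 102 × 3 array per radar and 102 × 6 after concatenating the two radars, as stated above.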

Motion Parameter-Based Methods

These methods use kinematic parameters extracted from the spectrogram envelopes for classification and can directly obtain efficient motion-feature parameters for behavior classification, as shown in Figure 7. First, we extracted the three envelopes v_u(t), v_l(t), and v_m(t), as with the LSTM method. We calculated four representative values of mean, maximum, minimum, and standard deviation with respect to time for each envelope. Then, we calculated the time derivatives of the envelopes to obtain the acceleration and jerk time series. For example, for v_m(t), we calculated the acceleration time series a_m(t) = dv_m(t)/dt and the jerk time series j_m(t) = da_m(t)/dt. Empirically designed moving-average low-pass filters with an average length of 0.15 s were used to remove small errors in each time series. As with the velocity time series, we also calculated the four representative values of a_m(t) and j_m(t). The same process was applied to v_u(t) and v_l(t). Thus, we extracted 4 (parameters) × 3 (envelopes) × 3 (time series of velocity, acceleration, and jerk) = 36 parameters for each radar. For the dual radar fusion, 36 × 2 = 72 parameters were obtained as candidate feature parameters for classification. From these parameters, efficient feature parameters for each classifier were automatically selected using the filter method [24]. The filter method determines the relevance of each parameter for classification, and the top 20% is selected. We selected the widely used random forest (RF) and support vector machine (SVM) as the classifiers in this study. Their hyperparameters were optimized using a grid search, and the Gaussian kernel was used for the SVM.
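The parameter extraction pipeline above can be sketched as follows. The differentiation, the 0.15 s moving-average filters, and the 4 × 3 × 3 = 36 parameter count follow the text; the envelope sampling rate is an assumed value.

```python
import numpy as np

FRAME_RATE = 50.0                     # assumed envelope sampling rate [Hz]
SMOOTH_LEN = int(0.15 * FRAME_RATE)   # 0.15 s moving-average length

def smooth(x):
    """Moving-average low-pass filter applied after each differentiation."""
    kernel = np.ones(SMOOTH_LEN) / SMOOTH_LEN
    return np.convolve(x, kernel, mode="same")

def stats(x):
    """The four representative values used in the text."""
    return [x.mean(), x.max(), x.min(), x.std()]

def motion_parameters(v_u, v_l, v_m, dt=1.0 / FRAME_RATE):
    """Build the 4 x 3 x 3 = 36 parameter vector for one radar."""
    params = []
    for v in (v_u, v_l, v_m):
        a = smooth(np.gradient(v, dt))   # acceleration time series
        j = smooth(np.gradient(a, dt))   # jerk time series
        for series in (v, a, j):
            params.extend(stats(series))
    return np.array(params)              # dual radar: concatenate -> 72
```

Concatenating the two radars' outputs yields the 72 candidate parameters from which the filter method then selects the top 20%.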


Evaluation and Discussion


Main Evaluation Results


We evaluated and compared the classification accuracy of the four classification methods described above using hold-out validation. For all of the methods, we also compared the accuracy for three cases: only the ceiling radar data, only the wall radar data, and the fused data from the two radars. The classification model was first trained using 80% of the data for each case and then tested with the remaining 20%. We performed 30 trials of the test process by randomly varying the training data.
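The evaluation protocol can be sketched as below; a trivial nearest-centroid classifier stands in for the CNN/LSTM/RF/SVM models so that the sketch stays self-contained.

```python
import numpy as np

def holdout_trials(X, y, n_trials=30, train_frac=0.8, seed=0):
    """Repeat a random 80/20 hold-out split and report mean/std accuracy.

    A nearest-centroid stand-in classifier is used here; in the study the
    CNN/LSTM/RF/SVM models fill this role.
    """
    rng = np.random.default_rng(seed)
    accuracies = []
    for _ in range(n_trials):
        idx = rng.permutation(len(y))
        n_train = int(train_frac * len(y))
        tr, te = idx[:n_train], idx[n_train:]
        # "Train": class centroids estimated from the training split.
        classes = np.unique(y)
        centroids = np.stack([X[tr][y[tr] == c].mean(axis=0) for c in classes])
        # "Test": assign each held-out sample to its nearest centroid.
        d = np.linalg.norm(X[te][:, None, :] - centroids[None], axis=2)
        pred = classes[d.argmin(axis=1)]
        accuracies.append((pred == y[te]).mean())
    return np.mean(accuracies), np.std(accuracies)
```

Reporting the mean and standard deviation over the 30 trials gives exactly the quantities summarized in Table 1.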

Table 1 summarizes the mean and standard deviation of the classification accuracies over the 30 test trials for the four classification methods. The CNN method achieved the best accuracy, and the accuracy improved for the CNN and RF methods when dual radar data were used. Therefore, we conclude that the motions in both the vertical and horizontal directions capture the differences among the assumed behaviors and falls.

The results from the CNN method based on the convergence curve are shown in Figure 8, and the confusion matrix is further discussed to validate its performance. No overfitting was observed in either the test or training process. The accuracy in the test process converged in less than 50 epochs. Table 2 shows the confusion matrices for the data from the ceiling, wall, and dual radars. The classification accuracies of "(f) pulling up the pants" and "(b) pulling down the pants" are worse for the ceiling and wall radar data, respectively. However, the classification accuracy of (f) improves when the fused data are considered, whereas that of (b) is not improved. The classification accuracy of "(h) falling" is 100% in all cases, which is the most important result for the practical use of fall detection.
(Table 2: confusion matrices for the ceiling, wall, and dual radar data; each cell represents the results for the ceiling/wall/dual radars.)


Discussion on Efficient Features

This section discusses the efficient features measured with each radar when classifying human behaviors in restrooms. First, we discuss the effectiveness of the data from each radar and of the fused data. Tables 3-5 show the confusion matrices for the LSTM, RF, and SVM methods, respectively. As indicated in the confusion matrices of the CNN (Table 2) and LSTM methods, all behaviors and falls are accurately classified using the deep learning methods. However, we can see different classification accuracies for some classes. For example, as indicated in Table 2, the classification accuracy for behaviors (b) and (g) was worse in the results of the CNN method with the ceiling radar. In contrast, these were accurately classified with the LSTM method with the ceiling radar data, as shown in Table 3. These results indicate that the behaviors accurately classified by these methods varied because of the differences in the features included in the spectrogram images and envelopes. In addition, for the motion parameter-based methods (the RF and SVM methods), behaviors (b) and (g) were classified with better accuracy, as indicated in Tables 4 and 5, even though the overall accuracies were significantly worse than with the CNN method. Because the motion parameters were extracted from the envelopes that were also used in the LSTM method, the efficient features for the classification might be included in the spectrogram envelopes extracted from the dual radars. In the following, we discuss the efficient features and factors of our results.

Next, we discuss the effectiveness of using dual radar data. Similar to the results for the CNN method, better performance was observed with the wall radar data than with the ceiling radar data. These results indicate that the motion information in the horizontal direction obtained with the wall radar includes significant information for classifying the assumed human behaviors. Another reason is that the wall radar received the data for the whole body, whereas the ceiling radar mainly captured the motion of the head. The confusion matrices further confirm the differences between the two radars' results for the classified behaviors. In particular, the confusion matrices of the RF and SVM methods in Tables 4 and 5 indicate that combining the data from the two radars significantly improves the classification accuracy because the data from the two radars complement each other. Similar accuracy improvements based on the fusion of dual radar data can also be confirmed from the confusion matrices of the other methods, further verifying the effectiveness of the dual radar data.

We now discuss the efficient features included in the spectrograms. Because the classification accuracy of the eight behaviors with the RF and SVM methods was above 60%, we consider the feature parameters selected for these methods. Table 6 shows the selected features for the RF and SVM methods using the filter method. The acceleration and jerk parameters were selected for all radar cases. These results indicate that the detailed motion parameters of acceleration and jerk were more effective than the velocity parameters obtained directly from the Doppler radar measurements. However, the LSTM and CNN methods performed better than the RF and SVM methods using the motion parameters.


Table 6. Selected feature parameters.

Radar: Selected Parameters

Ceiling radar: v_c-u-std, a_c-u-max, a_c-u-std, a_c-m-max, j_c-u-mean, j_c-u-mean, j_c-u-std, j_c-m-max

Wall radar: v_w-m-max, a_w-u-min, a_w-u-mean, a_w-u-std, a_w-m-max, j_w-m-max, j_w-m-min, j_w-l-max

Dual radar: v_w-u-min, v_w-u-mean, v_w-u-std, a_c-u-max, a_w-u-min, a_w-u-mean, a_w-u-std, a_w-m-max, a_w-m-min, a_w-m-mean, a_w-l-min, j_c-u-std, j_w-m-max, j_w-[...]-max, j_c-l-std

v, a, and j denote velocity, acceleration, and jerk, respectively. The subscript "X-Y-Z" indicates radar type, envelope, and operation: X can be the ceiling (c) or wall (w) radar; Y can be u, l, or m, indicating the upper, lower, or power-weighted mean velocity envelope, respectively (i.e., the parameter was extracted from v_u(t), v_l(t), or v_m(t)); Z indicates the calculation applied to the envelope (std is standard deviation).
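The filter-method selection that produced Table 6 (rank each parameter's relevance and keep the top 20%) can be sketched as follows; the one-way ANOVA F-statistic used here is one common filter relevance measure and is an assumption, since [24] may define relevance differently.

```python
import numpy as np

def filter_select(X, y, keep_frac=0.20):
    """Rank features by a one-way ANOVA F-statistic and keep the top 20%,
    mirroring the filter-method rule described in the text.

    X: (n_samples, n_features) parameter matrix; y: class labels.
    """
    classes = np.unique(y)
    n, k = len(y), len(classes)
    grand = X.mean(axis=0)
    # Between-class and within-class sums of squares, per feature.
    ss_between = sum(
        (y == c).sum() * (X[y == c].mean(axis=0) - grand) ** 2 for c in classes)
    ss_within = sum(((X[y == c] - X[y == c].mean(axis=0)) ** 2).sum(axis=0)
                    for c in classes)
    f_score = (ss_between / (k - 1)) / (ss_within / (n - k) + 1e-12)
    n_keep = max(1, int(keep_frac * X.shape[1]))
    return np.argsort(f_score)[::-1][:n_keep]   # indices of kept features
```

Applied to the 72 dual-radar candidate parameters, keeping the top 20% yields roughly the fourteen or so parameters listed in the dual-radar row of Table 6.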

We conclude that deep learning can grasp the detailed information in the spectrograms corresponding to the higher-order derivative parameters. In addition, because the CNN method had better accuracy than the LSTM method, detailed motion information was obtained both from the main components extracted as the spectrogram envelopes and from other components corresponding to the micromotions of the various body parts.

The findings regarding the efficient features for the classification of human behaviors and falls in restrooms are summarized as follows:


• The wall radar, which measured motion in the horizontal direction, was more effective than the ceiling radar, which measured motion in the vertical direction.

• The accurately classified classes differed between the two radars. Hence, a fusion of the two radars was effective.

• The proposed method effectively used the detailed higher-order derivative parameters of acceleration and jerk.

• Detailed motion information was diffused across the whole of the spectrograms and was not limited to the main components; it was efficiently extracted via the CNN.

However, a limitation of this study is that the concrete clarification of the efficient parameters and/or factors for our classification problem was difficult. To achieve this, we must find features that indicate a clear divergence of the assumed behaviors in restrooms based on various other approaches (e.g., principal component analysis, the application of other classification algorithms and a comprehensive comparison with the results of this study, and data acquisition from a larger number of participants).


Comparison with Conventional Studies

In this section, we compare our method with conventional remote sensor-based monitoring methods for restrooms. Table 7 outlines the comparison of the experimental studies aimed at detecting abnormal, dangerous behaviors in restrooms. The proposed method achieved the best performance in terms of the classification accuracy, number of classified behaviors, number of participants, and detection accuracy of human falls. Due to privacy issues, the number of studies on restroom monitoring using cameras is quite limited. Reference [6] is one of the few studies reporting camera-based monitoring of restrooms to detect dangerous situations to protect the elderly. However, because sensors without privacy issues are more suitable for restroom monitoring, approaches using infrared-based thermal sensors and radars have recently been studied. Most of these studies classify situations only as normal or dangerous behaviors [9,10]. Although thermal sensors show a sufficiently accurate classification, detailed behaviors were not classified because these sensors cannot detect motion information directly. By contrast, radar techniques can acquire motion velocity information and classify it into multiple behaviors, as carried out in [14,15]. However, the accuracy achieved in [15] was insufficient because only simple feature parameters related to distance and signal information and the RF method were used. Therefore, our previous study [16] proposed the LSTM method that used the rich velocity information obtained via spectrogram envelopes. While both our previous research and the present study classify the behaviors into eight categories, the proposed method was carried out using CNN and showed higher classification accuracy, including 100% fall detection. In addition, the present study used a relatively large dataset generated from a larger number of participants, and the spectrogram images utilized the rich velocity information included in the Doppler radar signals.


Conclusions

This study used Doppler radar technology to classify behaviors and falls in a restroom based on machine learning approaches. The CNN, LSTM, SVM, and RF methods were applied and compared to determine the most efficient method.