Symmetry
  • Article
  • Open Access

5 May 2020

Fall Detection Based on Key Points of Human-Skeleton Using OpenPose

Faculty of Engineering, China University of Geosciences (Wuhan), Wuhan 430074, China
Author to whom correspondence should be addressed.
This article belongs to the Special Issue Deep Learning-Based Biometric Technologies II

Abstract

According to statistics, falls are the primary cause of injury or death for the elderly over 65 years old, and about 30% of this age group falls each year. With elderly fall accidents increasing every year, a fast and effective fall detection method that can bring help to fallen elderly people is urgently needed. A fall occurs when the center of gravity of the human body becomes unstable or its symmetry is broken, so that the body cannot keep its balance. To address this problem, in this paper we propose an approach for recognition of accidental falls based on the symmetry principle. We extract the skeleton information of the human body with OpenPose and identify falls through three critical parameters: the speed of descent at the center of the hip joint, the angle between the centerline of the human body and the ground, and the width-to-height ratio of the human body's external rectangle. Unlike previous studies that investigated only the falling behavior itself, we also consider whether people stand up after a fall. The method recognizes falling behavior with a 97% success rate.

1. Introduction

The decline of the birth rate and the prolongation of life span lead to the aging of the population, which has become a worldwide problem [1]. According to the research [2], the elderly population will increase dramatically in the future, and the proportion of the elderly in the world population will continue to grow, expected to reach 28% by 2050. Aging is accompanied by a decline in human function, which increases the risk of falls. According to statistics, falls are the primary cause of injury or death for the elderly over 65 years old, and about 30% of this age group falls every year [3]. In 2015, there were 29 million elderly falls in the United States, of which 37.5% required medical treatment or restricted activity for one day or more, and about 33,000 people died [4]. The most common immediate consequences of falls are fractures and other long-term ailments, which can lead to disability, loss of independence, and psychological fear of falling again [5]. Falls not only cause the elderly moderate or severe injuries, but also bring a mental burden and economic pressure to them and their relatives [6]. Faced with this situation, it is particularly important to detect falls of the elderly quickly and effectively and to provide emergency assistance. In short, it is vital that those who fall and cannot call for help are found and treated in time.
This paper proposes a new fall detection method. Every frame captured by the surveillance camera is processed with the OpenPose skeleton extraction algorithm to obtain the skeleton data of the people on screen; each joint is represented by its horizontal and vertical coordinates in a plane coordinate system. The speed of descent at the center of the hip joint, the angle between the centerline of the human body and the ground, and the width-to-height ratio of the human body's external rectangle then serve as the decision conditions for identifying falling behavior and for determining whether the person can stand up on his/her own after a fall.
The remainder of this paper is organized as follows: Section 2 reviews the current methods of skeleton extraction and fall detection. Section 3 details the approach (e.g., skeleton extraction and behavior recognition). Section 4 presents the results of an experiment to validate the effectiveness and feasibility of our proposed approach. Section 5 discusses the limitations of the study and potential future work.

3. Methods

Our proposed approach consists of five key steps: (1) obtain the skeleton information of the human body with OpenPose; (2) decision condition one (the speed of descent at the center of the hip joint); (3) decision condition two (the angle between the centerline of the human body and the ground); (4) decision condition three (the width-to-height ratio of the human body's external rectangle); and (5) determine whether the person can stand up after a fall. The workflow of our proposed approach is shown in Figure 2.
Figure 2. The workflow of our proposed approach.

3.1. OpenPose Gets the Skeleton Information of the Human Body

The OpenPose human gesture recognition project is an open-source library developed by Carnegie Mellon University (CMU), based on convolutional neural networks, supervised learning, and the Caffe (Convolutional Architecture for Fast Feature Embedding) framework [43]. In 2017, CMU researchers released the source code of the OpenPose human skeleton recognition system, enabling real-time tracking of targets in surveillance video. It can capture COCO (Common Objects in Context) human skeleton information in color video and provide joint information for the scene. The OpenPose key-node recognition system detects the skeleton information of multiple people in real time. It adopts a bottom-up human pose estimation algorithm that first detects candidate key points of the human body and then uses part affinity fields to associate them into per-person heat maps of key nodes. OpenPose can estimate body movement, facial expression, finger movement, and other postures. It works for both single-person and multi-person scenes with excellent robustness.
As shown in Figure 3, the frames captured by the surveillance camera are processed with OpenPose to obtain the positions of the human key nodes. The surveillance video is divided into a series of frames, each showing the skeleton of the person in it.
Figure 3. OpenPose gets the skeleton information of the human body.
As shown in Table 2, the position of each joint point is represented by its horizontal and vertical coordinate values, together with a confidence score. For some joints, the accuracy of the coordinate position is not ideal. This problem is mainly due to limitations of the OpenPose algorithm itself, but the deviation of a few key points has little effect on the recognition of the whole fall action. The specific joints corresponding to each joint number in the table are shown in Figure 4.
Table 2. Joint point data obtained through OpenPose.
Figure 4. Human node model diagram.
For convenience of representation, $S = \{s_0, s_1, \dots, s_{13}\}$ denotes the set of joint positions. We define the Joint Coordinates (JC) as the position of joint $j$ at time $t$: $s_j(t) = (x_t^j, y_t^j)$, $j \in \{0, 1, \dots, 13\}$.
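For concreteness, the following is a minimal Python sketch of how the JC representation above can be populated from OpenPose's per-frame JSON output (produced with its documented --write_json flag). Mapping the paper's 14-point numbering (Figure 4) onto OpenPose's native keypoint order is an assumption here and would need to be checked against the model actually used.

```python
import json

def load_joints(json_path, num_joints=14):
    """Parse one frame of OpenPose JSON output into {j: (x, y, confidence)}.

    Assumes the frame was produced with OpenPose's --write_json flag, whose
    "pose_keypoints_2d" field is a flat list of (x, y, confidence) triples.
    Mapping the paper's 14-point numbering (Figure 4) onto the first 14
    native keypoints is an assumption; verify it against the model used.
    """
    with open(json_path) as f:
        frame = json.load(f)
    if not frame["people"]:
        return None  # no person detected in this frame
    kp = frame["people"][0]["pose_keypoints_2d"]
    return {j: (kp[3 * j], kp[3 * j + 1], kp[3 * j + 2])
            for j in range(num_joints)}
```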

3.2. Decision Condition One (the Speed of Descent at the Center of the Hip Joint)

As shown in Figure 5, during a sudden fall the center of gravity of the human body changes in the vertical direction. The central point of the human hip joint can represent the center of gravity of the body and reflect this feature. By processing the joint data obtained from OpenPose, the vertical coordinate of the hip-joint center is obtained for each frame. Because the transition from standing posture to falling posture is very short, detection is performed once every five frames, i.e., at intervals of 0.25 s. The coordinates of the hips are $s_8(t) = (x_t^8, y_t^8)$ and $s_{11}(t) = (x_t^{11}, y_t^{11})$. The y-coordinate of the center of the human hip joint at time $t_1$ is $y_{t_1} = \frac{y_{t_1}^8 + y_{t_1}^{11}}{2}$, and at time $t_2$ it is $y_{t_2} = \frac{y_{t_2}^8 + y_{t_2}^{11}}{2}$. From these, the descent velocity of the hip-joint center is obtained:
$$\Delta t = t_2 - t_1$$
$$v = \frac{|y_{t_2} - y_{t_1}|}{\Delta t}$$
When $v$ is greater than or equal to the critical speed $\bar{v}$, the fall feature is considered to be detected. Based on the experimental results, this paper chooses 0.009 m/s as the threshold for the falling speed of the hip-joint center.
$$M_1 = \begin{cases} 0, & v < \bar{v} \\ 1, & v \ge \bar{v} \end{cases}$$
When $v \ge \bar{v}$, $M_1 = 1$, and decision condition one is considered satisfied.
Figure 5. The falling process.
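A minimal sketch of decision condition one, reusing the joint representation from Section 3.1. The five-frame sampling interval and the 0.009 m/s threshold come from the text; the pixel-to-meter scale factor is a hypothetical calibration parameter the paper does not specify.

```python
V_BAR = 0.009          # critical descent speed chosen in the text (m/s)
FRAME_INTERVAL = 0.25  # five frames between samples, per Section 3.2 (s)

def hip_center_y(joints):
    """Vertical coordinate of the hip-joint center: (y^8 + y^11) / 2."""
    return (joints[8][1] + joints[11][1]) / 2.0

def condition_one(joints_t1, joints_t2, scale=1.0):
    """M1 = 1 when the hip center descends at v >= v_bar between samples.

    `scale` is a hypothetical pixel-to-meter calibration factor; the paper
    does not state how image coordinates are converted to metric units.
    """
    v = abs(hip_center_y(joints_t2) - hip_center_y(joints_t1)) * scale / FRAME_INTERVAL
    return 1 if v >= V_BAR else 0
```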

3.3. Decision Condition Two (the Angle between the Centerline of the Human and the Ground)

During a fall, the most obvious feature of the human body is the body tilt, whose degree continues to increase. To capture this continuous tilting, this paper defines a human centerline $L$: let $\bar{s}$ be the midpoint of joints $s_{10}$ and $s_{13}$; the line connecting $\bar{s}$ and joint $s_0$ is the centerline $L$ of the human body.
As shown in Figure 6, $\theta$ is the angle between the centerline of the human body and the ground. Through OpenPose, the coordinates of joints 0, 10, and 13 are $s_0(t) = (x_t^0, y_t^0)$, $s_{10}(t) = (x_t^{10}, y_t^{10})$, and $s_{13}(t) = (x_t^{13}, y_t^{13})$, respectively, so $\bar{s}(t) = \frac{s_{10}(t) + s_{13}(t)}{2} = (\bar{x}_t, \bar{y}_t)$. At time $t$, the angle between the centerline of the human body and the ground is $$\theta_t = \arctan\left|\frac{y_t^0 - \bar{y}_t}{x_t^0 - \bar{x}_t}\right|$$
$$M_2 = \begin{cases} 0, & \theta \ge \theta_0 \\ 1, & \theta < \theta_0 \end{cases}$$
When $\theta < \theta_0$ ($\theta_0 = 45°$), $M_2 = 1$, and decision condition two for the occurrence of a fall event is considered satisfied.
Figure 6. The angle between the centerline of the body and the ground.
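A corresponding sketch of decision condition two under the same representation; the centerline endpoints (joint 0 and the midpoint of joints 10 and 13) follow the paper's definition.

```python
import math

THETA_0 = 45.0  # critical inclination angle theta_0 (degrees)

def condition_two(joints):
    """M2 = 1 when the angle between the body centerline and the ground
    drops below theta_0. The centerline runs from joint 0 to the midpoint
    of joints 10 and 13, following the paper's definition."""
    x0, y0 = joints[0][0], joints[0][1]
    xm = (joints[10][0] + joints[13][0]) / 2.0
    ym = (joints[10][1] + joints[13][1]) / 2.0
    if x0 == xm:
        return 0  # perfectly vertical body, theta = 90 degrees
    theta = math.degrees(math.atan(abs((y0 - ym) / (x0 - xm))))
    return 1 if theta < THETA_0 else 0
```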

3.4. Decision Condition Three (the Width-to-Height Ratio of the Human Body External Rectangle)

When a fall occurs, the most intuitive feature is the change in the contour of the body. If we simply compared the width and height of the moving target, both would change with the target's distance from the camera, whereas their ratio is unaffected by that distance. We therefore detect falling behavior through the change in the width-to-height ratio of the target's contour rectangle.
As shown in Figure 7, the ratio of the width to the height of the external rectangle of the human body is $P = \text{Width}/\text{Height}$. When the human body falls, the external rectangle of the target changes as well; the most significant manifestation is the change of the width-to-height ratio.
$$M_3 = \begin{cases} 1, & P \ge T \\ 0, & P < T \end{cases}$$
where $T$ is the threshold. In practice, when a human body walks normally, the width-to-height ratio $P$ is less than 1, while for a fallen body it is greater than 1. When $P \ge T$, $M_3 = 1$, and decision condition three for the occurrence of a fall event is considered satisfied.
Figure 7. Human body external rectangle.
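A sketch of decision condition three. $T = 1$ is an inference from the discussion above (normal walking gives $P < 1$, falling gives $P > 1$); the paper does not state an exact value, and a bounding box over the detected joints stands in for the body contour rectangle.

```python
T = 1.0  # aspect-ratio threshold; T = 1 is inferred from the discussion above

def condition_three(joints):
    """M3 = 1 when the width-to-height ratio P of the bounding rectangle
    meets or exceeds T. The box over the detected joints stands in for
    the body contour rectangle of Figure 7."""
    xs = [p[0] for p in joints.values()]
    ys = [p[1] for p in joints.values()]
    width, height = max(xs) - min(xs), max(ys) - min(ys)
    if height == 0:
        return 0
    return 1 if width / height >= T else 0
```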

3.5. Determine Whether a Person Can Stand after a Fall

If a person can stand up on his own within a certain period after falling, no alarm is required. Most existing fall detection work focuses on analyzing the fall process and rarely considers that people may stand up on their own shortly after falling. As shown in Figure 8, standing up after a fall can be regarded as the inverse process of a fall; the only difference is that the whole process is slower. According to the analysis in this paper, if, within a period of time after a fall, the width-to-height ratio of the external rectangle of the human body is less than 1 and the inclination angle of the centerline is greater than 45°, it can be concluded that the person has stood up. The point of judging whether people can stand up on their own after a fall is to reduce unnecessary alarms, because some falls do not cause serious injury to the human body.
Figure 8. The process of standing up after a fall.
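Putting the pieces together, below is a sketch of the overall decision logic, reusing the condition functions above. It assumes the three conditions are combined conjunctively, which the workflow in Figure 2 suggests but the text does not state explicitly, and the recovery window length is a hypothetical parameter (the paper says only "a period of time").

```python
STAND_UP_WINDOW = 40  # frames to wait for self-recovery; hypothetical value

def detect_fall_and_recovery(frames):
    """Scan per-frame joint dicts (None where no person was detected) and
    return 'alarm', 'recovered', or 'no_fall'.

    Assumes the three conditions are combined conjunctively, which the
    workflow in Figure 2 suggests but the text does not state explicitly;
    samples are taken every five frames as in Section 3.2.
    """
    step = 5
    for i in range(step, len(frames), step):
        prev, cur = frames[i - step], frames[i]
        if prev is None or cur is None:
            continue
        if condition_one(prev, cur) and condition_two(cur) and condition_three(cur):
            # Stand-up check: inclination back above 45 degrees (M2 = 0)
            # and width-to-height ratio back below 1 (M3 = 0).
            for later in frames[i + 1:i + 1 + STAND_UP_WINDOW]:
                if later is not None and not condition_two(later) and not condition_three(later):
                    return "recovered"  # stood up on their own, no alarm
            return "alarm"
    return "no_fall"
```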

4. Experimental Results

4.1. Experiment Data and Test

To verify the effectiveness of the proposed method, fall events are tested. Because the experiment carries certain risks, the laboratory is chosen as the experimental site. We randomly selected 10 experimenters, who performed falls or non-falls during the test. As shown in Table 3, the actions collected in the experiment are divided into three categories: falling actions (fall, stand up after a fall), fall-like actions (squat, stoop), and daily actions (walk, sit down). A total of 100 actions are collected, including 60 falling actions and 40 non-falling actions, each lasting about 5–11 s. From each video, 100–350 valid video frames can be extracted as samples.
Table 3. Collection action classification.
To ensure the universality of the system, 10 experimental subjects of different body types are randomly selected. Their height and weight data are shown in Table 4. In the experiment, each person performs 10 actions, including six falls and four non-falls, for a total of 100 action samples.
Table 4. The height and weight data of 10 experimenters.
In fall testing, there are four possible cases: in the first case, a fall event occurs and the algorithm correctly detects it; in the second, no fall happens but the algorithm misidentifies one; in the third, a fall occurs but the algorithm judges that none did; in the fourth, no fall happens and the algorithm detects none. These four cases are defined as TP, FP, FN, and TN, respectively.
  • True positive (TP): a fall occurs, the device detects it.
  • False positive (FP): the device announces a fall, but it did not occur.
  • True negative (TN): a normal (no fall) movement is performed, the device does not declare a fall.
  • False negative (FN): a fall occurs but the device does not detect it.
To evaluate the response to these four situations, three criteria are used:
Sensitivity is the capacity to detect a fall:
$$\mathrm{Sensitivity} = \frac{TP}{TP + FN}$$
Specificity is the capacity to detect only actual falls (i.e., to avoid false alarms):
$$\mathrm{Specificity} = \frac{TN}{TN + FP}$$
Accuracy is the capacity to correctly detect fall and no fall:
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN}$$
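To make these criteria concrete, below is a small Python sketch that computes them. The usage counts are not taken from a table directly but inferred from the description in Section 4.2 (60 falls with one miss, 40 non-falls with two false alarms); they reproduce the reported 98.3%, 95%, and 97%.

```python
def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

def accuracy(tp, fp, tn, fn):
    return (tp + tn) / (tp + fp + tn + fn)

# Counts inferred from Section 4.2: of 60 falls one is missed (TP = 59,
# FN = 1); of 40 non-falls two stoops are flagged (TN = 38, FP = 2).
tp, fn, tn, fp = 59, 1, 38, 2
print(f"sensitivity = {sensitivity(tp, fn):.1%}")       # 98.3%
print(f"specificity = {specificity(tn, fp):.1%}")       # 95.0%
print(f"accuracy    = {accuracy(tp, fp, tn, fn):.1%}")  # 97.0%
```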

4.2. Analysis of the Experimental Results

Before the final experimental judgment, we analyze the feasibility of the three conditions and the final conditions of standing up after falling.
When detecting the descent speed of the hip-joint center point, the speed change of each action is shown in Figure 9 below. We can see that only the speeds of falling and squatting exceed the critical value $\bar{v}$ (0.009 m/s, chosen in Section 3.2). In other words, only falling and squatting down satisfy decision condition one (the speed of descent at the center of the hip joint).
Figure 9. Speed change of each action.
As shown in Figure 10, when walking and sitting down, the inclination angle of the human body fluctuates only slightly; when squatting down, it fluctuates but the body as a whole remains relatively stable; only when stooping and falling does the inclination angle fluctuate greatly and drop below the critical angle of 45°. We can therefore exclude walking, sitting down, and squatting by decision condition two (the angle between the centerline of the body and the ground).
Figure 10. The change of inclination angle of each action.
As shown in Figure 11, among all the actions, only the falling action gives a width-to-height ratio of the external rectangle of the human body greater than 1. By decision condition three (the width-to-height ratio of the human body's external rectangle), we find that only the falling action meets the requirement.
Figure 11. The change of the aspect ratio of the outer rectangle for each action.
Figure 12 shows that the common features of a falling action are that the inclination angle of the human body falls below 45° and the aspect ratio of the external rectangle of the human body exceeds 1 at some moment. For the action of standing up after a fall, the fall itself can be judged by the fall decision conditions; during the subsequent rising process, the inclination angle of the human body gradually increases above 45°, and the width-to-height ratio of the external rectangle returns below 1.
Figure 12. The characteristic of standing up after a fall.
Through the analysis of the 100 experimental actions, the specific results are shown in Table 5 below. In the table, ✓ indicates that the action is correctly identified and ✕ that it is incorrectly identified. The stooping actions of experimenters No. 1 and No. 3 among the non-falling actions are wrongly identified as falls, and only one of the falling actions is wrongly identified as non-falling.
Table 5. Test results of 100 actions.
According to the calculation formulas in Section 4.1, the sensitivity, specificity, and accuracy are 98.3%, 95%, and 97%, as shown in Table 6. The erroneous discriminations have the following causes: (a) missing joint points in skeleton estimation result in incomplete data, which affects the final recognition; (b) the three thresholds selected in the experiment are not necessarily optimal; (c) during the experiment, owing to the experimenters' self-protection instincts, the recorded falls still differ from real falls.
Table 6. The experimental calculation results.

5. Conclusions and Future Work

At present, because there are no suitable public fall datasets, we cannot directly compare our results with previous results in detail. As shown in Table 7, we list the algorithms, classifications, features, and final accuracy of other fall detection technologies. Droghini et al. [30] detected falls by capturing sound waves transmitted through the floor. The accuracy of their experimental results is high, but the experiment uses a puppet to imitate falls, which still differs considerably from a real human fall; in addition, the detection method is highly susceptible to interference from external noise, and the usable environment is limited. Shahzad et al. [25] make good use of the sensors in smartphones and reduce the power consumption of the algorithm, but the phone can still cause false positives and must be carried by the user. Shiba et al. [44] proposed a fall recognition system based on a microwave Doppler sensor, which can distinguish falls from fall-like movements accurately without intruding on the human body; its only disadvantage is that the detection range is too small. De Quadros et al. [40] fuse multiple signals with threshold-based and machine learning methods to identify falls, which undoubtedly improves the reliability of the recognition results; however, the user needs to wear the device for a long time, and the endurance of the device must also be considered. OpenPose-based methods [20,21] can identify falls directly from the images captured by a camera, which is convenient and fast and has broad prospects among video-based methods. Compared with other approaches, vision-based methods are more convenient: OpenPose obtains the skeleton information of the human body conveniently and accurately. To some degree, our method not only has high accuracy but is also simple and low in cost.
Table 7. Comparison of our proposed algorithm with other fall detection approaches.
According to statistics, the elderly population will continue to increase in the future, and falling is one of the major public health problems of an aging society. Fall detection requires identifying the characteristic features of the falling motion. In this paper, we introduce a novel method for this problem. The OpenPose algorithm processes video captured by surveillance cameras to obtain the coordinates of human joint points. The falling motion is then recognized through three conditions: the speed of descent at the center of the hip joint, the angle between the centerline of the human body and the ground, and the width-to-height ratio of the human body's external rectangle. Building on fall recognition, we also consider people standing up after a fall, treating the stand-up process as the inverse of falling. The method is verified by experiments and achieves good results: the sensitivity is 98.3%, the specificity is 95%, and the accuracy is 97%.
With the growing ubiquity of cameras and the improving quality of captured images, vision-based fall detection has broad prospects. In the future, we can carry out the following work:
(a) The environment of daily life is complex, and there may be situations in which people's actions cannot be completely captured by surveillance. In the future, we can study the estimation and prediction of people's behavior and actions under partial occlusion.
(b) In this paper, actions are identified from the side view only; other directions are not considered. Future research can start with multi-directional recognition and then comprehensively judge whether a fall occurred.
(c) Building a fall alarm system: in the event of a fall, the scene, time, location, and other detailed information should be promptly sent to rescuers to speed up the emergency response.

Author Contributions

W.C. contributed to the conception of the study; Z.J. performed the experiment; W.C. and Z.J. performed the data analyses and wrote the manuscript; H.G. and X.N. helped perform the analysis with constructive discussions. All authors read and approved the manuscript.

Funding

This research was funded by the Open Fund of Teaching Laboratory of China University of Geosciences (Wuhan) grant number SKJ2019095.

Acknowledgments

Thanks to everyone who helped with the experiment. We are also very grateful to the editors and anonymous reviewers.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. WHO. Number of People over 60 Years Set to Double by 2050; Major Societal Changes Required. Available online: https://www.who.int/mediacentre/news/releases/2015/older-persons-day/en/ (accessed on 17 March 2020).
  2. Lapierre, N.; Neubauer, N.; Miguel-Cruz, A.; Rincon, A.R.; Liu, L.; Rousseau, J. The state of knowledge on technologies and their use for fall detection: A scoping review. Int. J. Med. Inform. 2018, 111, 58–71. [Google Scholar] [CrossRef] [PubMed]
  3. Christiansen, T.L.; Lipsitz, S.; Scanlan, M.; Yu, S.P.; Lindros, M.E.; Leung, W.Y.; Adelman, J.; Bates, D.W.; Dykes, P.C. Patient activation related to fall prevention: A multisite study. Jt. Comm. J. Qual. Patient Saf. 2020, 46, 129–135. [Google Scholar] [CrossRef] [PubMed]
  4. Grossman, D.C.; Curry, S.J.; Owens, D.K.; Barry, M.J.; Caughey, A.B.; Davidson, K.W.; Doubeni, C.A.; Epling, J.W.; Kemper, A.R.; Krist, A.H. Interventions to prevent falls in community-dwelling older adults: US Preventive Services Task Force recommendation statement. JAMA 2018, 319, 1696–1704. [Google Scholar] [PubMed]
  5. Gates, S.; Fisher, J.; Cooke, M.; Carter, Y.; Lamb, S. Multifactorial assessment and targeted intervention for preventing falls and injuries among older people in community and emergency care settings: Systematic review and meta-analysis. BMJ 2008, 336, 130–133. [Google Scholar] [CrossRef] [PubMed]
  6. Faes, M.C.; Reelick, M.F.; Joosten-Weyn Banningh, L.W.; Gier, M.D.; Esselink, R.A.; Olde Rikkert, M.G. Qualitative study on the impact of falling in frail older persons and family caregivers: Foundations for an intervention to prevent falls. Aging Ment. Health 2010, 14, 834–842. [Google Scholar] [CrossRef] [PubMed]
  7. Johansson, G. Visual perception of biological motion and a model for its analysis. Percept. Psychophys. 1973, 14, 201–211. [Google Scholar] [CrossRef]
  8. Chen, T.; Li, Q.; Fu, P.; Yang, J.; Xu, C.; Cong, G.; Li, G. Public opinion polarization by individual revenue from the social preference theory. Int. J. Environ. Res. Public Health 2020, 17, 946. [Google Scholar] [CrossRef]
  9. Chen, T.; Li, Q.; Yang, J.; Cong, G.; Li, G. Modeling of the public opinion polarization process with the considerations of individual heterogeneity and dynamic conformity. Mathematics 2019, 7, 917. [Google Scholar] [CrossRef]
  10. Chen, T.; Wu, S.; Yang, J.; Cong, G. Risk Propagation Model and Its Simulation of Emergency Logistics Network Based on Material Reliability. Int. J. Environ. Res. Public Health 2019, 16, 4677. [Google Scholar] [CrossRef]
  11. Chen, T.; Shi, J.; Yang, J.; Li, G. Enhancing network cluster synchronization capability based on artificial immune algorithm. Hum. Cent. Comput. Inf. Sci. 2019, 9, 3. [Google Scholar] [CrossRef]
  12. Jiang, C.; Chen, T.; Li, R.; Li, L.; Li, G.; Xu, C.; Li, S. Construction of extended ant colony labor division model for traffic signal timing and its application in mixed traffic flow model of single intersection. Concurr. Comput. Pract. Exp. 2020, 32, e5592. [Google Scholar] [CrossRef]
  13. Chen, T.; Wu, S.; Yang, J.; Cong, G.; Li, G. Modeling of emergency supply scheduling problem based on reliability and its solution algorithm under variable road network after sudden-onset disasters. Complexity 2020, 2020. [Google Scholar] [CrossRef]
  14. Ye, Q.; Dong, J.; Zhang, Y. 3D Human behavior recognition based on binocular vision and face–hand feature. Optik 2015, 126, 4712–4717. [Google Scholar] [CrossRef]
  15. Alagoz, B.B. Obtaining depth maps from color images by region based stereo matching algorithms. arXiv 2008, arXiv:0812.1340. [Google Scholar]
  16. Foix, S.; Alenya, G.; Torras, C. Lock-in time-of-flight (ToF) cameras: A survey. IEEE Sens. J. 2011, 11, 1917–1926. [Google Scholar] [CrossRef]
  17. Zhang, Z. Microsoft kinect sensor and its effect. IEEE Multimed. 2012, 19, 4–10. [Google Scholar] [CrossRef]
  18. Newell, A.; Yang, K.; Deng, J. Stacked hourglass networks for human pose estimation. In Proceedings of the Computer Vision—14th European Conference, Amsterdam, The Netherlands, 18 October 2016; pp. 483–499. [Google Scholar]
  19. Insafutdinov, E.; Pishchulin, L.; Andres, B.; Andriluka, M.; Schiele, B. Deepercut: A deeper, stronger, and faster multi-person pose estimation model. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 34–50. [Google Scholar]
  20. Jeong, S.; Kang, S.; Chun, I. Human-skeleton based Fall-Detection Method using LSTM for Manufacturing Industries. In Proceedings of the 2019 34th International Technical Conference on Circuits/Systems, Computers and Communications (ITC-CSCC), Jeju Shinhwa World, Korea, 23–26 June 2019; pp. 1–4. [Google Scholar]
  21. Xu, Q.; Huang, G.; Yu, M.; Guo, Y. Fall prediction based on key points of human bones. Phys. A Stat. Mech. Its Appl. 2020, 540, 123205. [Google Scholar] [CrossRef]
  22. Koshmak, G.; Loutfi, A.; Linden, M. Challenges and issues in multisensor fusion approach for fall detection. J. Sens. 2016, 2016. [Google Scholar] [CrossRef]
  23. Mubashir, M.; Shao, L.; Seed, L. A survey on fall detection: Principles and approaches. Neurocomputing 2013, 100, 144–152. [Google Scholar] [CrossRef]
  24. Ren, L.; Peng, Y. Research of fall detection and fall prevention technologies: A systematic review. IEEE Access 2019, 7, 77702–77722. [Google Scholar] [CrossRef]
  25. Shahzad, A.; Kim, K. FallDroid: An automated smart-phone-based fall detection system using multiple kernel learning. IEEE Trans. Ind. Inform. 2018, 15, 35–44. [Google Scholar] [CrossRef]
  26. Fino, P.C.; Frames, C.W.; Lockhart, T.E. Classifying step and spin turns using wireless gyroscopes and implications for fall risk assessments. Sensors 2015, 15, 10676–10685. [Google Scholar] [CrossRef] [PubMed]
  27. Light, J.; Cha, S.; Chowdhury, M. Optimizing pressure sensor array data for a smart-shoe fall monitoring system. In Proceedings of the 2015 IEEE SENSORS, Busan, Korea, 1–4 November 2015; pp. 1–4. [Google Scholar]
  28. Han, H.; Ma, X.; Oyama, K. Flexible detection of fall events using bidirectional EMG sensor. Stud. Health Technol. Inform. 2017, 245, 1225. [Google Scholar] [PubMed]
  29. Sun, J.; Wang, Z.; Pei, B.; Tao, S.; Chen, L. Fall detection using plantar inclinometer sensor. In Proceedings of the 2015 IEEE 12th Intl Conf on Ubiquitous Intelligence and Computing and 2015 IEEE 12th Intl Conf on Autonomic and Trusted Computing and 2015 IEEE 15th Intl Conf on Scalable Computing and Communications and Its Associated Workshops (UIC-ATC-ScalCom), Beijing, China, 10–14 August 2015; pp. 1692–1697. [Google Scholar]
  30. Droghini, D.; Principi, E.; Squartini, S.; Olivetti, P.; Piazza, F. Human fall detection by using an innovative floor acoustic sensor. In Multidisciplinary Approaches to Neural Computing; Springer: Berlin/Heidelberg, Germany, 2018; pp. 97–107. [Google Scholar]
  31. Chaccour, K.; Darazi, R.; el Hassans, A.H.; Andres, E. Smart carpet using differential piezoresistive pressure sensors for elderly fall detection. In Proceedings of the 2015 IEEE 11th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), Abu Dhabi, United Arab Emirates, 19–21 October 2015; pp. 225–229. [Google Scholar]
  32. Fan, X.; Zhang, H.; Leung, C.; Shen, Z. Robust unobtrusive fall detection using infrared array sensors. In Proceedings of the 2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), Daegu, Korea, 16–18 November 2017; pp. 194–199. [Google Scholar]
  33. Fan, K.; Wang, P.; Hu, Y.; Dou, B. Fall detection via human posture representation and support vector machine. Int. J. Distrib. Sens. Netw. 2017, 13, 1550147717707418. [Google Scholar] [CrossRef]
  34. Liu, Y.; Wang, N.; Lv, C.; Cui, J. Human body fall detection based on the Kinect sensor. In Proceedings of the 2015 8th International Congress on Image and Signal Processing (CISP), Shenyang, China, 14–16 October 2015; pp. 367–371. [Google Scholar]
  35. Kong, X.; Meng, L.; Tomiyama, H. Fall detection for elderly persons using a depth camera. In Proceedings of the 2017 International Conference on Advanced Mechatronic Systems (ICAMechS), Xiamen, China, 6–9 December 2017; pp. 269–273. [Google Scholar]
  36. Rafferty, J.; Synnott, J.; Nugent, C.; Morrison, G.; Tamburini, E. Fall detection through thermal vision sensing. In Ubiquitous Computing and Ambient Intelligence; Springer: Berlin/Heidelberg, Germany, 2016; pp. 84–90. [Google Scholar]
  37. Tang, Y.; Peng, Z.; Ran, L.; Li, C. iPrevent: A novel wearable radio frequency range detector for fall prevention. In Proceedings of the 2016 IEEE International Symposium on Radio-Frequency Integration Technology (RFIT), Taipei, Taiwan, 24–26 August 2016; pp. 1–3. [Google Scholar]
  38. Wang, H.; Zhang, D.; Wang, Y.; Ma, J.; Wang, Y.; Li, S. RT-Fall: A real-time and contactless fall detection system with commodity WiFi devices. IEEE Trans. Mob. Comput. 2016, 16, 511–526. [Google Scholar] [CrossRef]
  39. Lu, C.; Huang, J.; Lan, Z.; Wang, Q. Bed exiting monitoring system with fall detection for the elderly living alone. In Proceedings of the 2016 International Conference on Advanced Robotics and Mechatronics (ICARM), Macau, China, 18–20 August 2016; pp. 59–64. [Google Scholar]
  40. De Quadros, T.; Lazzaretti, A.E.; Schneider, F.K. A movement decomposition and machine learning-based fall detection system using wrist wearable device. IEEE Sens. J. 2018, 18, 5082–5089. [Google Scholar] [CrossRef]
  41. Kepski, M.; Kwolek, B. Event-driven system for fall detection using body-worn accelerometer and depth sensor. IET Comput. Vis. 2017, 12, 48–58. [Google Scholar] [CrossRef]
  42. Ramezani, R.; Xiao, Y.; Naeim, A. Sensing-Fi: Wi-Fi CSI and accelerometer fusion system for fall detection. In Proceedings of the 2018 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), Las Vegas, NV, USA, 4–7 March 2018; pp. 402–405. [Google Scholar]
  43. Cao, Z.; Simon, T.; Wei, S.-E.; Sheikh, Y. Realtime multi-person 2d pose estimation using part affinity fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7291–7299. [Google Scholar]
  44. Shiba, K.; Kaburagi, T.; Kurihara, Y. Fall detection utilizing frequency distribution trajectory by microwave Doppler sensor. IEEE Sens. J. 2017, 17, 7561–7568. [Google Scholar] [CrossRef]
