Capturing Eating Behavior from Video Analysis: A Systematic Review
Abstract
1. Introduction
2. Materials and Methods
2.1. Search Strategy
2.2. Inclusion and Exclusion Criteria
2.3. Article Selection
2.4. Summary Characteristics and Data Extraction
3. Results
3.1. Facial Landmarks
3.2. Deep Neural Networks
3.3. Optical Flow
3.4. Active Appearance Model
3.5. Video Fluoroscopy
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A
References
- LaCaille, L. Eating Behavior. In Encyclopedia of Behavioral Medicine; Gellman, M.D., Turner, J.R., Eds.; Springer: New York, NY, USA, 2013; pp. 641–642. ISBN 978-1-4419-1005-9.
- Viskaal-van Dongen, M.; Kok, F.J.; de Graaf, C. Eating Rate of Commonly Consumed Foods Promotes Food and Energy Intake. Appetite 2011, 56, 25–31.
- Robinson, E.; Almiron-Roig, E.; Rutters, F.; de Graaf, C.; Forde, C.G.; Tudur Smith, C.; Nolan, S.J.; Jebb, S.A. A Systematic Review and Meta-Analysis Examining the Effect of Eating Rate on Energy Intake and Hunger. Am. J. Clin. Nutr. 2014, 100, 123–151.
- Fogel, A.; McCrickerd, K.; Aris, I.M.; Goh, A.T.; Chong, Y.-S.; Tan, K.H.; Yap, F.; Shek, L.P.; Meaney, M.J.; Broekman, B.F.P.; et al. Eating Behaviors Moderate the Associations between Risk Factors in the First 1000 Days and Adiposity Outcomes at 6 Years of Age. Am. J. Clin. Nutr. 2020, 111, 997–1006.
- van den Boer, J.H.W.; Kranendonk, J.; van de Wiel, A.; Feskens, E.J.M.; Geelen, A.; Mars, M. Self-Reported Eating Rate Is Associated with Weight Status in a Dutch Population: A Validation Study and a Cross-Sectional Study. Int. J. Behav. Nutr. Phys. Act. 2017, 14, 121.
- Otsuka, R.; Tamakoshi, K.; Yatsuya, H.; Murata, C.; Sekiya, A.; Wada, K.; Zhang, H.M.; Matsushita, K.; Sugiura, K.; Takefuji, S.; et al. Eating Fast Leads to Obesity: Findings Based on Self-Administered Questionnaires among Middle-Aged Japanese Men and Women. J. Epidemiol. 2006, 16, 117–124.
- Ohkuma, T.; Hirakawa, Y.; Nakamura, U.; Kiyohara, Y.; Kitazono, T.; Ninomiya, T. Association between Eating Rate and Obesity: A Systematic Review and Meta-Analysis. Int. J. Obes. 2015, 39, 1589–1596.
- Sakurai, M.; Nakamura, K.; Miura, K.; Takamura, T.; Yoshita, K.; Nagasawa, S.; Morikawa, Y.; Ishizaki, M.; Kido, T.; Naruse, Y.; et al. Self-Reported Speed of Eating and 7-Year Risk of Type 2 Diabetes Mellitus in Middle-Aged Japanese Men. Metabolism 2012, 61, 1566–1571.
- Zhu, B.; Haruyama, Y.; Muto, T.; Yamazaki, T. Association Between Eating Speed and Metabolic Syndrome in a Three-Year Population-Based Cohort Study. J. Epidemiol. 2015, 25, 332–336.
- Gahagan, S. The Development of Eating Behavior—Biology and Context. J. Dev. Behav. Pediatr. 2012, 33, 261–271.
- Forde, C.G.; de Graaf, K. Influence of Sensory Properties in Moderating Eating Behaviors and Food Intake. Front. Nutr. 2022, 9, 841444.
- Forde, C.G.; Bolhuis, D. Interrelations Between Food Form, Texture, and Matrix Influence Energy Intake and Metabolic Responses. Curr. Nutr. Rep. 2022, 11, 124–132.
- Bolhuis, D.P.; Forde, C.G. Application of Food Texture to Moderate Oral Processing Behaviors and Energy Intake. Trends Food Sci. Technol. 2020, 106, 445–456.
- Abidin, N.Z.; Mamat, M.; Dangerfield, B.; Zulkepli, J.H.; Baten, M.A.; Wibowo, A. Combating Obesity through Healthy Eating Behavior: A Call for System Dynamics Optimization. PLoS ONE 2014, 9, e114135.
- Shavit, Y.; Roth, Y.; Teodorescu, K. Promoting Healthy Eating Behaviors by Incentivizing Exploration of Healthy Alternatives. Front. Nutr. 2021, 8, 658793.
- Pesch, M.H.; Lumeng, J.C. Methodological Considerations for Observational Coding of Eating and Feeding Behaviors in Children and Their Families. Int. J. Behav. Nutr. Phys. Act. 2017, 14, 170.
- Doulah, A.; Farooq, M.; Yang, X.; Parton, J.; McCrory, M.A.; Higgins, J.A.; Sazonov, E. Meal Microstructure Characterization from Sensor-Based Food Intake Detection. Front. Nutr. 2017, 4, 31.
- Resources—The Observer XT. Noldus. Available online: https://www.noldus.com/observer-xt/resources (accessed on 11 January 2022).
- ELAN, Version 6.2; Max Planck Institute for Psycholinguistics, The Language Archive: Nijmegen, The Netherlands, 2021.
- Fouse, A.; Weibel, N.; Hutchins, E.; Hollan, J. ChronoViz: A System for Supporting Navigation of Time-Coded Data; Association for Computing Machinery: New York, NY, USA, 2011; p. 304.
- Krop, E.M.; Hetherington, M.M.; Nekitsing, C.; Miquel, S.; Postelnicu, L.; Sarkar, A. Influence of Oral Processing on Appetite and Food Intake—A Systematic Review and Meta-Analysis. Appetite 2018, 125, 253–269.
- Ioakimidis, I.; Zandian, M.; Eriksson-Marklund, L.; Bergh, C.; Grigoriadis, A.; Sodersten, P. Description of Chewing and Food Intake over the Course of a Meal. Physiol. Behav. 2011, 104, 761–769.
- Hermsen, S.; Frost, J.H.; Robinson, E.; Higgs, S.; Mars, M.; Hermans, R.C.J. Evaluation of a Smart Fork to Decelerate Eating Rate. J. Acad. Nutr. Diet. 2016, 116, 1066–1068.
- Kyritsis, K.; Tatli, C.L.; Diou, C.; Delopoulos, A. Automated Analysis of in Meal Eating Behavior Using a Commercial Wristband IMU Sensor. In Proceedings of the 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Jeju, Korea, 11–15 July 2017; pp. 2843–2846.
- Bi, S.; Caine, K.; Halter, R.; Sorber, J.; Kotz, D.; Wang, T.; Tobias, N.; Nordrum, J.; Wang, S.; Halvorsen, G.; et al. Auracle: Detecting Eating Episodes with an Ear-Mounted Sensor. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2018, 2, 1–27.
- Hermsen, S.; Mars, M.; Higgs, S.; Frost, J.H.; Hermans, R.C.J. Effects of Eating with an Augmented Fork with Vibrotactile Feedback on Eating Rate and Body Weight: A Randomized Controlled Trial. Int. J. Behav. Nutr. Phys. Act. 2019, 16, 90.
- Smart-U: Smart Utensils Know What You Eat. IEEE Conference Publication. Available online: https://ieeexplore.ieee.org/document/8486266 (accessed on 11 January 2022).
- Mertes, G.; Ding, L.; Chen, W.; Hallez, H.; Jia, J.; Vanrumste, B. Quantifying Eating Behavior With a Smart Plate in Patients with Arm Impairment After Stroke. In Proceedings of the 2019 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), Chicago, IL, USA, 19–22 May 2019.
- Lasschuijt, M.P.; Brouwer-Brolsma, E.; Mars, M.; Siebelink, E.; Feskens, E.; de Graaf, K.; Camps, G. Concept Development and Use of an Automated Food Intake and Eating Behavior Assessment Method. J. Vis. Exp. 2021, e62144.
- Gemming, L.; Doherty, A.; Utter, J.; Shields, E.; Ni Mhurchu, C. The Use of a Wearable Camera to Capture and Categorise the Environmental and Social Context of Self-Identified Eating Episodes. Appetite 2015, 92, 118–125.
- van Bommel, R.; Stieger, M.; Visalli, M.; de Wijk, R.; Jager, G. Does the Face Show What the Mind Tells? A Comparison between Dynamic Emotions Obtained from Facial Expressions and Temporal Dominance of Emotions (TDE). Food Qual. Prefer. 2020, 85, 103976.
- Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; PRISMA Group. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. Ann. Intern. Med. 2009, 151, 264–269.
- Cadavid, S.; Abdel-Mottaleb, M.; Helal, A. Exploiting Visual Quasi-Periodicity for Real-Time Chewing Event Detection Using Active Appearance Models and Support Vector Machines. Pers. Ubiquit. Comput. 2012, 16, 729–739.
- Okamoto, K.; Yanai, K. Real-Time Eating Action Recognition System on a Smartphone. In Proceedings of the 2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), Chengdu, China, 14–18 July 2014; pp. 1–6.
- Hantke, S.; Schmitt, M.; Tzirakis, P.; Schuller, B. EAT: The ICMI 2018 Eating Analysis and Tracking Challenge. In Proceedings of the 20th ACM International Conference on Multimodal Interaction, Boulder, CO, USA, 16–20 October 2018; pp. 559–563.
- Haider, F.; Pollak, S.; Zarogianni, E.; Luz, S. SAAMEAT: Active Feature Transformation and Selection Methods for the Recognition of User Eating Conditions. In Proceedings of the 20th ACM International Conference on Multimodal Interaction, Boulder, CO, USA, 16–20 October 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 564–568.
- Konstantinidis, D.; Dimitropoulos, K.; Ioakimidis, I.; Langlet, B.; Daras, P. A Deep Network for Automatic Video-Based Food Bite Detection. In Computer Vision Systems, Proceedings of the 12th International Conference, ICVS 2019, Thessaloniki, Greece, 23–25 September 2019; Tzovaras, D., Giakoumis, D., Vincze, M., Argyros, A., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 586–595.
- Qiu, J.; Lo, F.P.-W.; Lo, B. Assessing Individual Dietary Intake in Food Sharing Scenarios with a 360 Camera and Deep Learning. In Proceedings of the 2019 IEEE 16th International Conference on Wearable and Implantable Body Sensor Networks (BSN), Chicago, IL, USA, 19–22 May 2019; pp. 1–4.
- Hossain, D.; Ghosh, T.; Sazonov, E. Automatic Count of Bites and Chews From Videos of Eating Episodes. IEEE Access 2020, 8, 101934–101945.
- Rouast, P.V.; Adam, M.T.P. Learning Deep Representations for Video-Based Intake Gesture Detection. IEEE J. Biomed. Health Inform. 2020, 24, 1727–1737.
- Konstantinidis, D.; Dimitropoulos, K.; Langlet, B.; Daras, P.; Ioakimidis, I. Validation of a Deep Learning System for the Full Automation of Bite and Meal Duration Analysis of Experimental Meal Videos. Nutrients 2020, 12, 209.
- Nour, M. Real-Time Detection and Motivation of Eating Activity in Elderly People with Dementia Using Pose Estimation with TensorFlow and OpenCV. Adv. Soc. Sci. Res. J. 2021, 8, 28–34.
- Park, D.; Hoshi, Y.; Mahajan, H.P.; Kim, H.K.; Erickson, Z.; Rogers, W.A.; Kemp, C.C. Active Robot-Assisted Feeding with a General-Purpose Mobile Manipulator: Design, Evaluation, and Lessons Learned. arXiv 2019, arXiv:1904.03568.
- Alshboul, S.; Fraiwan, M. Determination of Chewing Count from Video Recordings Using Discrete Wavelet Decomposition and Low Pass Filtration. Sensors 2021, 21, 6806.
- Kato, Y.; Kikutani, T.; Tohara, T.; Takahashi, N.; Tamura, F. Masticatory Movements and Food Textures in Older Patients with Eating Difficulties. Gerodontology 2022, 39, 90–97.
- Eyben, F.; Wöllmer, M.; Schuller, B. openSMILE: The Munich Versatile and Fast Open-Source Audio Feature Extractor. In Proceedings of the 18th ACM International Conference on Multimedia, Firenze, Italy, 25–29 October 2010; Association for Computing Machinery: New York, NY, USA, 2010; pp. 1459–1462.
- Baltrušaitis, T.; Robinson, P.; Morency, L.-P. OpenFace: An Open Source Facial Behavior Analysis Toolkit. In Proceedings of the 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Placid, NY, USA, 7–10 March 2016; pp. 1–10.
- Cao, Z.; Hidalgo, G.; Simon, T.; Wei, S.-E.; Sheikh, Y. OpenPose: Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields. arXiv 2019, arXiv:1812.08008.
- Viola, P.; Jones, M. Rapid Object Detection Using a Boosted Cascade of Simple Features. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, HI, USA, 8–14 December 2001; Volume 1, p. 511, ISBN 978-0-7695-1272-3.
- Kazemi, V.; Sullivan, J. One Millisecond Face Alignment with an Ensemble of Regression Trees. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1867–1874.
- King, D.E. Dlib-Ml: A Machine Learning Toolkit. J. Mach. Learn. Res. 2009, 10, 1755–1758.
- Bradski, G. OpenCV. Dr. Dobb’s J. Softw. Tools 2000, 120, 122–125.
- Deep Learning | Nature. Available online: https://www.nature.com/articles/nature14539 (accessed on 25 October 2022).
- Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-Dujaili, A.; Duan, Y.; Al-Shamma, O.; Santamaría, J.; Fadhel, M.A.; Al-Amidie, M.; Farhan, L. Review of Deep Learning: Concepts, CNN Architectures, Challenges, Applications, Future Directions. J. Big Data 2021, 8, 53.
- Shrestha, A.; Mahmood, A. Review of Deep Learning Algorithms and Architectures. IEEE Access 2019, 7, 53040–53065.
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. arXiv 2015, arXiv:1506.01497.
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2980–2988.
- Beauchemin, S.S.; Barron, J.L. The Computation of Optical Flow. ACM Comput. Surv. 1995, 27, 433–466.
- Matsuo, K.; Palmer, J.B. Video Fluoroscopic Techniques for the Study of Oral Food Processing. Curr. Opin. Food Sci. 2016, 9, 1–10.
- Li, X.; Liu, J.; Baron, J.; Luu, K.; Patterson, E. Evaluating Effects of Focal Length and Viewing Angle in a Comparison of Recent Face Landmark and Alignment Methods. EURASIP J. Image Video Process. 2021, 2021, 9.
- Noldus. FaceReader Methodology Note: What Is FaceReader? Noldus: Wageningen, The Netherlands, 2016.
- Lewinski, P.; den Uyl, T.M.; Butler, C. Automated Facial Coding: Validation of Basic Emotions and FACS AUs in FaceReader. J. Neurosci. Psychol. Econ. 2014, 7, 227–236.
- Álvarez-Pato, V.M.; Sánchez, C.N.; Domínguez-Soberanes, J.; Méndoza-Pérez, D.E.; Velázquez, R. A Multisensor Data Fusion Approach for Predicting Consumer Acceptance of Food Products. Foods 2020, 9, 774.
- Farooq, M.; Doulah, A.; Parton, J.; McCrory, M.A.; Higgins, J.A.; Sazonov, E. Validation of Sensor-Based Food Intake Detection by Multicamera Video Observation in an Unconstrained Environment. Nutrients 2019, 11, 609.
- Thomas, J.M.; Dourish, C.T.; Higgs, S. Effects of Awareness That Food Intake Is Being Measured by a Universal Eating Monitor on the Consumption of a Pasta Lunch and a Cookie Snack in Healthy Female Volunteers. Appetite 2015, 92, 247–251.
- Suwalska, J.; Bogdański, P. Social Modeling and Eating Behavior—A Narrative Review. Nutrients 2021, 13, 1209.
- Theodoridis, T.; Solachidis, V.; Dimitropoulos, K.; Gymnopoulos, L.; Daras, P. A Survey on AI Nutrition Recommender Systems. In Proceedings of the 12th ACM International Conference on PErvasive Technologies Related to Assistive Environments, Island of Rhodes, Greece, 5–7 June 2019; pp. 540–546.
| First Author | Year | Journal | Methods | Study Setting | Device | Outcomes | Validity | Precision |
|---|---|---|---|---|---|---|---|---|
| Cadavid [33] | 2012 | Pers. Ubiquit. Comput. | Active Appearance Model (AAM) for face tracking, with spectral analysis on the temporal window of the model parameter values; binary support vector machine classifier for chewing events | Laboratory | Not reported; 37 videos at 24 fps; frame resolution: 640 × 480 | Chewing detection | Manual annotation of chewing events | 93% after cross-validation |
| Okamoto [34] | 2014 | IEEE International Conference on Multimedia and Expo Workshops | Mouth detector limited to the lower part of the detected face; chopstick detection using the OpenCV Hough transform for straight lines | Laboratory | Smartphone Google Nexus 5 (2.3 GHz quad core, Android 4.4), inner camera; frontal view | Food intake estimation | N.A. | N.A. |
| Hantke [35] | 2018 | Proceedings of the 20th ACM International Conference on Multimodal Interaction | OpenFace facial landmark extraction for tracking the mouth | Office room | Logitech HD Pro Webcam C920; 30 fps; resolution: 1280 × 720; frontal view | Food liking | Leave-one-out cross-validation and SVM | Likability 0.583 |
| Haider [36] | 2018 | Proceedings of the 20th ACM International Conference on Multimodal Interaction | OpenSMILE for facial landmark extraction, coupled with OpenSMILE audio-feature extraction | Office room | Logitech HD Pro Webcam C920; 30 fps; resolution: 1280 × 720; frontal view | Food liking | Leave-one-out cross-validation and active feature transformation | 0.61 |
| Konstantinidis [37] | 2019 | Computer Vision Systems | OpenPose for mouth and hand tracking; deep network (3 Conv + shortcut, 3 Conv + shortcut, 3 LSTM) | Laboratory | 85 videos; Samsung digital camcorder; 1.5 m away from the subject; side view | Automatic bite detection | F-score: 0.9173 | 0.9175 |
| Qiu [38] | 2019 | IEEE 16th International Conference on Wearable and Implantable Body Sensor Networks | Mask R-CNN for 360-degree camera meal videos; thresholds for assessing pixel intersection between hand–face and hand–food to infer eating events | Free-living (indoor food-sharing scenarios) | Samsung Gear 360 camera; 1024 × 1024 pixels | Food intake estimation | N.A. | N.A. |
| Hossain [39] | 2020 | IEEE Access | Face detection with manual selection of the region of interest; CNN for bite/non-bite classification; optical flow for spatial chewing motion at every pixel | Laboratory | 84 videos; SJCAM SJ4000 action camera; 1080p video at 30 fps; side view | Automatic count of bites and chews | Manual annotation with a 3-button system and custom-made LabVIEW software | Bites: 88.9% ± 7.4%; chews: 88.64% ± 5.29% |
| Rouast and Adam [40] | 2020 | IEEE J. Biomed. Health Inform. | CNN for hand-to-mouth movement in 360-degree meal videos | Free-living (indoor group meal) | 102 videos; 360fly 4K camera; 24 fps | Intake gesture detection | N.A. | F1-score: 0.858 |
| Konstantinidis [41] | 2020 | Nutrients | OpenPose skeletal and mouth features extracted for training the RABiD algorithm; two data streams: 2D coordinates and distances from the mouth corners and from the upper body | Laboratory | Samsung digital camcorder; 1.5 m away from the subject; side view; resolution: 576p (720 × 576 pixels) at 25 fps | Meal duration and bite counts | Manual annotation (Noldus Observer XT) | F1-score: 0.948 |
| Nour [42] | 2021 | Advances in Social Sciences Research Journal | Facial landmarks (dlib) for tracking jawline movement; OpenPose for 2D pose estimation | N.A. | N.A. | Real-time eating activity tracking | Manual annotation | N.A. |
| Park [43] | 2020 | Robotics and Autonomous Systems | Facial landmarks (dlib) for a mouth-pose estimator; algorithmic model for improving 3D estimation, location, and orientation of the mouth | Laboratory | Intel SR300 RGB-D camera | Robot-assisted active feeding | Wrist-mounted camera | N.A. |
| Alshboul [44] | 2021 | Sensors | Time-series data consisting of the Euclidean distance between jaw/mouth landmarks and a reference facial landmark | Free-living (outdoors, indoors, and public spaces) | 300 videos; Huawei Y7 Prime 2018 smartphone; 13 MP camera; resolution: 1080p at 30 fps; frontal view | Number of chews | Manual annotation (intra-class correlation coefficient: slow: 0.96, normal: 0.94, fast: 0.91) | Avg. error ± SD: 5.42% ± 4.61% (slow chewing); 7.47% ± 6.85% (normal chewing); 9.84% ± 9.55% (fast chewing) |
| Kato [45] | 2021 | Gerodontology | Video fluoroscopy of swallowing for determining which foods are more appropriate for elderly people | Laboratory | N.A. | Association between masticatory movements and food texture in older adults | N.A. | N.A. |
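Several of the facial-landmark approaches summarized above (e.g., Alshboul [44]) reduce chewing to a one-dimensional signal: the distance between a jaw or mouth landmark and a stable reference landmark, which rises and falls once per chewing cycle. The sketch below only illustrates that principle on a synthetic distance signal; it is not the published pipeline (which uses discrete wavelet decomposition and low-pass filtration), and the smoothing window, prominence threshold, and minimum peak gap are assumed values.

```python
import math
import random

def moving_average(signal, window=5):
    """Simple low-pass filter: centered moving average with edge padding."""
    half = window // 2
    padded = [signal[0]] * half + list(signal) + [signal[-1]] * half
    return [sum(padded[i:i + window]) / window for i in range(len(signal))]

def count_chews(distances, min_prominence=0.5, min_gap=10):
    """Count local maxima in the smoothed mouth-opening signal.

    Each peak is taken as one chewing cycle; `min_prominence` rejects small
    fluctuations left over after smoothing, and `min_gap` (in frames) enforces
    a refractory period so one chew is never counted twice.
    """
    smooth = moving_average(distances)
    baseline = min(smooth)
    chews, last_peak = 0, -min_gap
    for i in range(1, len(smooth) - 1):
        is_peak = smooth[i] > smooth[i - 1] and smooth[i] >= smooth[i + 1]
        if (is_peak and smooth[i] - baseline >= min_prominence
                and i - last_peak >= min_gap):
            chews += 1
            last_peak = i
    return chews

# Synthetic jaw-opening distances (pixels): 4 chewing cycles plus noise.
random.seed(0)
signal = [5 + 3 * math.sin(2 * math.pi * 4 * t / 100)
          + random.uniform(-0.3, 0.3) for t in range(100)]
print(count_chews(signal))  # close to the 4 simulated cycles
```

In a full system, the `signal` list would come from per-frame landmark coordinates produced by a detector such as dlib's 68-point shape predictor, sampled at the video frame rate.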
| Method | Advantages | Disadvantages | Open-Source |
|---|---|---|---|
| Facial landmarks | Bite detection; chew detection | Camera angle and distance can impact performance; only tested in 2D; no swallow detection | Yes |
| Deep neural networks | Bite prediction; human body detection | Not efficient for chewing prediction; hardware and software requirements; no swallow detection | Yes |
| Optical flow | Chewing prediction; no camera-angle restrictions | No bite detection; motion analysis outperformed by appearance analysis; no swallow detection | Yes |
| Active appearance model | Chewing detection | No bite detection; not widely used; no swallow detection | Yes |
| Video fluoroscopy | Swallow tracking | Cost and dimensions | No |
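The optical-flow entry in the table rests on the brightness-constancy constraint Ix·u + Iy·v + It ≈ 0, solved in a least-squares sense over a neighborhood. The reviewed systems compute dense flow on real video (typically via OpenCV); the sketch below is only a self-contained illustration that estimates one global Lucas-Kanade flow vector from two synthetic frames, where the frame size, blob shape, and one-pixel shift are all assumed for the example.

```python
import math

def gaussian_frame(size, cx, cy, sigma=3.0):
    """Synthetic grayscale frame: a Gaussian bright blob centered at (cx, cy)."""
    return [[math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
             for x in range(size)] for y in range(size)]

def lucas_kanade_global(f1, f2):
    """Estimate a single (u, v) flow vector for the whole frame.

    Accumulates the structure tensor and image-time correlations from
    central-difference gradients, then solves the 2x2 least-squares system
    for the optical-flow constraint Ix*u + Iy*v + It = 0.
    """
    h, w = len(f1), len(f1[0])
    sxx = sxy = syy = sxt = syt = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ix = (f1[y][x + 1] - f1[y][x - 1]) / 2.0  # spatial gradient (x)
            iy = (f1[y + 1][x] - f1[y - 1][x]) / 2.0  # spatial gradient (y)
            it = f2[y][x] - f1[y][x]                  # temporal gradient
            sxx += ix * ix; sxy += ix * iy; syy += iy * iy
            sxt += ix * it; syt += iy * it
    det = sxx * syy - sxy * sxy
    u = (-sxt * syy + syt * sxy) / det
    v = (-syt * sxx + sxt * sxy) / det
    return u, v

f1 = gaussian_frame(21, 10, 10)
f2 = gaussian_frame(21, 11, 10)  # blob moved one pixel to the right
u, v = lucas_kanade_global(f1, f2)
print("u ≈", round(u, 2), "v ≈", round(abs(v), 3))
```

For chewing analysis, the same least-squares estimate would be computed per pixel (or per window) inside the mouth region, and the periodicity of the resulting motion magnitudes would indicate chewing, which is how the table's "chewing prediction" advantage arises.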
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Tufano, M.; Lasschuijt, M.; Chauhan, A.; Feskens, E.J.M.; Camps, G. Capturing Eating Behavior from Video Analysis: A Systematic Review. Nutrients 2022, 14, 4847. https://doi.org/10.3390/nu14224847