Enhanced Pet Behavior Prediction via S2GAN-Based Heterogeneous Data Synthesis
Abstract
1. Introduction
- Improving the accuracy and reliability of predicting complex pet behaviors through efficient preprocessing and integration of heterogeneous data.
- Exploring whether behavior prediction can be enhanced by addressing missing-data issues through S2GAN-based synthesis of heterogeneous data.
- Demonstrating the practicality of the proposed method by applying it to data collected in real-world environments.
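The S2GAN architecture itself is not detailed in this outline. As a hedged illustration only, and assuming (as the data columns in the comparison table suggest) that the generator conditions on skeleton keypoint sequences to synthesize missing nine-axis sensor windows, a minimal per-timestep conditional generator could be sketched as follows. All names, layer sizes, and dimensions here are hypothetical, not the authors' actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_generator(kp_dim=34, noise_dim=16, hidden=64, sensor_dim=9):
    """Hypothetical two-layer MLP generator: skeleton keypoints + noise -> sensor sample."""
    return {
        "W1": rng.standard_normal((kp_dim + noise_dim, hidden)) * 0.1,
        "b1": np.zeros(hidden),
        "W2": rng.standard_normal((hidden, sensor_dim)) * 0.1,
        "b2": np.zeros(sensor_dim),
    }

def generate(params, keypoints, noise):
    """Map a (T, kp_dim) skeleton sequence to a (T, sensor_dim) synthetic sensor window."""
    x = np.concatenate([keypoints, noise], axis=-1)
    h = np.tanh(x @ params["W1"] + params["b1"])
    return np.tanh(h @ params["W2"] + params["b2"])  # bounded, like normalized sensor data

params = init_generator()
kp_seq = rng.standard_normal((30, 34))   # 30 frames of 17 hypothetical 2D keypoints
z = rng.standard_normal((30, 16))        # per-frame noise
fake_sensors = generate(params, kp_seq, z)
print(fake_sensors.shape)                # (30, 9): e.g., acc + gyro + mag axes
```

In a full GAN this generator would be trained adversarially against a discriminator that distinguishes real sensor windows from synthetic ones; only the forward pass is sketched here.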
2. Related Works
2.1. Data Synthesis for Behavior Prediction
2.2. Behavior Prediction
2.3. Comparison with Previous Works
Image-data columns: RGB and RGB + Skeleton; sensor-data columns: Acc (accelerometer), Gyro (gyroscope), and Mag (magnetometer).

| Research | RGB | RGB + Skeleton | Acc | Gyro | Mag | Algorithms | Number of Behaviors |
|---|---|---|---|---|---|---|---|
| Wang et al. (2022) [2] | O | | | | | LSTM | 8 |
| Hussain et al. (2022) [26] | | | O | O | | CNN | 10 |
| Kim et al. (2023) [15] | | | O | O | O | TN-GAN, CNN–LSTM | 9 |
| Ide et al. (2021) [25] | O | | O | O | | CNN, LSTM | 11 |
| Chen et al. (2021) [22] | O | | | | | YOLOv3 | 6 |
| Yu et al. (2022) [27] | | O | | | | VideoPose3D | 1 |
| Lee et al. (2021) [28] | O | | | | | TRT-Net | 7 |
| Kim et al. (2022) [9] | O | | O | O | | CNN–LSTM | 7 |
| Proposed Method | O | O | O | O | O | GAN, CNN–LSTM | 9 |
3. Pet Behavior Prediction via S2GAN-based Data Synthesis
3.1. Data Collection
3.2. Data Preprocessing
3.2.1. Video Preprocessing
3.2.2. Sensor Preprocessing
3.3. S2GAN-Based Data Synthesis
3.4. Pet Behavior Prediction
4. Experiment
4.1. Data Collection Tools and Collected Dataset
4.2. Results of Data Preprocessing
4.2.1. Video Preprocessing
4.2.2. Sensor Preprocessing
4.3. Results of the Data Synthesis Experiment
4.4. Results of the Behavior Prediction Experiment
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Alshamrani, M. IoT and artificial intelligence implementations for remote healthcare monitoring systems: A survey. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 4687–4701. [Google Scholar] [CrossRef]
- Wang, H.; Atif, O.; Tian, J.; Lee, J.; Park, D.; Chung, Y. Multi-level hierarchical complex behavior monitoring system for dog psychological separation anxiety symptoms. Sensors 2022, 22, 1556. [Google Scholar] [CrossRef] [PubMed]
- Zhu, H.; Salgırlı, Y.; Can, P.; Atılgan, D.; Salah, A.A. Video-based estimation of pain indicators in dogs. In Proceedings of the 2023 11th International Conference on Affective Computing and Intelligent Interaction (ACII), Cambridge, MA, USA, 10–13 September 2023; pp. 1–8. [Google Scholar]
- Mao, A.; Huang, E.; Wang, X.; Liu, K. Deep learning-based animal activity recognition with wearable sensors: Overview, challenges, and future directions. Comput. Electron. Agric. 2023, 211, 108043. [Google Scholar] [CrossRef]
- Chen, R.C.; Saravanarajan, V.S.; Hung, H.T. Monitoring the behaviours of pet cat based on YOLO model and raspberry Pi. Int. J. Appl. Sci. Eng. 2021, 18, 1–12. [Google Scholar] [CrossRef]
- Bleuer-Elsner, S.; Zamansky, A.; Fux, A.; Kaplun, D.; Romanov, S.; Sinitca, A.; van der Linden, D. Computational analysis of movement patterns of dogs with ADHD-like behavior. Animals 2019, 9, 1140. [Google Scholar] [CrossRef] [PubMed]
- Unold, O.; Nikodem, M.; Piasecki, M.; Szyc, K.; Maciejewski, H.; Bawiec, M.; Zdunek, M. IoT-based cow health monitoring system. In Proceedings of the International Conference on Computational Science, Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2020; Volume 12141, pp. 344–356. [Google Scholar]
- Dang, L.M.; Min, K.; Wang, H.; Piran, M.J.; Lee, C.H.; Moon, H. Sensor-based and vision-based human activity recognition: A comprehensive survey. Pattern Recognit. 2020, 108, 107561. [Google Scholar] [CrossRef]
- Kim, J.; Moon, N. Dog behavior recognition based on multimodal data from a camera and wearable device. Appl. Sci. 2022, 12, 3199. [Google Scholar] [CrossRef]
- Wen, Q.; Sun, L.; Yang, F.; Song, X.; Gao, J.; Wang, X.; Xu, H. Time series data augmentation for deep learning: A survey. arXiv 2020, arXiv:2002.12478. [Google Scholar]
- Wang, J.; Chen, Y.; Gu, Y.; Xiao, Y.; Pan, H. Sensorygans: An effective generative adversarial framework for sensor-based human activity recognition. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8. [Google Scholar]
- Yan, Y.; Xu, J.; Ni, B.; Zhang, W.; Yang, X. Skeleton-aided articulated motion generation. In Proceedings of the 25th ACM International Conference on Multimedia, Mountain View, CA, USA, 23–27 October 2017; pp. 199–207. [Google Scholar]
- Mu, J.; Qiu, W.; Hager, G.D.; Yuille, A.L. Learning from synthetic animals. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 12386–12395. [Google Scholar]
- Deane, J.; Kearney, S.; Kim, K.I.; Cosker, D. DynaDog+ T: A parametric animal model for synthetic canine image generation. arXiv 2021, arXiv:2107.07330. [Google Scholar]
- Kim, H.; Moon, N. TN-GAN-Based Pet Behavior Prediction through Multiple-Dimension Time-Series Augmentation. Sensors 2023, 23, 4157. [Google Scholar] [CrossRef] [PubMed]
- Gu, F.; Chung, M.H.; Chignell, M.; Valaee, S.; Zhou, B.; Liu, X. A survey on deep learning for human activity recognition. ACM Comput. Surv. CSUR 2021, 54, 1–34. [Google Scholar] [CrossRef]
- Yadav, S.K.; Tiwari, K.; Pandey, H.M.; Akbar, S.A. A review of multimodal human activity recognition with special emphasis on classification, applications, challenges and future directions. Knowl.-Based Syst. 2021, 223, 106970. [Google Scholar] [CrossRef]
- Tavakoli, A.; Kumar, S.; Boukhechba, M.; Heydarian, A. Driver state and behavior detection through smart wearables. In Proceedings of the 2021 IEEE Intelligent Vehicles Symposium (IV), Nagoya, Japan, 11–17 July 2021; pp. 559–565. [Google Scholar]
- Ranieri, C.M.; MacLeod, S.; Dragone, M.; Vargas, P.A.; Romero, R.A.F. Activity recognition for ambient assisted living with videos, inertial units and ambient sensors. Sensors 2021, 21, 768. [Google Scholar] [CrossRef] [PubMed]
- Hafeez, S.; Alotaibi, S.S.; Alazeb, A.; Al Mudawi, N.; Kim, W. Multi-sensor-based Action Monitoring and Recognition via Hybrid Descriptors and Logistic Regression. IEEE Access 2023, 11, 48145–48157. [Google Scholar] [CrossRef]
- Mathis, A.; Mamidanna, P.; Cury, K.M.; Abe, T.; Murthy, V.N.; Mathis, M.W.; Bethge, M. DeepLabCut: Markerless pose estimation of user-defined body parts with deep learning. Nat. Neurosci. 2018, 21, 1281–1289. [Google Scholar] [CrossRef] [PubMed]
- Graving, J.M.; Chae, D.; Naik, H.; Li, L.; Koger, B.; Costelloe, B.R.; Couzin, I.D. DeepPoseKit, a software toolkit for fast and robust animal pose estimation using deep learning. eLife 2019, 8, e47994. [Google Scholar] [CrossRef] [PubMed]
- Aich, S.; Chakraborty, S.; Sim, J.S.; Jang, D.J.; Kim, H.C. The design of an automated system for the analysis of the activity and emotional patterns of dogs with wearable sensors using machine learning. Appl. Sci. 2019, 9, 4938. [Google Scholar] [CrossRef]
- Kumpulainen, P.; Cardó, A.V.; Somppi, S.; Törnqvist, H.; Väätäjä, H.; Majaranta, P.; Gizatdinova, Y.; Antink, C.H.; Vehkaoja, A. Dog behaviour classification with movement sensors placed on the harness and the collar. Appl. Anim. Behav. Sci. 2021, 241, 105393. [Google Scholar] [CrossRef]
- Hussain, A.; Ali, S.; Kim, H.C. Activity detection for the wellbeing of dogs using wearable sensors based on deep learning. IEEE Access 2022, 10, 53153–53163. [Google Scholar] [CrossRef]
- Ide, Y.; Araki, T.; Hamada, R.; Ohno, K.; Yanai, K. Rescue dog action recognition by integrating ego-centric video, sound and sensor information. In Proceedings of the ICPR International Workshops and Challenges (Pattern Recognition), Lecture Notes in Computer Science, Virtual, 10–15 January 2021; Springer: Cham, Switzerland, 2021; Volume 12663, pp. 321–333. [Google Scholar]
- Yu, R.; Choi, Y. OkeyDoggy3D: A Mobile Application for Recognizing Stress-Related Behaviors in Companion Dogs Based on Three-Dimensional Pose Estimation through Deep Learning. Appl. Sci. 2022, 12, 8057. [Google Scholar] [CrossRef]
- Lee, H.J.; Ihm, S.Y.; Park, S.H.; Park, Y.H. An Analytic Method for Improving the Reliability of Models Based on a Histogram for Prediction of Companion Dogs’ Behaviors. Appl. Sci. 2021, 11, 11050. [Google Scholar] [CrossRef]
- Stephan, G.; Leidhold, J.; Hammerschmidt, K. Pet dogs home alone: A video-based study. Appl. Anim. Behav. Sci. 2021, 244, 105463. [Google Scholar] [CrossRef]
- Glenn, J. Ultralytics YOLOv8. 2023. Available online: https://github.com/ultralytics/ultralytics (accessed on 2 March 2023).
- Wang, S.; Zhang, X.; Ma, F.; Li, J.; Huang, Y. Single-Stage Pose Estimation and Joint Angle Extraction Method for Moving Human Body. Electronics 2023, 12, 4644. [Google Scholar] [CrossRef]
- Biggs, B.; Boyne, O.; Charles, J.; Fitzgibbon, A.; Cipolla, R. Who left the dogs out? 3d animal reconstruction with expectation maximization in the loop. In European Conference on Computer Vision (ECCV); Springer: Cham, Switzerland, 2020; pp. 195–211. [Google Scholar]
- Cao, J.; Tang, H.; Fang, H.S.; Shen, X.; Lu, C.; Tai, Y.W. Cross-domain adaptation for animal pose estimation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9498–9507. [Google Scholar]
| Behavior | Description |
|---|---|
| Standing on Two Legs | Standing on the hind legs with the body upright, while the front legs do not touch the ground. |
| Standing on All Fours | All four legs touching the ground, with joints fully extended in a stationary position. |
| Sitting on Two Legs | Sitting on the hind legs with the buttocks and hind legs touching the ground, while the front legs do not touch the ground. |
| Sitting on All Fours | Sitting with the buttocks, hind legs, and belly all touching the ground, while the head does not touch the ground. |
| Lying Down | Entire body, including the buttocks, legs, belly, and head, touching the ground, with the head positioned between the front legs. |
| Lying | Back touching the ground, covering both the full back-on-ground pose and partially lying on one side with the belly visible. |
| Walking | All four legs not simultaneously touching the ground, with alternating steps while moving forward; encompasses all walking actions. |
| Sniffing | Light movement of the head with the nose close to the ground; encompasses slow walking while sniffing. |
| Eating | Having something in the mouth, including actions such as chewing and drinking. |
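The nine behavior classes above are indexed 0–8 in the collected dataset (the class-count table in the experiment section uses this numbering). A hypothetical label map, as training code might define it, is shown below; the dictionary names are illustrative, not from the paper:

```python
# Hypothetical class-index mapping; indices follow the numbered behavior
# table in the experiment section (0 = Standing on Two Legs, ..., 8 = Eating).
BEHAVIOR_LABELS = {
    0: "Standing on Two Legs",
    1: "Standing on All Fours",
    2: "Sitting on Two Legs",
    3: "Sitting on All Fours",
    4: "Lying Down",
    5: "Lying",
    6: "Walking",
    7: "Sniffing",
    8: "Eating",
}
# Reverse lookup for converting annotation strings to model targets
LABEL_TO_INDEX = {name: idx for idx, name in BEHAVIOR_LABELS.items()}
print(LABEL_TO_INDEX["Walking"])  # 6
```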
| Component | Specification |
|---|---|
| CPU | AMD Ryzen 9 5950X |
| GPU | NVIDIA RTX 3090 Ti |
| RAM | 128 GB |
| CUDA | 11.8 |
| cuDNN | 8.3 |
| Python | 3.10 |
| PyTorch | 2.2 |
| Keras | 2.13.1 |
| Number | Breed | Age (Months) | Weight (kg) |
|---|---|---|---|
| 1 | Yorkshire | 48 | 7.8 |
| 2 | Toy Poodle | 76 | 4.7 |
| 3 | Toy Poodle | 150 | 5.0 |
| 4 | Toy Poodle | 130 | 1.5 |
| 5 | Miniature Poodle | 68 | 2.2 |
| 6 | Miniature Bichon | 94 | 3.1 |
| 7 | Mixed Breed | 36 | 4.6 |
| 8 | Mixed Breed | 42 | 4.2 |
| 9 | Border Collie | 24 | 14.5 |
| 10 | Mixed Breed | 12 | 6.3 |
| Number | Behaviors | Total |
|---|---|---|
| 0 | Standing on Two Legs | 486 |
| 1 | Standing on All Fours | 4,755 |
| 2 | Sitting on Two Legs | 5,762 |
| 3 | Sitting on All Fours | 5,079 |
| 4 | Lying Down | 1,502 |
| 5 | Lying | 935 |
| 6 | Walking | 1,995 |
| 7 | Sniffing | 776 |
| 8 | Eating | 1,986 |
| Total | | 23,276 |
| Dataset | Stanford Extra | Animal Pose | Total |
|---|---|---|---|
| Training Data | 14,355 | 1,070 | 15,425 |
| Validation Data | 2,009 | 159 | 2,168 |
| Test Data | 4,216 | 542 | 4,758 |
| Total | 20,580 | 1,771 | 22,351 |
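As a quick sanity check, the per-source counts in the split above can be verified to sum to the reported row and grand totals. The pure-Python sketch below uses illustrative variable names, not code from the paper:

```python
# Per-source image counts from the pose-estimation dataset table
splits = {
    "train": {"stanford_extra": 14355, "animal_pose": 1070},
    "val":   {"stanford_extra": 2009,  "animal_pose": 159},
    "test":  {"stanford_extra": 4216,  "animal_pose": 542},
}

# Each row total and the grand total should match the table
row_totals = {name: sum(sources.values()) for name, sources in splits.items()}
grand_total = sum(row_totals.values())
print(row_totals, grand_total)  # {'train': 15425, 'val': 2168, 'test': 4758} 22351
```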
| Dataset | Accuracy | Precision | Recall | F1-Score |
|---|---|---|---|---|
| 100% real data | 0.954 | 0.945 | 0.947 | 0.945 |
| 50% real data | 0.919 | 0.882 | 0.877 | 0.919 |
| 50% real data + 50% synthetic data | 0.928 | 0.891 | 0.924 | 0.906 |
| 70% real data + 30% synthetic data | 0.937 | 0.904 | 0.928 | 0.915 |
| 100% real video data only | 0.907 | 0.861 | 0.866 | 0.862 |
| 100% real sensor data only | 0.838 | 0.817 | 0.816 | 0.816 |
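The table reports accuracy alongside precision, recall, and F1; the averaging scheme is not stated in this outline, so the numpy sketch below assumes macro averaging over the nine classes (mean of per-class scores), which is a common choice for imbalanced class counts like those above:

```python
import numpy as np

def macro_metrics(y_true, y_pred, n_classes):
    """Accuracy plus macro-averaged precision, recall, and F1 (mean of per-class F1s)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    acc = float(np.mean(y_true == y_pred))
    per_p, per_r, per_f1 = [], [], []
    for c in range(n_classes):
        tp = int(np.sum((y_pred == c) & (y_true == c)))  # true positives for class c
        fp = int(np.sum((y_pred == c) & (y_true != c)))  # false positives
        fn = int(np.sum((y_pred != c) & (y_true == c)))  # false negatives
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        per_p.append(p)
        per_r.append(r)
        per_f1.append(2 * p * r / (p + r) if p + r else 0.0)
    return acc, float(np.mean(per_p)), float(np.mean(per_r)), float(np.mean(per_f1))

# Tiny worked example with three classes
acc, prec, rec, f1 = macro_metrics([0, 1, 2, 2], [0, 1, 2, 1], n_classes=3)
print(round(acc, 2), round(f1, 3))  # 0.75 0.778
```

Note that averaging per-class F1s (as here) and computing F1 from the macro precision and recall give slightly different numbers; the paper's exact convention is not specified in this outline.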
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Kim, J.; Moon, N. Enhanced Pet Behavior Prediction via S2GAN-Based Heterogeneous Data Synthesis. Appl. Sci. 2024, 14, 4091. https://doi.org/10.3390/app14104091