Multiple Feature Dependency Detection for Deep Learning Technology—Smart Pet Surveillance System Implementation
Abstract
1. Introduction
2. Literature Review
3. Materials and Methods
3.1. Pet Camera
Algorithm 1 Loop_Recording_Processing
Require: threading
Require: video_store_path video_path
Require: count variable time_sup = 0
1. def timer():
2.     time_sup += 1
3. main():
4.     start timer() on a threading timer
5.     set video format and information: v_format
6.     while(1):
7.         set video duration: time
8.         if time_sup == time:
9.             store the video to video_path
10.            time_sup = 0
11.        else:
12.            record the video according to v_format
13.    return 0
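Algorithm 1 can be sketched in plain Python: a background timer counts ticks into `time_sup`, and the main loop flushes the buffered frames as one stored clip whenever a segment's worth of ticks has elapsed. Frame capture is simulated with a placeholder string here; a real implementation would read from the camera (names such as `loop_recording` and `store` are illustrative, not from the paper).

```python
import threading
import time

def loop_recording(store, segment_seconds=0.1, segments=2, tick=0.02):
    """Sketch of Algorithm 1: record fixed-length segments in a loop.

    A rescheduling threading.Timer increments time_sup; when it reaches
    the segment duration, the buffered frames are stored and the counter
    resets. The while(1) loop is bounded by `segments` for the sketch.
    """
    state = {"time_sup": 0}

    def timer():                          # steps 1-2: timer() increments time_sup
        state["time_sup"] += 1
        state["t"] = threading.Timer(tick, timer)
        state["t"].daemon = True
        state["t"].start()

    timer()                               # step 4: start the timer thread
    frames = []
    ticks_per_segment = max(1, round(segment_seconds / tick))
    while len(store) < segments:          # step 6: while(1), bounded here
        if state["time_sup"] >= ticks_per_segment:
            store.append(list(frames))    # step 9: store the video
            frames.clear()
            state["time_sup"] = 0         # step 10: reset the counter
        else:
            frames.append("frame")        # step 12: record per v_format
            time.sleep(0.005)
    state["t"].cancel()
    return store
```

Injecting the segment length and tick interval keeps the timing logic testable without real video I/O.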
3.2. Identification System
Algorithm 2 Data_Preprocessing
Require: Loop_Recording_Processing output video_path
Require: image_save_path i_path
Require: audio_save_path a_path
1. def video_to_img(video_path, i_path):
2.     images obtained from the video: i_image
3.     store i_image to i_path
4. def video_to_wav(video_path, a_path):
5.     audio = audio from the video
6.     mfcc_audio = MFCC feature extraction of audio
7.     store mfcc_audio to a_path
8. main():
9.     video_to_img(video_path, i_path)
10.    video_to_wav(video_path, a_path)
11.    return 0
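The two preprocessing steps can be sketched as below. The actual frame decoding and MFCC extraction are injected as callables (`extract_frames`, `extract_mfcc`), since the paper does not specify the libraries; typical choices would be an OpenCV-based decoder and an MFCC routine from an audio library.

```python
import os

def video_to_img(video_path, i_path, extract_frames):
    """video_to_img from Algorithm 2: write each decoded frame to i_path.

    extract_frames is an injected stand-in for a real video decoder; it
    yields frames as bytes.
    """
    os.makedirs(i_path, exist_ok=True)
    saved = []
    for n, frame in enumerate(extract_frames(video_path)):
        out = os.path.join(i_path, "frame_%04d.jpg" % n)
        with open(out, "wb") as f:
            f.write(frame)
        saved.append(out)
    return saved

def video_to_wav(video_path, a_path, extract_mfcc):
    """video_to_wav from Algorithm 2: extract the audio track, run MFCC
    feature extraction, and store the coefficients to a_path.

    extract_mfcc stands in for the audio-extraction + MFCC pipeline and
    returns a flat sequence of coefficients.
    """
    mfcc_audio = extract_mfcc(video_path)
    with open(a_path, "w") as f:
        f.write(",".join(str(v) for v in mfcc_audio))
    return a_path
```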
Algorithm 3 Feature_Extraction
Require: Faster R-CNN output list faster_output_list
Require: KNN audio model output knn_output
Require: path of all unprocessed training data data_path
1. a dict of feature information: feature_dict
2. for feature_info in faster_output_list:
3.     append the feature information from feature_info to feature_dict
4. append knn_output to feature_dict
5. store the feature data and images to data_path
6. return feature_dict
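A minimal sketch of Algorithm 3 folds every visual detection and the audio class into one dictionary. The `(label, score, box)` layout of the detections is an assumption for illustration; the paper only states that the list comes from the Faster R-CNN output.

```python
def feature_extraction(faster_output_list, knn_output):
    """Sketch of Algorithm 3: merge Faster R-CNN detections and the KNN
    audio class into a single feature_dict for the dependency check.

    faster_output_list: assumed (label, score, box) tuples.
    knn_output: the audio model's predicted sound class.
    """
    feature_dict = {"visual": [], "audio": knn_output}
    for label, score, box in faster_output_list:     # step 2
        feature_dict["visual"].append(               # step 3
            {"label": label, "score": score, "box": box})
    return feature_dict                              # step 6
```

The storage to `data_path` (step 5) is omitted here; Algorithm 6 consumes that stored data for retraining.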
Algorithm 4 Multiple_Feature_Dependency_Detection_Arithmetic_Method
Require: Feature_Extraction output feature_dict
1. def judge_mood(feature_dict):
2.     from feature_dict get the information needed to judge the pet's mood: data
3.     if data is enough to judge a mood:
4.         return "mood"
5.     else:
6.         return "normal"
7. main():
8.     list of moods to check: mood_list
9.     list of pet status: pet_status
10.    for mood in mood_list:
11.        append judge_mood(feature_dict) to pet_status
12.    return pet_status
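The dependency check can be sketched using the rules from the mood table in Section 4.2: happy needs image features only, angry pairs mouth movement with a growl, sad pairs a closed mouth with crying, and everything else is "normal". The label strings here are illustrative names, not identifiers from the paper's code.

```python
def judge_mood(feature_dict):
    """Sketch of Algorithm 4's mood check via multiple feature dependency:
    a mood is only reported when the required image and voice features
    co-occur; otherwise 'normal' is returned.
    """
    labels = {d["label"] for d in feature_dict.get("visual", [])}
    audio = feature_dict.get("audio")
    if {"mouth_open", "tail_swing"} <= labels:       # image alone suffices
        return "happy"
    if "mouth_open_close" in labels and audio == "growl":
        return "angry"
    if "mouth_closed" in labels and audio == "crying":
        return "sad"
    return "normal"                                  # not enough evidence
```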
3.3. Communication Software and Message Transmission
Algorithm 5 Message Transmission
Require: pet_status status
1. owner_id: id
2. dict of messages: dict_msg
3. for msg_mood in dict_msg.keys():
4.     if msg_mood == status:
5.         send the pet's status message to id
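Algorithm 5 reduces to a dictionary lookup keyed by mood. In this sketch the messaging API is injected as a callable `send`, the message texts are invented for illustration, and `owner_id` is a placeholder; the paper does not show the actual communication-software call.

```python
def send_status_message(status, send, owner_id="OWNER_ID"):
    """Sketch of Algorithm 5: look the detected status up in dict_msg and
    push the matching text to the owner via the injected send callable.
    Returns True if a message was sent.
    """
    dict_msg = {                              # step 2: dict of messages
        "happy": "Your pet looks happy!",
        "angry": "Your pet seems angry.",
        "sad": "Your pet sounds sad.",
    }
    for msg_mood in dict_msg:                 # step 3: iterate the keys
        if msg_mood == status:                # step 4
            send(owner_id, dict_msg[msg_mood])   # step 5
            return True
    return False                              # 'normal' sends nothing
```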
3.4. Automatic System Optimization
Algorithm 6 Model_Automatic_Optimization_Module
Require: path of all unprocessed training data from Feature_Extraction: data_path
Require: path of all processed training data t_path
1. def Retraining_Data_Generation_Module(data_path):
2.     get the data and images stored for training: data_list
3.     for data in data_list:
4.         image = cut the original image according to the x, y coordinates
5.         label the image based on the x, y coordinates and store it to t_path
6. main():
7.     Retraining_Data_Generation_Module(data_path)
8.     Faster R-CNN Training Module(t_path)
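The retraining-data step of Algorithm 6 can be sketched with images as row-major nested lists: each stored record is cut down to its detected x, y coordinates and paired with its label. The sketch returns `(label, patch)` records instead of writing to `t_path`, to stay file-free; the record layout is an assumption.

```python
def crop(image, box):
    """Cut a row-major nested-list image down to box = (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in image[y1:y2]]

def retraining_data_generation(data_list):
    """Sketch of Algorithm 6's Retraining_Data_Generation_Module: cut each
    stored image to its detection box and pair the patch with its label,
    producing the labelled samples that would feed Faster R-CNN retraining.
    """
    return [(label, crop(image, box)) for image, label, box in data_list]
```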
4. Results and Discussion
4.1. System Execution Result
Algorithm 7 main
Require: threading
Require: Loop_Recording_Processing
Require: Data_Preprocessing
Require: Faster R-CNN
Require: KNN audio model
Require: Feature_Extraction
Require: Multiple_Feature_Dependency_Detection_Arithmetic_Method
Require: Message Transmission
Require: Model_Automatic_Optimization_Module
1. def recording():
2.     threading.Loop_Recording_Processing()
3. def optimization():
4.     threading.Model_Automatic_Optimization_Module()
5. main():
6.     recording()
7.     while(1):
8.         Data_Preprocessing()
9.         feature_dict = Feature_Extraction(Faster R-CNN(), KNN audio model())
10.        pet_status = Multiple_Feature_Dependency_Detection_Arithmetic_Method(feature_dict)
11.        Message Transmission(pet_status)
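The control flow of Algorithm 7's main loop can be sketched by passing every heavy stage in as a callable, so the wiring — the part the algorithm actually specifies — runs without the trained models. All stage names are stand-ins for the modules above.

```python
def run_pipeline(preprocess, detect, classify_audio, extract, judge, notify, clips):
    """Sketch of Algorithm 7's main loop: preprocess each clip, fuse the
    visual and audio outputs, judge the mood, and notify the owner.
    The while(1) loop is bounded by the clip list for the sketch.
    """
    statuses = []
    for clip in clips:
        frames, audio = preprocess(clip)               # Data_Preprocessing
        feature_dict = extract(detect(frames),         # Faster R-CNN
                               classify_audio(audio))  # KNN audio model
        status = judge(feature_dict)   # multiple feature dependency detection
        notify(status)                 # message transmission
        statuses.append(status)
    return statuses
```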
4.2. Types of Mood and Ways of Judging
4.3. The Accuracy of the Model for Identifying Features
5. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
Mood | Image Feature | Voice Feature
---|---|---
Happy | mouth keeps opening and the tail swings continuously | not needed
Angry | mouth keeps opening and closing | growl
Sad | mouth stays closed | crying
Normal | other actions that cannot be identified as any of the above moods | —
 | 20 Features (without/with MFCC) | | | 30 Features (without/with MFCC) | | | 40 Features (without/with MFCC) | |
---|---|---|---|---|---|---|---|---
 | Barking | Growl | Crying | Barking | Growl | Crying | Barking | Growl | Crying
Bark | 33/34 | 8/1 | 9/14 | 35/35 | 6/1 | 6/13 | 32/35 | 5/1 | 3/13
Growl | 2/0 | 19/33 | 16/2 | 1/0 | 18/33 | 17/2 | 3/0 | 21/33 | 18/2
Cry | 2/3 | 10/3 | 12/21 | 1/2 | 13/3 | 14/22 | 2/2 | 11/3 | 16/22
Average Accuracy | 57.66%/79.28% | | | 60.36%/81.08% | | | 62.16%/81.08% | |
Overall Average | 60.06%/80.48% | | | | | | | |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Tsai, M.-F.; Lin, P.-C.; Huang, Z.-H.; Lin, C.-H. Multiple Feature Dependency Detection for Deep Learning Technology—Smart Pet Surveillance System Implementation. Electronics 2020, 9, 1387. https://doi.org/10.3390/electronics9091387