Search Results (2)

Search Parameters:
Keywords = floor litter recognition

17 pages, 3693 KiB  
Article
Autonomous Visual Navigation System Based on a Single Camera for Floor-Sweeping Robot
by Jinjun Rao, Haoran Bian, Xiaoqiang Xu and Jinbo Chen
Appl. Sci. 2023, 13(3), 1562; https://doi.org/10.3390/app13031562 - 25 Jan 2023
Cited by 6 | Viewed by 4801
Abstract
The indoor sweeping robot industry has developed rapidly in recent years. Current sweeping robots carry increasingly diverse environment-perception sensor configurations but generally lack active garbage-detection capabilities. Advances in computer vision, artificial intelligence, and cloud computing have opened new possibilities for sweeping-robot technology. This paper proposes a new autonomous visual navigation system based on a single camera for floor-sweeping robots and investigates key technologies such as floor litter recognition, environmental perception, and dynamic local path planning based on depth maps. The system was deployed on a TurtleBot robot for experiments; the results show that the system recognizes fine trash with an mAP of 91.28% and reduces the average relative error of depth perception by 10.4% compared with conventional methods, while substantially improving the responsiveness and immediacy of path planning.
(This article belongs to the Special Issue Mobile Robotics and Autonomous Intelligent Systems)
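The abstract above reports depth-perception quality as an average relative error (reduced by 10.4% versus conventional methods). The paper's own depth model and evaluation code are not reproduced in this listing; the following is a minimal sketch, assuming per-pixel ground-truth depth is available, of how such an average relative error could be computed. The helper name, the masking convention, and the toy data are assumptions for illustration only.

import numpy as np

def mean_relative_error(pred_depth: np.ndarray, gt_depth: np.ndarray) -> float:
    # Average relative depth error |d_pred - d_gt| / d_gt over valid pixels.
    # Hypothetical helper: the paper does not publish its evaluation code,
    # so ignoring pixels without ground truth is an assumed convention.
    valid = gt_depth > 0
    rel_err = np.abs(pred_depth[valid] - gt_depth[valid]) / gt_depth[valid]
    return float(rel_err.mean())

# Toy usage with random maps standing in for a single-camera depth estimate.
rng = np.random.default_rng(0)
gt = rng.uniform(0.5, 4.0, size=(120, 160))            # metres (synthetic)
pred = gt * (1.0 + rng.normal(0.0, 0.08, gt.shape))    # ~8% multiplicative noise
print(f"mean relative error: {mean_relative_error(pred, gt):.3f}")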

12 pages, 5829 KiB  
Article
A Deep Learning Model for Detecting Cage-Free Hens on the Litter Floor
by Xiao Yang, Lilong Chai, Ramesh Bahadur Bist, Sachin Subedi and Zihao Wu
Animals 2022, 12(15), 1983; https://doi.org/10.3390/ani12151983 - 5 Aug 2022
Cited by 62 | Viewed by 7225
Abstract
Real-time, automatic detection of chickens (e.g., laying hens and broilers) is the cornerstone of precision poultry farming based on image recognition. However, such identification becomes more challenging under cage-free conditions compared to caged hens. In this study, we developed a deep learning model (YOLOv5x-hens) based on YOLOv5, an advanced convolutional neural network (CNN), to monitor hens' behaviors in cage-free facilities. More than 1000 images were used to train the model and an additional 200 images were used to test it. One-way ANOVA and Tukey HSD analyses were conducted in JMP software (JMP Pro 16 for Mac, SAS Institute, Cary, North Carolina) to determine whether there were significant differences between the predicted and actual numbers of hens under various conditions (i.e., age, light intensity, and observation angle), with differences considered significant at p < 0.05. Our results show that the evaluation metrics (precision, recall, F1, and mAP@0.5) of the YOLOv5x-hens model were 0.96, 0.96, 0.96, and 0.95, respectively, in detecting hens on the litter floor. The newly developed YOLOv5x-hens showed stable performance in detecting birds under different lighting intensities, angles, and ages over 8 weeks (i.e., birds were 8–16 weeks old); for instance, the model achieved 95% accuracy once the birds were 8 weeks old. However, younger chicks, such as one-week-old birds, were harder to track (e.g., only 25% accuracy) because of interference from equipment such as feeders, drinking lines, and perches. Further data analysis showed that the model performed efficiently in real-time detection with an overall accuracy of more than 95%, a key step toward tracking individual birds for evaluation of production and welfare. The current version of the model still has limitations: detection errors arose from heavily overlapping birds, uneven light intensity, and images occluded by equipment (i.e., drinking lines and feeders). Future research is needed to address these issues for higher detection accuracy. This study established a novel CNN-based deep learning model for hen detection in research cage-free facilities, providing a technical basis for a machine vision system that tracks individual birds to evaluate their behaviors and welfare status in commercial cage-free houses.
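The ANOVA analysis described above compares predicted hen counts against manual counts per image. The trained YOLOv5x-hens weights are not distributed with this listing, so the following is a minimal sketch, assuming a custom-trained YOLOv5 checkpoint is available locally, of how per-image detection counts could be obtained with the standard ultralytics/yolov5 torch.hub interface. The checkpoint path, image file names, and confidence threshold are placeholders, not the authors' published settings.

import torch

# Hypothetical path to a custom-trained YOLOv5x checkpoint (not published here).
WEIGHTS = "yolov5x_hens.pt"

# Load a custom YOLOv5 model through the ultralytics/yolov5 torch.hub entry point.
model = torch.hub.load("ultralytics/yolov5", "custom", path=WEIGHTS)
model.conf = 0.5  # confidence threshold (assumed value)

# Run inference on a batch of images and count detections per image.
images = ["pen_cam_001.jpg", "pen_cam_002.jpg"]  # placeholder file names
results = model(images)

for path, det in zip(images, results.xyxy):  # one tensor of boxes per image
    print(f"{path}: {det.shape[0]} hens detected")

The printed counts could then be compared against manual counts per condition (age, light intensity, viewing angle), mirroring the significance testing reported in the abstract.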