Open Access Article

Deep Learning and Machine Vision Approaches for Posture Detection of Individual Pigs

1 Department of Agricultural and Biosystems Engineering, University of Kassel, 37213 Witzenhausen, Germany
2 School of Natural and Environmental Sciences, Newcastle University, Newcastle upon Tyne NE1 7RU, UK
3 Department of Biosystems and Technology, Swedish University of Agricultural Sciences, 23053 Alnarp, Sweden
4 Department of Animal Husbandry, Thuringian State Institute for Agriculture and Rural Development, 07743 Jena, Germany
* Author to whom correspondence should be addressed.
Sensors 2019, 19(17), 3738; https://doi.org/10.3390/s19173738
Received: 22 July 2019 / Revised: 15 August 2019 / Accepted: 28 August 2019 / Published: 29 August 2019
(This article belongs to the Special Issue Smart Sensing Technologies for Agriculture)
Posture detection aimed at monitoring the health and welfare of pigs has been of great interest to researchers from different disciplines. Existing machine vision studies mostly rely on three-dimensional imaging systems, or on two-dimensional systems limited to monitoring under controlled conditions. The main goal of this study was therefore to determine whether a two-dimensional imaging system, combined with deep learning approaches, can detect the standing and lying (belly and side) postures of pigs under commercial farm conditions. Three deep learning-based detectors, faster regions with convolutional neural network features (Faster R-CNN), single shot multibox detector (SSD) and region-based fully convolutional network (R-FCN), combined with Inception V2, Residual Network (ResNet) and Inception ResNet V2 feature extractors applied to RGB images, were proposed. Data from different commercial farms were used to train and validate the proposed models. The experimental results demonstrated that the R-FCN ResNet101 method detected lying and standing postures with the highest average precision (AP): 0.93, 0.95 and 0.92 for standing, lying on side and lying on belly, respectively, and a mean average precision (mAP) of more than 0.93.
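The AP and mAP figures quoted above follow the standard object-detection evaluation protocol: detections for each posture class are ranked by confidence, matched to ground-truth boxes at an IoU threshold, and precision is integrated over recall; mAP is the mean of the per-class APs. The following is a minimal, self-contained sketch of that per-class computation (all-point interpolation, IoU ≥ 0.5); the function names and data layout are illustrative and not taken from the paper's implementation.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def average_precision(gt_boxes, detections, iou_thr=0.5):
    """AP for one class, e.g. 'lying on side'.

    gt_boxes:   {image_id: [box, ...]} ground-truth boxes
    detections: [(image_id, score, box), ...] predicted boxes
    """
    n_gt = sum(len(v) for v in gt_boxes.values())
    matched = {img: [False] * len(b) for img, b in gt_boxes.items()}
    # Rank detections by confidence, then mark each as TP or FP.
    tps, fps = [], []
    for img, score, box in sorted(detections, key=lambda d: d[1], reverse=True):
        best, best_i = 0.0, -1
        for i, g in enumerate(gt_boxes.get(img, [])):
            o = iou(box, g)
            if o > best:
                best, best_i = o, i
        if best >= iou_thr and not matched[img][best_i]:
            matched[img][best_i] = True   # first match of this ground truth: TP
            tps.append(1); fps.append(0)
        else:
            tps.append(0); fps.append(1)  # duplicate or unmatched: FP
    # Integrate precision over recall (all-point interpolation).
    ap, cum_tp, cum_fp, prev_recall = 0.0, 0, 0, 0.0
    for t, f in zip(tps, fps):
        cum_tp += t; cum_fp += f
        recall = cum_tp / n_gt
        precision = cum_tp / (cum_tp + cum_fp)
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap
```

mAP is then simply the mean of `average_precision` over the three posture classes (standing, lying on side, lying on belly).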
Keywords: convolutional neural networks; livestock; lying posture; standing posture
MDPI and ACS Style

Nasirahmadi, A.; Sturm, B.; Edwards, S.; Jeppsson, K.-H.; Olsson, A.-C.; Müller, S.; Hensel, O. Deep Learning and Machine Vision Approaches for Posture Detection of Individual Pigs. Sensors 2019, 19, 3738.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
