FieldSAFE: Dataset for Obstacle Detection in Agriculture
Abstract
1. Introduction
2. Sensor Setup
3. Dataset
4. Ground Truth
5. Summary and Future Work
Acknowledgments
Author Contributions
Conflicts of Interest
References
1. Abidine, A.Z.; Heidman, B.C.; Upadhyaya, S.K.; Hills, D.J. Autoguidance system operated at high speed causes almost no tomato damage. Calif. Agric. 2004, 58, 44–47.
2. Case IH. Case IH Autonomous Concept Vehicle, 2016. Available online: http://www.caseih.com/apac/en-in/news/pages/2016-case-ih-premieres-concept-vehicle-at-farm-progress-show.aspx (accessed on 9 August 2017).
3. ASI. Autonomous Solutions, 2016. Available online: https://www.asirobots.com/farming/ (accessed on 9 August 2017).
4. Kubota, 2017. Available online: http://www.kubota-global.net/news/2017/20170125.html (accessed on 16 August 2017).
5. Ollis, M.; Stentz, A. Vision-based perception for an automated harvester. In Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robot and Systems, Innovative Robotics for Real-World Applications (IROS ’97), Grenoble, France, 11 September 1997; Volume 3, pp. 1838–1844.
6. Stentz, A.; Dima, C.; Wellington, C.; Herman, H.; Stager, D. A system for semi-autonomous tractor operations. Auton. Robots 2002, 13, 87–104.
7. Wellington, C.; Courville, A.; Stentz, A.T. Interacting Markov random fields for simultaneous terrain modeling and obstacle detection. In Proceedings of Robotics: Science and Systems, Cambridge, MA, USA, 8–11 June 2005; Volume 17, pp. 251–260.
8. Griepentrog, H.W.; Andersen, N.A.; Andersen, J.C.; Blanke, M.; Heinemann, O.; Madsen, T.E.; Nielsen, J.; Pedersen, S.M.; Ravn, O.; Wulfsohn, D. Safe and reliable: Further development of a field robot. Precis. Agric. 2009, 9, 857–866.
9. Moorehead, S.S.J.; Wellington, C.K.C.; Gilmore, B.J.; Vallespi, C. Automating orchards: A system of autonomous tractors for orchard maintenance. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Workshop on Agricultural Robotics, Vilamoura, Portugal, 7–12 October 2012; p. 632.
10. Reina, G.; Milella, A. Towards autonomous agriculture: Automatic ground detection using trinocular stereovision. Sensors 2012, 12, 12405–12423.
11. Emmi, L.; Gonzalez-De-Soto, M.; Pajares, G.; Gonzalez-De-Santos, P. New trends in robotics for agriculture: Integration and assessment of a real fleet of robots. Sci. World J. 2014, 2014.
12. Ross, P.; English, A.; Ball, D.; Upcroft, B.; Corke, P. Online novelty-based visual obstacle detection for field robotics. In Proceedings of the IEEE International Conference on Robotics and Automation, Seattle, WA, USA, 26–30 May 2015; pp. 3935–3940.
13. Ball, D.; Upcroft, B.; Wyeth, G.; Corke, P.; English, A.; Ross, P.; Patten, T.; Fitch, R.; Sukkarieh, S.; Bate, A. Vision-based obstacle detection and navigation for an agricultural robot. J. Field Robot. 2016, 33, 1107–1130.
14. Reina, G.; Milella, A.; Rouveure, R.; Nielsen, M.; Worst, R.; Blas, M.R. Ambient awareness for agricultural robotic vehicles. Biosyst. Eng. 2016, 146, 114–132.
15. Didi. Didi Data Release #2—Round 1 Test Sequence and Training. Available online: http://academictorrents.com/details/18d7f6be647eb6d581f5ff61819a11b9c21769c7 (accessed on 8 November 2017).
16. Udacity. Udacity Didi Challenge—Round 2 Dataset. Available online: http://academictorrents.com/details/67528e562da46e93cbabb8a255c9a8989be3448e (accessed on 8 November 2017).
17. Udacity, Didi. Udacity Didi $100k Challenge Dataset 1. Available online: http://academictorrents.com/details/76352487923a31d47a6029ddebf40d9265e770b5 (accessed on 8 November 2017).
18. DIPLECS. DIPLECS Autonomous Driving Datasets, 2015. Available online: http://ercoftac.mech.surrey.ac.uk/data/diplecs/ (accessed on 31 August 2017).
19. Koschorrek, P.; Piccini, T.; Öberg, P.; Felsberg, M.; Nielsen, L.; Mester, R. A multi-sensor traffic scene dataset with omnidirectional video. In Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Portland, OR, USA, 23–28 June 2013.
20. Maddern, W.; Pascoe, G.; Linegar, C.; Newman, P. 1 Year, 1000 km: The Oxford RobotCar Dataset. Int. J. Robot. Res. 2017, 36, 3–15.
21. InSight. InSight SHRP2, 2017. Available online: https://insight.shrp2nds.us/ (accessed on 31 August 2017).
22. Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The KITTI dataset. Int. J. Robot. Res. 2013, 32, 1231–1237.
23. Matzen, K.; Snavely, N. NYC3DCars: A dataset of 3D vehicles in geographic context. In Proceedings of the International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013.
24. Caraffi, C.; Vojir, T.; Trefny, J.; Sochman, J.; Matas, J. A system for real-time detection and tracking of vehicles from a single car-mounted camera. In Proceedings of the 2012 15th International IEEE Conference on Intelligent Transportation Systems (ITSC), Anchorage, AK, USA, 16–19 September 2012; pp. 975–982.
25. Ros, G.; Sellart, L.; Materzynska, J.; Vazquez, D.; Lopez, A. The SYNTHIA dataset: A large collection of synthetic images for semantic segmentation of urban scenes. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016.
26. Gaidon, A.; Wang, Q.; Cabon, Y.; Vig, E. Virtual worlds as proxy for multi-object tracking analysis. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016.
27. Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; Schiele, B. The Cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016.
28. Neuhold, G.; Ollmann, T.; Rota Bulò, S.; Kontschieder, P. The Mapillary Vistas dataset for semantic understanding of street scenes. In Proceedings of the International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017.
29. Peynot, T.; Scheding, S.; Terho, S. The Marulan Data Sets: Multi-sensor perception in natural environment with challenging conditions. Int. J. Robot. Res. 2010, 29, 1602–1607.
30. Pezzementi, Z.; Tabor, T.; Hu, P.; Chang, J.K.; Ramanan, D.; Wellington, C.; Babu, B.P.W.; Herman, H. Comparing apples and oranges: Off-road pedestrian detection on the NREC agricultural person-detection dataset. arXiv 2017, arXiv:1707.07169.
31. Quigley, M.; Conley, K.; Gerkey, B.P.; Faust, J.; Foote, T.; Leibs, J.; Wheeler, R.; Ng, A.Y. ROS: An open-source Robot Operating System. In Proceedings of the ICRA Workshop on Open Source Software, Kobe, Japan, 17 May 2009.
32. Moore, T.; Stouch, D. A generalized extended Kalman filter implementation for the Robot Operating System. In Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2016.
33. Lütkebohle, I. Determinism in Robotics Software. Presented at ROSCon 2017. Available online: https://roscon.ros.org/2017/presentations/ROSCon%202017%20Determinism%20in%20ROS.pdf (accessed on 31 October 2017).
34. Christiansen, P.; Kragh, M.; Steen, K.A.; Karstoft, H.; Jørgensen, R.N. Platform for evaluating sensors and human detection in autonomous mowing operations. Precis. Agric. 2017, 18, 350–365.
35. Pix4D. 2014. Available online: http://pix4d.com/ (accessed on 5 September 2017).
36. Vondrick, C.; Patterson, D.; Ramanan, D. Efficiently scaling up crowdsourced video annotation. Int. J. Comput. Vis. 2013, 101, 184–204.
| Dataset | Environment | Size | Localization | Sensors | Obstacles | Annotations |
|---|---|---|---|---|---|---|
| KITTI [22] | urban | 6 h | ✓ | stereo camera, LiDAR | cars, trucks, trams, pedestrians, cyclists | 2D + 3D bounding boxes |
| Oxford [20] | urban | 1000 km | ✓ | stereo camera, LiDARs, color cameras | cars, trucks, pedestrians, cyclists | none |
| Marulan [29] | rural | 2 h | ✓ | lasers, radar, color camera, infra-red camera | humans, box, poles, bricks, vegetation | none |
| NREC [30] | orchards | 8 h | ✓ | stereo camera | humans, vegetation | bounding boxes (only humans) |
| FieldSAFE (ours) | grass field | 2 h | ✓ | stereo camera, web camera, thermal camera, 360° camera, LiDAR, radar | humans, mannequins, rocks, barrels, buildings, vehicles, vegetation | GPS position and labels |
| Sensor | Model | Resolution | FOV (H × V) | Range | Acquisition Rate |
|---|---|---|---|---|---|
| Stereo camera | Multisense S21 CMV2000 | 1024 × 544 | 85° × 50° | 1.5–50 m | 10 fps |
| Web camera | Logitech HD Pro C920 | 1920 × 1080 | 70° × 43° | - | 20 fps |
| 360° camera | Giroptic 360cam | 2048 × 833 | 360° × 292° | - | 30 fps |
| Thermal camera | Flir A65, 13 mm lens | 640 × 512 | 45° × 37° | - | 30 fps |
| LiDAR | Velodyne HDL-32E | 2172 × 32 | 360° × 40° | 1–100 m | 10 fps |
| Radar | Delphi ESR | 16 targets/frame | 90° × 4.2° | 0–60 m | 20 fps |
| | | 16 targets/frame | 20° × 4.2° | 0–174 m | 20 fps |
| Sensor | Model | Description | Acquisition Rate |
|---|---|---|---|
| GPS | Trimble BD982 GNSS | Dual-antenna RTK GNSS system. Measures position and horizontal heading of the platform. | 20 Hz |
| IMU | Vectornav VN-100 | Measures acceleration, angular velocity, magnetic field and barometric pressure. | 50 Hz |
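Because the GNSS receiver is dual-antenna, it reports heading directly, so a 2D platform pose is available from a single fix without odometry. The sketch below is a hypothetical illustration (not part of the dataset's tooling) of converting a latitude/longitude fix plus heading into a local east/north pose, using a flat-earth approximation that is adequate over a single field:

```python
import math

A = 6378137.0  # WGS-84 semi-major axis [m]

def latlon_to_local(lat, lon, ref_lat, ref_lon):
    """Approximate east/north offsets [m] from a reference point using an
    equirectangular (flat-earth) approximation, valid over short distances."""
    ref_lat_r = math.radians(ref_lat)
    east = math.radians(lon - ref_lon) * math.cos(ref_lat_r) * A
    north = math.radians(lat - ref_lat) * A
    return east, north

def platform_pose(lat, lon, heading_deg, ref_lat, ref_lon):
    """2D pose (east [m], north [m], yaw [rad]) of the platform.
    The dual-antenna GNSS supplies heading_deg directly."""
    east, north = latlon_to_local(lat, lon, ref_lat, ref_lon)
    return east, north, math.radians(heading_deg)
```

For full 6-DoF localization, the GNSS fix would instead be fused with the IMU measurements in an extended Kalman filter such as the ROS implementation of [32].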
© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Kragh, M.F.; Christiansen, P.; Laursen, M.S.; Larsen, M.; Steen, K.A.; Green, O.; Karstoft, H.; Jørgensen, R.N. FieldSAFE: Dataset for Obstacle Detection in Agriculture. Sensors 2017, 17, 2579. https://doi.org/10.3390/s17112579