Open Access Article
Sensors 2016, 16(8), 1222; doi:10.3390/s16081222

DeepFruits: A Fruit Detection System Using Deep Neural Networks

Science and Engineering Faculty, Queensland University of Technology, Brisbane 4000, Australia
* Author to whom correspondence should be addressed.
Academic Editors: Gabriel Oliver-Codina, Nuno Gracias and Antonio M. López
Received: 19 May 2016 / Revised: 25 July 2016 / Accepted: 26 July 2016 / Published: 3 August 2016
(This article belongs to the Special Issue Vision-Based Sensors in Field Robotics)

Abstract

This paper presents a novel approach to fruit detection using deep convolutional neural networks. The aim is to build an accurate, fast and reliable fruit detection system, which is a vital element of an autonomous agricultural robotic platform; it is a key element for fruit yield estimation and automated harvesting. Recent work in deep neural networks has led to the development of a state-of-the-art object detector termed Faster Region-based CNN (Faster R-CNN). We adapt this model, through transfer learning, for the task of fruit detection using imagery obtained from two modalities: colour (RGB) and Near-Infrared (NIR). Early and late fusion methods are explored for combining the multi-modal (RGB and NIR) information. This leads to a novel multi-modal Faster R-CNN model, which achieves state-of-the-art results compared to prior work: the F1 score, which takes into account both precision and recall, improves from 0.807 to 0.838 for the detection of sweet pepper. In addition to improved accuracy, this approach is also much quicker to deploy for new fruits, as it requires bounding box annotation rather than pixel-level annotation (annotating bounding boxes is approximately an order of magnitude quicker to perform). The model is retrained to perform the detection of seven fruits, with the entire process taking four hours to annotate and train the new model per fruit.
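The late-fusion strategy mentioned in the abstract (running separate RGB and NIR detectors and combining their outputs afterwards) can be sketched as follows. This is a minimal illustration only: the box format, the IoU matching threshold, and the score-averaging rule are assumptions for the sketch, not the paper's exact fusion procedure.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def late_fusion(rgb_dets, nir_dets, iou_thresh=0.5):
    """Combine detections from two modalities after independent inference.

    Each detection is a (box, score) pair. Detections from the two
    modalities that overlap above iou_thresh are merged by averaging
    their scores; unmatched detections from either modality are kept.
    """
    fused, used = [], set()
    for box_r, score_r in rgb_dets:
        match = None
        for j, (box_n, _) in enumerate(nir_dets):
            if j not in used and iou(box_r, box_n) >= iou_thresh:
                match = j
                break
        if match is not None:
            used.add(match)
            _, score_n = nir_dets[match]
            fused.append((box_r, (score_r + score_n) / 2))
        else:
            fused.append((box_r, score_r))
    # Keep NIR detections that had no RGB counterpart.
    fused.extend(d for j, d in enumerate(nir_dets) if j not in used)
    return fused

# Example: one overlapping pair (scores averaged) and one RGB-only box.
rgb = [((10, 10, 50, 50), 0.9), ((100, 100, 140, 140), 0.6)]
nir = [((12, 11, 52, 49), 0.7)]
fused = late_fusion(rgb, nir)
```

Early fusion, by contrast, would stack the RGB and NIR channels into a single multi-channel input before the network runs, so only one detector is needed.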
Keywords: visual fruit detection; deep convolutional neural network; multi-modal; rapid training; real-time performance; harvesting robots; horticulture; agricultural robotics

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Share & Cite This Article

MDPI and ACS Style

Sa, I.; Ge, Z.; Dayoub, F.; Upcroft, B.; Perez, T.; McCool, C. DeepFruits: A Fruit Detection System Using Deep Neural Networks. Sensors 2016, 16, 1222.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Sensors EISSN 1424-8220, published by MDPI AG, Basel, Switzerland.