Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion
Abstract: Automatic recognition of mature fruits in a complex agricultural environment is still a challenge for an autonomous harvesting robot due to the various disturbances present in the image background. The bottleneck to robust fruit recognition is reducing the influence of the two main disturbances: illumination and overlapping. In order to recognize tomatoes in the tree canopy using a low-cost camera, a robust tomato recognition algorithm based on multiple feature images and image fusion was studied in this paper. Firstly, two feature images, the a*-component image and the I-component image, were extracted from the L*a*b* color space and the luminance, in-phase, quadrature-phase (YIQ) color space, respectively. Secondly, wavelet transformation was adopted to fuse the two feature images at the pixel level, combining the feature information of the two source images. Thirdly, in order to segment the target tomato from the background, an adaptive threshold algorithm was used to obtain the optimal threshold. The final segmentation result was processed by a morphological operation to remove a small amount of noise. In the detection tests, 93% of the 200 target tomato samples were correctly recognized. This indicates that the proposed tomato recognition method is suitable for robotic tomato harvesting in uncontrolled environments at low cost.
Cite This Article
Zhao, Y.; Gong, L.; Huang, Y.; Liu, C. Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion. Sensors 2016, 16, 173.
AMA Style: Zhao Y, Gong L, Huang Y, Liu C. Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion. Sensors. 2016; 16(2):173.
Chicago/Turabian Style: Zhao, Yuanshen; Gong, Liang; Huang, Yixiang; Liu, Chengliang. 2016. "Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion." Sensors 16, no. 2: 173.
Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.