Peer-Review Record

Development and Evaluation of a Watermelon-Harvesting Robot Prototype: Vision System and End-Effector

Agronomy 2022, 12(11), 2836; https://doi.org/10.3390/agronomy12112836
by Jiacheng Rong 1, Jun Fu 1, Zhiqin Zhang 1, Jinliang Yin 1, Yuzhi Tan 1, Ting Yuan 1,* and Pengbo Wang 2,*
Submission received: 17 October 2022 / Revised: 1 November 2022 / Accepted: 10 November 2022 / Published: 13 November 2022

Round 1

Reviewer 1 Report

The authors present an interesting and very important topic in robot harvesting. They approach the problem in a very methodical manner from an engineering perspective. However, the novelty is not clear. The authors address detection and manipulation of the watermelon. From a manipulation point of view, the gripper is a lab working model that, as shown, cannot be extended to industry. Also, the gripper is manipulated by a mobile manipulator, and positioning errors are not evaluated: the gripper is always well positioned and well oriented.

From a detection point of view, the authors use previously published techniques, so there is no novelty there. However, the idea is interesting, and the authors need to develop it further by addressing the following:

- What are the restrictions for the manipulation? What would happen if the gripper is not well placed?

- How sensitive is the system to environmental conditions, including lighting?

- How accurate is the detection? Is there any ground truth to validate the work?

- References should cover worldwide research (that is mandatory for a WoS journal publication). I invite the authors to check the following, especially about detection:

DOI: 10.1016/j.compag.2020.105348
DOI: 10.1016/j.compag.2019.105121
DOI: 10.1016/j.biosystemseng.2019.08.017

For machinery harvesting, and to improve the problem statement:
DOI: 10.1016/j.compag.2021.106103
DOI: 10.1016/j.compag.2020.105757

Author Response

Dear Reviewer:

Thank you very much for taking time out of your busy schedule to review our article. Your comments are invaluable to us and will significantly improve its quality. Here are our responses to your questions (Question & Answer).

Q1. What are the restrictions for the manipulation? What would happen if the gripper is not well placed?

A1. We consider the impact of positioning errors on the grasping success rate in Section 3.3.2. Positioning errors can limit the gripping of the end-effector, as discussed in the experiments.

We have added a description of this in Section 3.4: “Finally, the end-effector we designed works on small watermelons (see Table 1); larger sizes are difficult to clamp. Also, the cutting knife tends to cut the vines when the front of the watermelon is obscured by them. Therefore, the design of the end-effector should be more dexterous, and some safety considerations must be made.”

If the gripper is not well placed, the watermelon will not fall into the right place on the end-effector and the fruit will not be clamped correctly. This will require future optimisation of the shape of the flexible clamping jaws, which we mention in the discussion. In addition, in real scenarios the cutting knife may cut into the main stem, so the knife needs to be designed inside the gripper. This was inspired by the work of Bear et al., as described in the discussion.

Q2. How sensitive is the system to environmental conditions, including lighting?

A2. Our vision system works during the day and identifies watermelons accurately on both sunny and cloudy days. Our image data were collected under different weather conditions, so recognition stability under different lighting conditions has been taken into account. The vision system does not currently support working at night, as we have not added auxiliary lighting. In Section 3.2.1, we have added this information: “As this watermelon harvesting robot has no auxiliary lighting added, no image acquisition at night was carried out.”

Q3. How accurate is the detection? Is there any ground truth to validate the work?

A3. The performance of the detectors on the watermelon test dataset is shown in Table 2. The improved YOLOv5s-CBAM has a precision of 89.8%, a recall of 94.9%, and an mAP of 96.3%. These metrics were evaluated against ground-truth annotations on the watermelon test dataset, which was photographed in production greenhouses. The images were collected at an agricultural site in Daxing District, Beijing, and at an agricultural site in Kunshan City, Jiangsu Province (as mentioned in Section 3.2.1).
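
For clarity, the following minimal sketch (ours, for illustration only, not the authors' evaluation script) shows how precision and recall are derived from detection counts; the IoU threshold and the example counts below are assumed values, and mAP additionally averages precision over recall levels (and over classes, of which there is only one here).

    # Illustrative sketch, not the authors' evaluation code.
    # A detection counts as a true positive when it matches an unused
    # ground-truth box with IoU above a threshold (0.5 assumed here).
    def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
        precision = tp / (tp + fp) if tp + fp else 0.0  # correct / all detections
        recall = tp / (tp + fn) if tp + fn else 0.0     # correct / all ground truth
        return precision, recall

    # Hypothetical counts chosen to roughly reproduce the reported figures:
    p, r = precision_recall(94, 11, 5)  # p ~ 0.895, r ~ 0.949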

Q4. References should cover worldwide research (that is mandatory for a WoS journal publication). I invite the authors to check the following.

A4. Thank you for your advice. We now cite some of the works you mentioned, along with other related works.

Author Response File: Author Response.docx

Reviewer 2 Report

The paper presents a prototype of a robot for harvesting watermelons in greenhouses. The topic of the work is very interesting. Deep learning models have been used in research to automate agricultural tasks, but there is a lack of integration of such models on edge devices and of their use on agricultural robots in real applications. This paper investigates this important issue. The paper still needs some modifications to be published as a research paper. Here are my suggestions:



1) In the introduction, some sentences are mentioned without sources, for example Lines 28-29 (FAO), 40-41, and 46-48.

2) Reference numbers should be after the name, for example Reed et al. [4].

3) Line 76, which network? Is it computer vision or machine learning?

4) Line 78, provide a reference for the Octrees algorithm and the Hough transform.

5) Line 95, if there are a few, you need to reference them and mention what the difference is between your approach and theirs.

6) Missing contribution in the paper

7) Figure 2 adds no information to the paper and is not necessary in my opinion

8) Line 120, what do you mean by L600? Is it the type of watermelon or is it the number of samples?

9) Line 190, provide a reference for the SolidWorks software

10) Line 204, what does TPU stand for?

11) Lines 271-272 need a reference.

12) Figure 9 needs to be improved. It is confusing and there is no (a) in the figure. Separate the parts of the picture.

13) I suggest attaching to the paper a link to a video showing how the robot picks the watermelon.

Author Response

Dear Reviewer:

Thank you very much for taking time out of your busy schedule to review our article. Your comments are invaluable to us and will significantly improve its quality. Here are our responses to your questions (Question & Answer).

Q1. In the introduction, some sentences are mentioned without sources. For example: Lines 28-29 (FAO), 40-41, 46-48.

A1. Your comments have helped us improve the details of the article; we have added the missing citations, along with some literature covering global research.

Q2. Reference numbers should be after the name, for example Reed et al. [4].

A2. Thank you for the heads-up; we had not noticed this before. We have corrected the issue.

Q3. Line 76, which network? Is it computer vision or machine learning?

A3. Thank you for your careful inspection. We found that we had not described this related work in enough detail, and we have fixed this. The new sentence is “First, a deep-learning model that includes both detection and segmentation is presented for the recognition of apples in RGB images.”

Q4. Line 78, provide a reference for the Octrees algorithm and the Hough transform.

A4. Line 78 describes the method presented in Kang's paper; we did not use this method in our own work. The reference has been added: “[18] Hornung, A.; Wurm, K.M.; Bennewitz, M.; Stachniss, C.; Burgard, W. OctoMap: An efficient probabilistic 3D mapping framework based on octrees. Autonomous Robots 2013, 34, 189-206, doi:10.1007/s10514-012-9321-0.”

Q5. Line 95, if there are a few, you need to reference them and mention what the difference is between your approach and theirs.

A5. Thanks for the reminder. We have referenced two related works (references 25 and 26): “Watermelon harvesting experiment of a heavy material handling agricultural robot with LQ control” and “Precise control of clamping force for watermelon picking end-effector”.

Q6. Missing contribution in the paper.

A6. The author contributions were stated as follows. Author Contributions: Conceptualization, Ting Yuan; Funding acquisition, Ting Yuan; Methodology, Ting Yuan, Jiacheng Rong, Jun Fu and Pengbo Wang; Image Annotation, Jiacheng Rong and Zhiqin Zhang; Software, Jiacheng Rong and Jun Fu; Validation, Jiacheng Rong, Jun Fu, and Jinliang Yin; Writing—original draft, Jiacheng Rong, Yuzhi Tan and Jun Fu; Writing—review and editing, Jiacheng Rong, Ting Yuan and Pengbo Wang. All authors have read and agreed to the published version of the manuscript.

Q7. Figure 2 adds no information to the paper and is not necessary in my opinion.

A7. Yes, you are right. It was only used to show how the parameters were measured, which is a simple process, so we have deleted it.

Q8. Line 120, what do you mean by L600? Is it the type of watermelon or is it the number of samples?

A8. L600 and 8424 are two different varieties of watermelon. We introduce this information in Section 2.1: “Before designing the robot end-effector, each physical parameter of the grasped object (the watermelon) needed to be fully investigated. We randomly sampled L600 watermelons grown on an agricultural site in Daxing District, Beijing, and 8424 watermelons grown on an agricultural site in Kunshan City, Jiangsu Province….”

Q9. Line 190, provide a reference for the SolidWorks software.

A9. We have labelled the company information after the software name: “the SolidWorks software (Dassault Systems SolidWorks Corporation, Canada)”.

Q10. Line 204, what does TPU stand for?

A10. We are sorry this was not made clear. We have added the expansion in the paper: “Thermoplastic polyurethanes (TPU) material”.

Q11. Lines 271-272 need a reference.

A11. Thank you for the reminder. We have added one.

Q12. Figure 9 needs to be improved. It is confusing and there is no (a) in the figure. Separate the parts of the picture.

A12. In the original Figure 9 we did label panel (a); however, the black lettering had low contrast against the background and was difficult to see. We have fixed this flaw (see the new Figure 8).

Q13. I suggest attaching to the paper a link to a video showing how the robot picks the watermelon.

A13. We remember uploading the attachment; it may not have uploaded successfully. We have re-uploaded the video to Baidu Cloud. You can download it from the following link.

web address: https://pan.baidu.com/s/13tc1FQ99aq5nKhzZRdRlQg

Password: szbt

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

The authors answered most of my concerns. However, the reference section must be improved; as stated in my previous report, it is too biased.

Reviewer 2 Report

The authors addressed all my concerns and I believe the paper is ready to be published.
