Article
Peer-Review Record

Using YOLOv3 Algorithm with Pre- and Post-Processing for Apple Detection in Fruit-Harvesting Robot

Agronomy 2020, 10(7), 1016; https://doi.org/10.3390/agronomy10071016
by Anna Kuznetsova, Tatiana Maleva and Vladimir Soloviev *
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 19 June 2020 / Revised: 29 June 2020 / Accepted: 7 July 2020 / Published: 14 July 2020
(This article belongs to the Special Issue Precision Agriculture for Sustainability)

Round 1

Reviewer 1 Report

Dear authors,

This reviewer appreciates the efforts taken in restructuring the manuscript and adding additional sections to complete the work.

There are still two minor things that, in this reviewer's opinion, should be addressed.

In line 363, somehow the interpretation of the result could be misleading. If I have correctly understood the results, 90.9% is the FNR. This means that fruits have been detected where they should not have. So the sentence "90.9% of fruits were not detected" is misleading. I think that "90.9% of fruits were incorrectly detected" could be more appropriate, or something similar.

In line 610, I believe the sentence "There were also no splits" is more appropriate than "There also were ...".

Regards,

Author Response

Thank you very much for your attentiveness and friendly comments.

Point 1. In line 363, somehow the interpretation of the result could be misleading. If I have correctly understood the results, 90.9% is the FNR. This means that fruits have been detected where they should not have. So the sentence "90.9% of fruits were not detected" is misleading. I think that "90.9% of fruits were incorrectly detected" could be more appropriate, or something similar.

Response. There was an error in the formula for FNR [which is not FP/(TP + FN) but FN/(TP + FN); line 261 was corrected]. An FNR of 90.9% means that the algorithm did not recognize 90.9% of the apples in the images.
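For illustration, a minimal sketch of how these rates follow from the raw detection counts (the function and the example numbers below are illustrative only, and FPR is assumed here to mean the share of reported boxes that are not apples):

```python
def detection_rates(tp, fp, fn):
    """False negative and false positive rates from raw detection counts."""
    fnr = fn / (tp + fn)  # share of labeled apples the detector missed
    fpr = fp / (tp + fp)  # share of reported boxes that were not apples (assumed definition)
    return fnr, fpr

# e.g., 10 apples found, 2 spurious boxes, 100 apples missed -> FNR ~ 0.909, FPR ~ 0.167
print(detection_rates(10, 2, 100))
```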

Point 2. In line 610, I believe the sentence "There were also no splits" is more appropriate than "There also were ...".

Response. The sentence was changed (line 399).

Best wishes,

Vladimir

Reviewer 2 Report

I'd like to thank the authors for addressing my concerns. All my concerns were addressed and I have no further comments.

Author Response

Dear Anonymous Reviewer,

Thank you very much for your attentiveness and friendly comments.

Best wishes,

Vladimir

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

The abstract and the introduction of the present document definitely catch the attention of the reader, as it appears to be a very attractive experiment with promising results. However, the content of the paper after that point is full of inconsistencies, and the way it has been structured and presented leads me to suggest that the authors completely rework both the document and some of the experiments.

First of all, after reading through the entire document, it is not clear what the added value of the presented work is. A standard YOLO-v3 algorithm has been used to detect apples, and then some minor pre- and post-processing techniques have been applied to improve the results. But those processing techniques, which are the only apparent novelty of the work, are not elaborated; they are simply stated in a few lines.

Second, the authors claim to have developed a harvesting robot based on two stationary Sony cameras. Nonetheless, in lines 223 and 224 it is mentioned that the standard YOLO-v3 algorithm used in this work has been trained on the COCO dataset images. In this reviewer's opinion, before presenting any results, the document should clearly introduce all datasets that have been used, with the important details mentioned, such as the total number of images available from each dataset, how many have been used to train the algorithm, and how many were used for testing. This should all be stated in some sort of table right before presenting the results. Some other information, such as the number of apples available per image (on average) and/or in the entire dataset, could also be interesting to know. As one of the appeals of the work carried out is the fact that it is going to be used in an actual robot, this should be appropriately emphasized in the document. For this reviewer, out of all the images presented in the document, it is still not clear which ones were taken with the Sony cameras available in the robot.

Third, another section of the document should describe how the algorithm has been parameterized. For instance, any deep learning approach uses an optimization algorithm to converge to a solution. This has not been mentioned, nor have the parameters of such an algorithm (lambda, ...). In the particular case of YOLO, another parameter is the number of windows the image is divided into. In line 288, it is stated that images have been divided into 9 regions, but it is not mentioned whether other configurations were used and whether the results obtained were better or worse.

Finally, the presentation of the results has been done poorly. A single table with some FNR and FPR values does not let the reader get an intuition of how well the algorithm performs. Absolute values are missing, such as how many apples in total were correctly detected and how many were not detected. The Intersection over Union has not been included either.

 

Overall, a very promising application, though one that still needs a complete rework.

Author Response

Dear Reviewer,

Thank you for the review. It helped us to rework the paper significantly, and all your concerns and suggestions helped us to understand our results much better.

 

>First of all, after reading through the entire document it is not clear what the added value of the presented work is. A standard YOLO-v3 algorithm has been used to detect apples, and then some minor pre and post-processing techniques have been applied to improve the results. But those processing techniques, which are the only apparent novelty of the work are not even elaborated, simply stated in a few lines.

Response: The paper aims to show that the YOLOv3 algorithm can be used in harvesting robots to detect apples in orchards effectively. If we apply this algorithm directly to images taken in real orchards, the detection quality is quite poor. However, image pre-processing before applying YOLOv3 helps to increase the fruit detection rate from 10% to 91%.

We restructured the paper by adding special subsections for Image Acquisition (2.2), Apple Detection Quality Evaluation (2.3), using YOLOv3 without pre- and post-processing (2.4), and provided details of pre- and post-processing techniques (2.5, 2.6). We also emphasized the value added in this paper in the Results section.
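As an illustration of the kind of pre-processing meant here (the slight blurring and histogram alignment described in the manuscript), a minimal sketch using OpenCV; the exact kernel size and color handling used in the paper may differ:

```python
import cv2

def preprocess(image_bgr):
    """Lightly blur the image and equalize its brightness histogram
    before passing it to the detector (parameters are illustrative)."""
    blurred = cv2.GaussianBlur(image_bgr, (3, 3), 0)
    # Equalize the luminance channel only, so colors are preserved
    yuv = cv2.cvtColor(blurred, cv2.COLOR_BGR2YUV)
    yuv[:, :, 0] = cv2.equalizeHist(yuv[:, :, 0])
    return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR)
```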

 

>Second, the authors claim to have developed a harvesting robot based on two stationary Sony cameras. Nonetheless, in lines 223 and 224 it is mentioned that the standard YOLO-v3 algorithm used in this work has been trained on the COCO dataset images. In this reviewers opinion, before presenting any results, the document should clearly introduce all datasets that have been used with the important details mentioned, such as the total number of images available from each dataset, how many have been used to train the algorithm and how many were used for testing. This should all be stated in some sort of table right before presenting the results. Some other information such as the number of apples available per image (as average) and/or in the entire dataset could also be interesting to know. As one of the appeals of the work carried out is the fact that is going to be used in an actual robot, this should be appropriately emphasized in the document. For this reviewer, out of all images presented in the document, it is still not clear which ones were taken with the Sony cameras available in the robot.

Response: We described the test dataset in the Image Acquisition subsection. The images were taken manually using Nikon cameras with specifications similar to those of the Sony cameras installed in our robot. In the Results section, we extended the table by adding the average number of apples per image, Precision, Recall, F1, and IoU columns.

 

> Third, another section of the document should be how the algorithm has been parameterized. For instance, any deep learning approach uses an optimization algorithm to converge into a solution. This has not been mentioned, neither the parameters of such algorithm (lambda, ...). In the particular case of the YOLO, another parameter is the number of windows the image is divided into. In line 288, it is stated that images have been divided in 9 regions, but it is not mentioned if other configurations were used and if the results obtained were better or worst.

Response: We added a detailed description of parameters used in YOLOv3 in subsection 2.4. We also added details of dividing canopy view images into 9 regions.
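For illustration, a minimal sketch of such a tiling (a plain 3 × 3 grid with no overlap; the split actually used in the manuscript may handle image borders differently):

```python
def split_into_regions(image, rows=3, cols=3):
    """Split a canopy-view image into a rows x cols grid of sub-images,
    returning each tile with its top-left offset in the full image."""
    h, w = image.shape[:2]
    tiles = []
    for r in range(rows):
        for c in range(cols):
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            tiles.append(((x0, y0), image[y0:y1, x0:x1]))
    return tiles

# Detections from each tile can then be shifted back by (x0, y0)
# into full-image coordinates before post-processing.
```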

 

> Finally, the presentation of the results has been poorly done. Just a single table with some FNR and FPR values does not let the reader get an intuition of how good the algorithm performs. Absolute values are missing, such as how many apples in total were correctly detected, how many were not detected. The Intersection Over Union has not been included either.

Response: In the Results section, we extended the table by adding the average number of apples per image, the number of correctly detected apples, the number of not detected apples, the number of objects mistaken for apples, and Precision, Recall, F1, and IoU columns. We also added additional details in the Discussion section.
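For reference, a minimal sketch of how the added Precision, Recall, and F1 columns follow from these counts (illustrative only):

```python
def summary_metrics(correct, mistaken, missed):
    """Precision, recall and F1 from counts of correctly detected apples,
    objects mistaken for apples, and apples not detected."""
    precision = correct / (correct + mistaken)
    recall = correct / (correct + missed)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```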

 

Thank you very much again!

With best wishes,

the authors

Reviewer 2 Report

The authors tried to explain that, by applying a few pre- and post-processing steps, the YOLOv3 algorithm can be used for detecting apples. They used an existing dataset of 878 images taken by the VIM centre. It would be more interesting if this method were compared to the already reported improved YOLOv3 models (articles no. 65 and 66 referred to by the authors). Including more metrics (F1-score, IoU, confusion matrix, P-R curve, etc.) for evaluating the proposed method would strengthen the article.

 

Specific comments.

17-18: Is it possible to add a few real-time images containing oranges and tomatoes and test the modified algorithm to support this sentence?  

195: Use et al., or standard format for the reference.

230-232: Explain the methods in detail, for example, mask size applied.

240-241: Detailed image acquisition information is needed, such as image resolution, daylight conditions, camera angles, the growth stage of apples, etc.

244-248: Example images for the factors "backlight" and "apple shade" are not included. It is recommended to provide a subsection for each factor and explain it.

278-281: Were the same cameras used for the dataset that was used for evaluating the algorithm? Which camera was used for general images and which for close-up images?

288: Again, what was the initial resolution, and what is the final size after dividing into 9 regions?

322: Table 1. FNR and FPR values may be corrected (for example, 9.2%, not 9,2%).

Author Response

Dear Reviewer,

Thank you for the review. It helped us to rework the paper significantly, and all your concerns and suggestions helped us to understand our results much better.

 

> Authors tried to explain that by applying a few pre- and post-processing steps to YOLO-3 algorithm can be used for detecting the apples. They used existing data set of 878 images taken by VIM centre. It would be more interesting if this method is compared to the already reported improved YOLO-v3 models (article no. 65, 66 referred by authors). Including more metrics (F1-score, IoU, confusion matrix and P-R curve etc.) for evaluating the proposed method will strengthen the article.

Response: The results are compared with [65, 66], and more metrics are calculated, including Precision, Recall, F1, and IoU.

 

>17-18: Is it possible to add a few real-time images containing oranges and tomatoes and test the modified algorithm to support this sentence?  

Response: Some images of orange and tomato detection were added and discussed in the Discussion section.

 

>230-232: Explain the methods in detail, for example, mask size applied.

Response: We restructured the paper by adding special subsections for Image Acquisition (2.2), Apple Detection Quality Evaluation (2.3), using YOLOv3 without pre- and post-processing (2.4), and provided details of pre- and post-processing techniques (2.5, 2.6). We also added a detailed description of parameters used in YOLOv3 in subsection 2.4, and details of dividing canopy view images into 9 regions.

 

>240-241: Detail image acquisition details needed such as Image resolution, daylight conditions and camera angles, the growth stage of apples, etc.  

Response: We added these details in the Image Acquisition subsection (2.2).

 

>244-248: Example images for factors “Backlight” and factor “apple shade” not included. It is recommended to provide subsection for each “factor” and explain.

Response: We added these details in subsection 2.5.

 

>278-281: Were the same cameras used for the data set that was used for evaluating the algorithm? Which camera used for general images and which camera used for close-up images?

Response: The camera specifications are provided in the Image Acquisition subsection (2.2). Image acquisition was conducted using Nikon D3500 AF-S 18-140 VR cameras equipped with Nikon Nikkor AF-P DX F 18-55 mm lenses.

 

>288: Again, what was the initial resolution, and what is the final size after dividing into 9 regions?

Response: Original images were taken at different resolutions (3888 × 5184, 2528 × 4512, 3008 × 4512, 5184 × 3888) and then resized to 416 × 416 for YOLOv3. These details were added in the corresponding sections.
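For illustration, a minimal sketch of that resizing step (a plain stretch to the square YOLOv3 input; whether letterboxing is used in the manuscript is not stated here):

```python
import cv2

def to_yolo_input(image_bgr, size=416):
    """Resize an orchard image of arbitrary resolution to the square
    416 x 416 YOLOv3 input (plain stretch, illustrative only)."""
    return cv2.resize(image_bgr, (size, size), interpolation=cv2.INTER_AREA)
```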

 

>322: Table 1. FNR and FPR values may be corrected (example 9.2% not 9,2%)

Response: Corrections were made.

 

Thank you very much again!

 

With best wishes,

the authors.

Reviewer 3 Report

The paper titled "Using YOLOv3 algorithm with pre- and post-processing for apple detection in fruit-harvesting robot" suggests that certain alterations to the images, such as slight blurring and histogram alignment, increase the ability of YOLOv3 to detect apples in orchard conditions.

The paper provides an overview of current methods used for fruit detection to justify the choice of YOLOv3 as a cutting-edge algorithm, followed by a description of the suggested alterations and the detection results. The work is an important contribution to the general attempts of the research community to develop automation in agriculture.

I have two main concerns with this paper. First, the methodology section practically does not exist. It is very clear that a lot of work was put into this research, into the data acquisition and the processing of the results, but unfortunately I found almost no details with which to assess the work. Second, as a result of the lack of methodology, it is not clear whether this proposition indeed works better than standard YOLOv3 or other methods. The comparison to YOLOv3 is basically written off in one sentence (Line 227) with no quantitative comparison. The comparison to other methods is done through comparison to other published results, and given the slight increase in detection rates, this could be dependent on the dataset acquired.

I suggest addressing these concerns through the following alterations:

1. Description of the data collection. There are a few details scattered (e.g., the cameras used and the fact that there are "close-up" and "general" images), but this, in my opinion, needs a systematic description (preferably a section/subsection devoted to it). The description needs to include details such as what parts of the orchard were sampled, how they were sampled (manually? with the robotic arm described? how were they centered?), etc. What is a general and what is a close-up image?

2. Description of the ground truth (GT) collection procedure. The authors claim (L318) that the FP and FN are respectively real apples collected. Does that mean the results were compared to the actual apples measured in the field (e.g. GT) or apples labeled by a human annotator from the images (labeling)? How was that labeling or GT data collected? The authors correctly state that measures published in the literature often omit IOU, FP, FN and are not related to the real field. If the authors would provide the details compared to the real field this would be a great enhancement compared to today’s literature.

3. Evaluation procedure description. Given labels(?) of the actual data and the detections, what was the threshold to define whether a detection is correct/incorrect? What happens if there are multiple detections of the same object? What if it is split? What about the "bunch" (maybe cluster is a better term?)?

4. Evaluation compared to YOLOv3. Given that the authors claim their enhancement is better than YOLOv3, it would be beneficial to present the results of "classic" YOLOv3.

5. Evaluation on a different dataset. This one might be too tricky, but it would be beneficial to run the suggested algorithm on one of the data sets available and published (the authors referred to DeepFruits and Fruits 360 for example). It would be a good “bonus” to have this, but I wouldn’t say it is critical given the size of the database reported.

6. The discussion of the decrease in detection timings is rather problematic. First, the authors provide an estimate of processing times (L306-307) but do not state the hardware on which it was analyzed. Second, the effect of a decrease in detection time of several ms (even if, in percentage terms, it is a significant one) on a system that runs at several seconds per fruit, where the main bottlenecks are the mechanical manipulations, needs to be discussed. Specifically, in L163 the authors cite a publication on kiwi detection claiming it takes about 5 s to harvest a kiwi, with the "main time occupied by the neural network operation". I am not familiar with that work, but this sounds rather odd given the few-ms speeds; maybe a verification of that claim is required?

Beyond these main points, a few styling suggestions (though these might be a personal preference):

1. The literature overview is very vast. Given that the suggested method is YOLOv3 (or ML), a short reference to other methods is, in my opinion, sufficient. Such an in-depth analysis of the current literature would be more suitable for a review paper. Descriptions of general ML methods (e.g., AlexNet, VGG, and others) out of the context of fruit detection are also more suitable for a review paper (e.g., L170-L174, L150-L157).

2. On the other hand, some aspects of detection were not addressed in depth. For example, the authors discuss the lighting issues and claim that it is a color space problem, while many other methods have been proposed (e.g., adaptive thresholding: Zemmour et al., "Dynamic thresholding algorithm for robotic apple detection", 2017), and these were found to have quite little effect on DNN detection (Arad et al., "Controlled Lighting and Illumination-Independent Target Detection for Real-Time Cost-Efficient Applications. The Case Study of Sweet Pepper Robotic Harvesting", 2019). Also, since the authors are discussing the differences between detection from up close and from a distance, you might want to look into viewpoint analysis papers such as Vitzrabin et al., "Changing task objectives for improved sweet pepper detection for robotic harvesting", 2016, and several papers by Bulanon and Alchanatis (cited by the authors) discussing the viewpoint effect on detection (e.g., Bulanon, D. M., T. F. Burks, and V. Alchanatis, "Fruit visibility analysis for robotic citrus harvesting", 2009; Hemming et al., "Fruit detectability analysis for different camera positions in sweet-pepper", 2014; Kurtser, Polina, and Yael Edan, "Statistical models for fruit detectability: Spatial and temporal analyses of sweet peppers", 2018).

Just to be clear, I am not suggesting you cite all of these papers, given the already large number of papers cited, and given that these are the publications I am aware of and there must be many more out there. But these are aspects to look into when deciding which aspects of computer vision should be included and which may be omitted when discussing fruit detection.

3. Some claims made by the authors are unsupported by literature or other justification. For example, L118: "detecting fruits by texture...works very poorly in backlight", or L41-44: "But computer vision systems based on these models in existing prototypes of harvesting robots work too slowly...". Which ones? The recently published robotic harvesters I am aware of (e.g., Arad et al., "Development of a sweet pepper harvesting robot", 2020; Bac et al., "Harvesting robots for high value crops: State-of-the-art review and challenges ahead", 2014, 888-911) spend most of their time on mechanical operations and not on detection (Kurtser et al., "The use of dynamic sensing strategies to improve detection for a pepper harvesting robot", 2018).

Finally, the English and style of the paper are very good; the only suggestion I have for the authors is to refrain from single-sentence paragraphs to make the paper easier to read.

In conclusion, in my opinion this is an important and valuable work, but more writing and description work is needed to meet the journal's publication quality.

Author Response

Dear Reviewer,

Thank you for the review. It helped us to rework the paper significantly, and all your concerns and suggestions helped us to understand our results much better.

 

>1. Description of the data collection. There are a few details scattered (e.g., the cameras used and the fact that there are “close up” and “general” images) but this, in my opinion, needs to be systematic description (preferably a section/subsection devoted to it). The description needs to include details such as what parts of the orchard were sampled how they were sampled (manually? With the robotic arm described? How were they centered?), etc. What is a general and what is a close-up image?

Response: We added these details in the Image Acquisition subsection (2.2).

 

 

>2. Description of the ground truth (GT) collection procedure. The authors claim (L318) that the FP and FN are respectively real apples collected. Does that mean the results were compared to the actual apples measured in the field (e.g. GT) or apples labeled by a human annotator from the images (labeling)? How was that labeling or GT data collected? The authors correctly state that measures published in the literature often omit IOU, FP, FN and are not related to the real field. If the authors would provide the details compared to the real field this would be a great enhancement compared to today’s literature.

Response: The apple detection results were compared to the actual apples labeled by the authors in the images.

In the Results section, we extended the table by adding the average number of apples per image, the number of correctly detected apples, the number of not detected apples, the number of objects mistaken for apples, and Precision, Recall, F1, and IoU. We also added additional details in the Discussion section.

 

>3. Evaluation procedure description. Given labels(?) of the actual data and the detections, what was the threshold to define whether a detection is correct/incorrect? What happens if there are multiple detections of the same object? What if it is split? What about the "bunch" (maybe cluster is a better term?)?

Response: In general, the system proposed recognizes both red and green apples quite accurately. The system detects apples that are blocked by leaves and branches, green apples on a green background, darkened apples, etc.

Manual evaluation of the results has shown that there were no multiple detections of the same apple. There were also no splits, where one box bounds one part of an apple and another box bounds a different part of the same apple.

The most frequent case in which not all the apples are detected is when apples form clusters (Fig. 11). This is not significant for the robot since, at each step, the manipulator takes out only one apple, and the number of apples in the cluster decreases.

It should be noted that this problem arises only when analyzing canopy view images presenting several trees with apples. When analyzing images taken in close-up by the camera located on the robot arm, this problem does not occur.

We added these details in the Results section.
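For illustration, one common way to decide whether a detection is correct is to match boxes to the labeled apples by IoU; the sketch below uses a greedy match with a 0.5 threshold, which is an assumption and not necessarily the procedure used in the manuscript:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match_detections(detections, ground_truth, threshold=0.5):
    """Greedily match each detection to the best unmatched labeled apple.
    Returns (tp, fp, fn); a second detection of an already matched apple
    counts as a false positive."""
    unmatched = list(ground_truth)
    tp = fp = 0
    for det in detections:
        best = max(unmatched, key=lambda gt: iou(det, gt), default=None)
        if best is not None and iou(det, best) >= threshold:
            tp += 1
            unmatched.remove(best)
        else:
            fp += 1
    return tp, fp, len(unmatched)
```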

 

>4. Evaluation compared to YOLOv3. Given that the authors claims their enhancement is better then YOLOv3, it would be beneficial to present the results of “classic” YOLOv3.

Response: The paper aims to show that the YOLOv3 algorithm can be used in harvesting robots to detect apples in orchards effectively. If we apply this algorithm directly to images taken in real orchards, the detection quality is quite poor. However, image pre-processing before applying YOLOv3 helps to increase the fruit detection rate from 10% to 91%.

The results of classical YOLOv3 are presented in the Using YOLOv3 without Pre- and Post-processing for Apple Detection subsection (2.4). They are also mentioned in the Results section.

 

>5. Evaluation on a different dataset. This one might be too tricky, but it would be beneficial to run the suggested algorithm on one of the data sets available and published (the authors referred to DeepFruits and Fruits 360 for example). It would be a good “bonus” to have this, but I wouldn’t say it is critical given the size of the database reported.

Response: YOLOv3 detects apples in Fruits 360 images quite well, with a 98% apple detection rate, but these are specially prepared images of apples shot with good lighting and with the background removed. These images differ greatly from real apples in orchards. There are also 103 multiple-fruit images in Fruits 360. Most of these images were not shot in orchards, but they do have a background. There are fewer than 10 apple images among these multiple-fruit images, but the results of apple detection on them are the same as on our dataset.

We also tried the ACFR Orchard Fruit Dataset, and the results were quite poor due to the inferior quality of the images.

>6. The discussion over detection timings decrease are rather problematic. First, the authors provide an estimate of processing times (L306-307) but don’t provide the hardware on which it was analyzed. Second the affect of decrease in detection of several ms (even if it % it is a significant one) on a systems that runs at several second per fruit, and the main bottlenecks are the mechanical manipulations needs to be discussed. Specifically in L163 the authors cite a publication on Kiwi detection where they claim it takes about 5s to harvest a Kiwi with a “main time was occupied by the neural network operation”. I am not familiar with the work but this sounds rather odd given the few ms speeds, maybe a verification is required of that claim?

Response: The hardware details were provided, but we removed all the rest of the speed discussion.

 

>1. The overviewed literature is very vast. Given that the methods suggested is YOLOv3 (or ML) a short references to other methods is in my opinion sufficient. Such an in-depth analysis of the current literature would be more suitable for a review paper. The description of general ML methods (eg., AlexNet VGG and others), out of context of fruit detection are also more suitable for a review paper (e.g., L170-L174, L150-L157)

Response: We excluded some out-of-context text.

 

>2. On the other hand some aspects of detection were not addressed in depth. For example the authors discuss the lightning issues and claim that it is a color space problem while many other methods were proposed (e.g, adaptive tresholding Zemmour, et al. "Dynamic thresholding algorithm for robotic apple detection." 2017) and they were found to have quit little affect on DNN detection (Arad,, et al. "Controlled Lighting and Illumination-Independent Target Detection for Real-Time Cost-Efficient Applications. The Case Study of Sweet Pepper Robotic Harvesting. 2019). Also since the authors are disucssion the differences between detection from up-close and from distance you might want to look into viewpoint analysis papers such as “Vitzrabin et al., Changing task objectives for improved sweet pepper detection for robotic harvesting. 2016), and several papers by Bulanon and Alchanatis (сited by the authors) discussing viewpoint affect on detection (e.g., Bulanon, D. M., T. F. Burks, and V. Alchanatis. "Fruit visibility analysis for robotic citrus harvesting.", 2009; Hemming, et al. "Fruit detectability analysis for different camera positions in sweet-pepper." 2014; Kurtser, Polina, and Yael Edan. "Statistical models for fruit detectability: Spatial and temporal analyses of sweet peppers."(2018)”.

Response: In this paper, we did not really concentrate on camera positioning. We just tried different shooting distances in order to obtain close-up images and far-view canopy images. Also, in order to obtain images under different natural light conditions, different camera angles were used. These details were added to the Image Acquisition subsection.

 

>3. Some claims made by the authors are unsupported by literature or other justification. For example L118:”detecting fruits by texture...works very poorly in backlight”, or L41-44: “But computer vision systems based on these models in existing prototypes of harvesting robots work too slowly...”. Which ones? The recently published robotic harvesting I am aware of (e.g., Arad,, et al. "Development of a sweet pepper harvesting robot.", (2020); Bac, et al. "Harvesting robots for high value crops: State of the art review and challenges ahead."(2014): 888-911.) waste most of their times on mechanical operations and not on detection. (Kurtser, et al.,. "The use of dynamic sensing strategies to improve detection for a pepper harvesting robot." 2018)

Response: We removed all claims unsupported by justification, including all those you mentioned.

 


>Finally the English and style of the paper is very well written, the only suggestion I have for the authors is maybe to refrain from a single sentence paragraphs to make it easier to reed.

Response: We reduced the number of one-sentence paragraphs significantly.

 

Thank you very much again!

With best wishes,

the authors
