Article
Peer-Review Record

SpaceDrones 2.0—Hardware-in-the-Loop Simulation and Validation for Orbital and Deep Space Computer Vision and Machine Learning Tasking Using Free-Flying Drone Platforms

Aerospace 2022, 9(5), 254; https://doi.org/10.3390/aerospace9050254
by Marco Peterson 1,2,*, Minzhen Du 1,*, Bryant Springle 1 and Jonathan Black 1,2,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 1 March 2022 / Revised: 19 April 2022 / Accepted: 19 April 2022 / Published: 6 May 2022
(This article belongs to the Section Astronautics & Space Science)

Round 1

Reviewer 1 Report

This paper introduces a hardware-in-the-loop simulation environment testbed for computer vision tasks. Synthetic imagery and domain randomization methods are used for CNN model training. This paper provides a good, detailed introduction to the research background, research motivations, and technologies implemented. However, there are multiple minor issues that need to be addressed before publication.

Major issue:

  1. pp. 8, Figure 5: The output layer shows that the output classifications include spacecraft. However, all the results shown in the paper concern trusses and solar panels. What is the performance of the system for spacecraft in general? Typically, trusses and solar panels are fairly standardized in shape, while the size and shape of spacecraft can vary a lot. More discussion or results are needed on the performance of the proposed method in handling more complicated objects in space, such as spacecraft.

Minor issue:

  1. pp. 2, line 61: typo, reference cited showing as "?";
  2. pp. 7, line 217: cross-reference error, a link is shown by mistake;
  3. pp. 13, Figure 13: typo in the caption, "qqqq";
  4. pp. 14, line 408: typo, "0.26" should be "0.26m"?
  5. pp. 15, format issue for figure 17;
  6. pp. 19, line 496: typo, "deadset";
  7. pp. 21, line 543: Please consider using the same format to show the results as in previous sections (decimals instead of percentages), "56%", "43%". A similar issue also appears in pp. 20, line 507, where the result is "96 percent".
  8. pp. 24, line 590: typo, "per from";

Author Response

Good Afternoon.

We have corrected the grammatical and formatting issues and concerns.

pp. 8, Figure 5 - Regarding your question about performing computer vision on trusses and solar panels versus spacecraft:

We have actually started preliminary data collection for this, and classifying and localizing spacecraft such as the Space Shuttle is returning relatively high mean average precision. However, we wanted to keep the number of label classes for this paper relatively low in order to evaluate the effects of synthetic imagery and domain randomization. The number of objects in the next rendition of this paper will be dramatically increased to include more space assets, but this will also require several thousand more images for each object that will need to be labeled.

Reviewer 2 Report

This paper discusses the applications of machine learning in future space missions such as the self-assembly of spacecraft. The authors also add the so-called space drone to demonstrate their ideas. The idea is interesting, but I do not think the paper is publishable at the current stage. In the following, I detail my comments:

  1. The paper needs extensive English editing. Many English errors occur throughout the whole paper. The authors are advised to consult a native speaker to refine their paper. For example:
    1. In Line 72, there should be no comma after "including"
    2. In Line 73, "Navigation" should not be capitalized.
    3. In Line 73, the words starting from "however" should be a new sentence.
    4. In Line 84, the sentence starting from "Furthermore, ..." is grammatically incorrect.
    5. In Line 87, "Orbital" should not be capitalized.
    6. In Line 98, "Billion" should not be capitalized.
    7. In Line 122, there should be a space between [24] and "is".
    8. There are many more English errors in the paper. Please go through the whole paper carefully and correct them.
  2. One reference in Line 61 is missing.
  3. The webpage link in Line 217 goes out of the boundary of the paper.
  4. In Line 237, AP is used, but it is not defined until Eq. (3)
  5. In Line 272, "Fig." is used to reference the figure but "figure" is used in other places. Please make the reference consistent.
  6. In Line 282, the notations "mAP(0.5)" and "mAP(0.5:0.95)" are used without definition.
  7. In Line 308, an equation number should be placed inside a pair of parentheses.
  8. In Line 298, $mAP_{50}$ is used without definition. I suppose that it is the same as mAP(0.5). Please make the notation consistent.
  9. Equations (1) and (2) define four indices: TP, NP, TF, and NF. However, they are not used after being defined. Is it necessary to define these indices in this work?
  10. It seems that the authors intend to introduce true-false analysis in Fig. 25. However, I did not see a clear explanation.
  11. In Fig. 32, performance using CNNs is compared. However, the authors also mention the YOLO algorithm in previous sections. Why does YOLO disappear at the end?
  12. I suggest the authors focus on object detection using machine learning in this paper. The drone experiment is largely irrelevant to the investigated topic. Although the authors want to demonstrate how they integrate the sensing capability and the robots, their way of presenting it is quite misleading. The dynamics of Earth drones are quite different from those of space drones, and successful implementation on Earth drones does not imply successful implementation on space drones. Most important of all, the authors can demonstrate their capability of identifying objects without these misleading experiments.
  13. Lastly, a short question for the authors: in the examples provided in the paper, the objects are very bright and colorful. However, in practical photos, the objects are usually gloomy and dim. How does the current algorithm perform on such practical scenes?

Author Response

Good Afternoon.

We have corrected the grammatical and formatting issues and concerns.

4) Average Precision (AP) is used to define Mean Average Precision (mAP) in the very next equation.
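For reference, a minimal sketch of the standard COCO-style conventions assumed in these definitions (the paper's exact notation in Eq. (3) may differ):

```latex
% Sketch of the standard definitions; the paper's Eq. (3) may use
% different notation. AP is the area under the precision-recall curve,
% and mAP averages AP over the N object classes.
\begin{align*}
  AP  &= \int_{0}^{1} p(r)\,\mathrm{d}r \\
  mAP &= \frac{1}{N}\sum_{i=1}^{N} AP_i \\
  % mAP(0.5): mAP at an IoU threshold of 0.5;
  % mAP(0.5:0.95): mAP averaged over IoU thresholds 0.50 to 0.95 in steps of 0.05.
  mAP_{0.5:0.95} &= \frac{1}{10}\sum_{t\in\{0.50,\,0.55,\,\dots,\,0.95\}} mAP_t
\end{align*}
```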

5) All figures are now labeled in a consistent manner.

6) Moved the definitions of "mAP(0.5)" and "mAP(0.5:0.95)" ahead of their first use.

9) TP, NP, TF, and NF are used to define precision and recall, which are then used to define the F1 curve. The formula for the F1 curve has been added to the paper.
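For reference, a sketch of the standard formulas, assuming the paper's TP, NP, TF, and NF indices play the roles of the usual true/false positive and negative counts:

```latex
% Standard precision/recall/F1 definitions, assuming TP, NP, TF, and NF
% correspond to the usual TP, FP, TN, and FN counts.
\begin{align*}
  \mathrm{Precision} &= \frac{TP}{TP + FP} \\
  \mathrm{Recall}    &= \frac{TP}{TP + FN} \\
  F_1 &= 2\cdot\frac{\mathrm{Precision}\cdot\mathrm{Recall}}{\mathrm{Precision}+\mathrm{Recall}}
\end{align*}
```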

10) The true-false analysis using the confusion matrix is now better explained.

11) YOLO added to the CNN summary figure.

12) The authors are aware of the difference in dynamics between atmospheric propeller-powered drone systems and free-floating orbital platforms. However, a drone platform offers four uncoupled and unrestricted degrees of freedom for motion simulation, one more than a traditional air-bearing table as used in the past by organizations such as NASA for the SPHERES program or JPL's Formation Control Testbed (FCT). Future work will incorporate full uncoupled 6DOF motion via an omnidirectional drone, with relative-motion PID (proportional–integral–derivative) controllers governing its motion, as detailed in the "Omni-Drone" figure in the future work section.
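As an illustrative sketch of the relative-motion control idea only (a generic PID loop, not the SpaceDrones implementation; all names and gains below are hypothetical):

```python
# Generic PID controller sketch for relative drone motion; gains and
# names are hypothetical, not taken from the SpaceDrones codebase.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        # Accumulate the integral term and estimate the error derivative.
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One controller per axis: an omnidirectional platform can track
# x, y, z, and yaw setpoints independently (the four uncoupled DOF).
controllers = {axis: PID(kp=1.2, ki=0.05, kd=0.3) for axis in ("x", "y", "z", "yaw")}
```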

13) Added a short description and figure illustrating what happens when an optical event causes a camera sensor to stop operating because of lighting, under "Sensor Limitations".

Reviewer 3 Report

Summary:

This is a nice manuscript on the very active issue of integrating computer vision (via CNNs) and robotics to generate synthetic environments for uses such as in-space assembly or other tasks with servicing requirements (not just in the space arena). The research touches on three key aspects of this type of orbital operation (object tracking via synthetic data, increasing CV performance through domain randomization, and using neural-network-trained models for testing with hardware-in-the-loop). Three different testbeds were used for these purposes, achieving promising results, using standard quantification (F1, precision, recall, confusion matrices).

 

Broad comments:

Strengths:

  1. In general, the research is focused on an interesting topic, very common in the aerospace industry: the problem of implementing control over assembly operations using space manipulators, via computer vision.
  2. The document is well organized, and the ideas are explained quite didactically; it would be easy to follow for a reader who is not involved in these kinds of topics.
  3. The results claimed by the authors are indeed promising, and very interesting testbeds were built for this project.

Weaknesses:

  1. The main concern is with regard to the novelty for the general public; the work should be generalized so that it does not come across as a "use-case solution" for a specific case.
  2. Another concern is related to the repeatability of the research: the authors mention "the Python script" used for detection, and its modification to address their needs, but no extra information can be found in the manuscript. Readers would need a detailed explanation.
  3. The scientific soundness of the paper should be improved: the only equations in the document are the well-known Precision, Recall, AP, and mAP definitions.
  4. A comparison between the chosen image-recognition model (YOLOv5) and other alternatives would be appreciated.
  5. The results seem promising but should be elaborated on further to reflect the importance of the research.

 

Specific comments:

  1. Major issues:
    1. Please, summarize the Abstract, and add the achievements with respect to the literature.
    2. Please, add a summary of the results in the Conclusions section.
    3. The manuscript seems to have been written quickly; please, take the time to write it so that it shows the significant effort the authors have put in.
  2. Minor issues:
    1. Reference #6 is a null pointer (Reference section).
    2. Reference [?] (line 61) between [12] and [13]. Please, fix it.
    3. Line 217: please fix the URL (overleaf….)
    4. Please, explain reference [5] in line 219; is it the same as the one in line 34?
    5. Please, fix references {42, …, 48, 50}.
    6. Trailing “..” in line 320.
    7. Figure 13: trailing “qqqqqq”
    8. Figure 17 (page 15): please, fix (I cannot see it in the PDF version).
    9. Figure 16: please, add a higher quality image.
    10. Line 471: there is a “Results” that I think should not be there.
    11. Figure 27: “None”; please, explain.
    12. Line 302: “amd”; please, fix.
    13. Section 11.1: please, provide a table with the results, so that they can be more readable.
    14. Paragraph at line 547: please, rewrite it so that it is more readable.
    15. Figure 32: “Summery”; please, fix.
    16. As a summary, please, read the document carefully and take your time to write it properly.

Comments for author File: Comments.pdf

Author Response

Weaknesses:

1) Illustrated how this work could and should be generalized in the last sentences of the conclusion section:

“Figure [40] compares the capabilities of the current SpaceDrones Architectures with recent literature, detailing the integration of many studies and capabilities to solve a critical need within the space industry. This system will only continue to improve as capabilities detailed below in the future work sections are brought online. However, the problem of hardware-in-the-loop testing for any number of parameters or environmental factors is not specific to the aerospace community. This simulation solution can and should be applied to any number of research, search and rescue, defense, and general automation applications.”

2) This project is very much designed to be repeatable. We added a figure detailing software dependencies, including the mentioned Python script, to make it easier for readers to understand and reproduce the work, and we also explain how that scripting is used.
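As a minimal sketch only (the actual script is not reproduced in this record, so the weights name and image path below are placeholders), YOLOv5 inference is typically invoked from Python as follows:

```python
# Minimal YOLOv5 inference sketch via PyTorch Hub; 'yolov5s' and the
# frame path are placeholders, not the SpaceDrones-trained weights.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

results = model("frame_0001.png")       # run detection on one frame
detections = results.pandas().xyxy[0]   # bounding boxes, confidences, class names
print(detections[["name", "confidence"]])
```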

3) Added the rotation matrices for the internal reference frames between the drone, camera, and robotic arm that make object localization and capture possible. Added the Jacobian iterative inverse kinematics equations governing the motion of the robotic arm after localization has been achieved.
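For reference, a generic sketch of the Jacobian iterative inverse kinematics approach referenced above, using a pseudoinverse update (the forward-kinematics and Jacobian functions are stand-ins, not the paper's actual arm model):

```python
# Generic Jacobian-pseudoinverse IK iteration; fk and jacobian are
# stand-in callables for the arm's forward kinematics and Jacobian.
import numpy as np

def ik_step(q, target, fk, jacobian, alpha=0.5):
    """One pseudoinverse update of joint angles q toward the target pose."""
    error = target - fk(q)                  # task-space position error
    J = jacobian(q)                         # e.g. 3xN Jacobian at q
    dq = alpha * np.linalg.pinv(J) @ error  # joint-space correction
    return q + dq

def solve_ik(q0, target, fk, jacobian, tol=1e-4, max_iters=200):
    q = np.asarray(q0, dtype=float)
    for _ in range(max_iters):
        if np.linalg.norm(target - fk(q)) < tol:
            break
        q = ik_step(q, target, fk, jacobian)
    return q
```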

4) We needed a CNN that would perform well on an edge computing device. This constraint narrowed the options that could realistically be deployed for this problem set down to YOLO as the CNN of choice. Additionally, integrating the code of more than one CNN into the SpaceDrones flight and controls APIs would have been time-consuming.

5) Added a figure at the beginning of the paper illustrating how orbital maintenance and servicing have been conducted in the past using astronaut EVAs versus the advantage of offloading those tasks to autonomous robotics. We have also articulated this more clearly in the discussion and conclusions sections.

Major Issues:

1 and 2) Better summarized the work in the Discussion and Conclusion sections, and added a visual chart in the conclusion section to graphically depict capabilities compared to other literature.

3) We have corrected the grammatical and formatting issues and concerns.

Minor Issues:

13) Table with Synthetic Imagery and Domain Randomization Results added

Round 2

Reviewer 2 Report

The revised version of the paper resolves my questions on the previous version and corrects all errors. The paper looks fine as a whole. I would recommend the paper for publication in the journal after some minor problems are corrected:

  1. The symbol "*" is used to denote multiplication in many equations, such as Eq. (3). However, in mathematics "*" is usually reserved for the convolution of two functions. For multiplication, we usually use the dot sign, "·". Hence, either changing the sign to a dot or defining "*" as the multiplication operator below Eq. (3) is suggested.
  2. Table captions are usually placed on top of the table. Hence, it is suggested that the caption of Table 1 be moved to the top.

Author Response

1) The mathematical multiplication symbol "*" has been changed to dots ("·").
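In LaTeX terms this amounts to swapping "*" for \cdot, e.g. in the F1 formula:

```latex
% Before: F_1 = 2 * (P * R) / (P + R)
F_1 = 2 \cdot \frac{P \cdot R}{P + R}
```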

2) Table captions moved from the bottom to the top.

3) Two new figures added (Figure [3] and Figure [4]) in an attempt to clarify the "methods descriptions could be improved" mark.

Reviewer 3 Report

As I can see in the updated version of your manuscript, the issues found in the previous version have been addressed. Thank you very much; I strongly believe they have improved its quality.

Author Response

Two new figures added (Figure [3] and Figure [4]) in an attempt to clarify the "methods descriptions could be improved" mark.
