Progressive Training Technique with Weak-Label Boosting for Fine-Grained Classification on Unbalanced Training Data
Round 1
Reviewer 1 Report
Overall, this paper makes a significant impact on its area of research. The authors introduce an instance-aware hard-ID mining strategy in the classification loss and further develop global and local feature-mapping losses to expand the decision margin for imbalanced training data. However, I have several remarks that may improve the paper:
- I recommend that the authors add an outline of the paper's organization, section by section, in Section 1 to assist the reader.
- Section 4, line 358: I recommend that the authors separate the Experiments and Discussion sections. This would strengthen the significance and overall contribution of the paper.
- In Section 4, what is the objective to be achieved in this research? Please explain; it is essential for understanding the discussion and conclusion.
- In Section 4.2, the authors mention, “Results reported are top-1 accuracy for CUB-2011 and Cars-196. For the competition, the models are evaluated according to the mean average precision (MAP)@5.” However, the results obtained using MAP@5 are not clearly discussed in this section; the authors only use accuracy to evaluate model performance. I recommend that the authors compare additional metrics such as precision, recall, F1 score, specificity, or sensitivity on the imbalanced data for verification (a sketch of these metrics follows these remarks).
- In Section 1, the authors mention, “to take full advantage of weak IDs, we propose a weak-label boosting algorithm”. However, the proposed algorithm is not clearly discussed in Section 3.
- In the conclusion, the authors mention, “With our model and some tricks discussed in this paper, we won first place in the Kaggle challenge, which is a very difficult fine-grained analysis problem with unbalanced training data”. I do not think the authors need to mention that the model won first place. Instead, I recommend that the authors add future research directions derived from the paper's limitations. This would strengthen the significance and overall contribution of the paper.
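For concreteness, a minimal sketch of the metrics recommended above: scikit-learn provides the macro-averaged precision/recall/F1, while MAP@5 is written by hand since scikit-learn has no direct helper for it. All labels and predictions here are illustrative placeholders, not results from the paper.

```python
# Sketch of the recommended metrics on imbalanced labels. All labels
# and predictions below are illustrative placeholders, not results
# from the paper under review.
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

y_true = np.array([0, 0, 0, 0, 1, 1, 2])  # class 0 dominates
y_pred = np.array([0, 0, 1, 0, 1, 0, 2])

# Macro averaging weights every class equally, which is what makes
# precision/recall/F1 more informative than accuracy on imbalanced data.
prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print(f"macro precision={prec:.3f} recall={rec:.3f} f1={f1:.3f}")

def map_at_5(true_ids, ranked_preds):
    """MAP@5 as used in Kaggle ID challenges: each sample scores
    1/rank of the first correct ID among its top-5 predictions."""
    score = 0.0
    for truth, preds in zip(true_ids, ranked_preds):
        for rank, p in enumerate(preds[:5], start=1):
            if p == truth:
                score += 1.0 / rank
                break
    return score / len(true_ids)

# Each row is one sample's ranked top-5 predicted IDs.
print("MAP@5 =", map_at_5([0, 1], [[0, 2, 1, 3, 4], [2, 1, 0, 3, 4]]))
```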
Author Response
Please see the attachment.
Author Response File: Author Response.docx
Reviewer 2 Report
This paper proposes a training technique for fine-grained classification tasks based on weak-label boosting on an unbalanced dataset. The paper has great academic merit and provides insight into the engineering aspects of combining different approaches for a Kaggle classification competition. The progressive training on partial data, then partially fixing the network, and finally training with more data that has unreliable IDs or few-shot samples, is not entirely novel. However, with the detailed explanation of several design decisions, such as weak-label boosting, the whole approach provides a valuable resource for other researchers and ML practitioners.
I have several remarks:
1. Could you explain in more detail why the sigmoid-based loss leads to a more accurate threshold? (The first sketch following these remarks illustrates the distinction I have in mind.)
2. Why do you need data augmentation with speckle noise? What is the rationale for this? (The second sketch below shows the usual form of this augmentation.)
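Regarding remark 1, the following minimal sketch shows the distinction the question is about, under the assumption that "sigmoid-based loss" means scoring every ID independently with binary cross-entropy: independent sigmoid scores admit a single absolute confidence threshold for rejecting unknown IDs, whereas softmax probabilities are coupled across classes. This is only one reading of the design, not the paper's actual implementation.

```python
# Sketch contrasting a sigmoid (per-class BCE) head with softmax when
# thresholding for unknown IDs. Illustrative values only; this assumes
# the paper's "sigmoid-based loss" is per-class binary cross-entropy.
import torch
import torch.nn.functional as F

logits = torch.tensor([[2.0, 1.5, -3.0]])  # scores for 3 known IDs

# Softmax probabilities are coupled (they sum to 1), so a strong
# second-best ID drags down the top probability even when the top
# logit itself is confidently high.
p_softmax = F.softmax(logits, dim=1)       # ~[0.62, 0.38, 0.00]

# Sigmoid scores each ID independently, so max(p_sigmoid) reflects
# absolute confidence in the best ID and supports a stable threshold.
p_sigmoid = torch.sigmoid(logits)          # ~[0.88, 0.82, 0.05]

THRESHOLD = 0.5                            # illustrative value
print("unknown ID" if p_sigmoid.max() < THRESHOLD else "known ID")

# Training such a head uses per-class binary cross-entropy:
targets = torch.tensor([[1.0, 0.0, 0.0]])  # one-hot for the true ID
loss = F.binary_cross_entropy_with_logits(logits, targets)
```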
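Likewise for remark 2, a minimal sketch of what speckle-noise augmentation usually denotes, namely multiplicative Gaussian noise (out = img + img * noise); whether the paper applies exactly this variant is an assumption.

```python
# Minimal sketch of speckle-noise augmentation: multiplicative Gaussian
# noise, out = img + img * noise. Whether the paper uses exactly this
# variant is an assumption; sigma is an illustrative noise strength.
import numpy as np

def add_speckle(img, sigma=0.1, rng=None):
    rng = rng or np.random.default_rng()
    noise = rng.normal(0.0, sigma, size=img.shape)
    return np.clip(img + img * noise, 0.0, 1.0)

img = np.random.default_rng(0).random((64, 64, 3))  # dummy image in [0, 1]
aug = add_speckle(img, sigma=0.15)
```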
Minor remarks:
- The text requires a style and grammar check. Please avoid overusing “very”; it is an intensifier without inherent meaning.
Author Response
Please see the attachment.
Author Response File: Author Response.docx
Reviewer 3 Report
In my opinion, this paper, "Progressive Training Technique with Weak-Label Boosting for Fine-Grained Classification on Unbalanced Training Data", presents the right approach: a progressive training technique with weak-label boosting that takes full advantage of the few-shot IDs and weak IDs. The authors also introduce an instance-aware hard-ID mining strategy while designing a new classification loss to expand the decision margin. With their model and some tricks discussed in this research, they won first place in the Kaggle challenge, which is a very difficult fine-grained analysis problem with unbalanced training data.
I have no serious comments on the submitted manuscript, but although there are not many equations, I recommend adding a Nomenclature section. I would also suggest a careful review of the text to improve the technical quality of the paper.
Author Response
Please see the attachment.
Author Response File: Author Response.docx