Peer-Review Record

Automated Classification of Agricultural Species through Parallel Artificial Multiple Intelligence System–Ensemble Deep Learning

Mathematics 2024, 12(2), 351; https://doi.org/10.3390/math12020351
by Keartisak Sriprateep 1, Surajet Khonjun 2,*, Paulina Golinska-Dawson 3, Rapeepan Pitakaso 2, Peerawat Luesak 4, Thanatkij Srichok 2, Somphop Chiaranai 2, Sarayut Gonwirat 5 and Budsaba Buakum 6
Reviewer 1:
Reviewer 2:
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Reviewer 5: Anonymous
Submission received: 20 December 2023 / Revised: 15 January 2024 / Accepted: 20 January 2024 / Published: 22 January 2024

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

1. Need to explain Eq. 17 elaborately.

2. Need to draw the workflow diagram of the working methodology.

3. Need to write the algorithm of the defined methodology.

4. Figure 3, need to explain U-Net elaborately.

Comments on the Quality of English Language

Minor editing is required.

Author Response

Response to Reviewer 1 (Mathematics, Round 1)

Reviewer 1:

  1. Need to explain Eq. 17 elaborately.

Answer : Thank you for your valuable feedback regarding Equation (17) in our research paper. We appreciate your suggestion for a more elaborate explanation.

 

“Equation (17) serves as a probability function that guides the selection of 'IB' denoted as 'b' during iteration 't.' This equation derives from four distinct components, each drawing from historical data related to the performance of various 'IBs.' These components collectively inform the likelihood of selecting 'IB' 'b' based on its past performance.

The first component quantifies how frequently 'IB' 'b' has been chosen in previous iterations. This metric reflects the popularity of 'IB' 'b' in the selection process and implies that more frequently selected 'IBs' might have the potential to yield superior solutions. The second component in Equation (17) calculates the average value of the objective function associated with 'IB' 'b.' This component provides insights into the typical performance level of 'IB' 'b.'

The third component counts the instances where 'IB' 'b' has outperformed all other 'IBs' in the same iteration, highlighting 'IB' 'b's ability to consistently find the best solutions. The final component considers the difference between the average solution value of 'IB' 'b' and the best 'IB,' introducing an additional dimension to the evaluation process.

Once these components are combined in Equation (17) to compute the probability of selecting 'IB' 'b,' a roulette wheel selection method is employed for the subsequent step. This roulette wheel selection ensures that the 'WP' (Worker Package) selects an 'IB' according to these probabilities: higher probabilities correspond to a greater chance of selection, favoring 'IBs' that have consistently demonstrated superior performance. In summary, Equation (17) and the roulette wheel selection method work in tandem to choose the most promising 'IB' based on historical performance, fostering a dynamic and adaptive selection process.”
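The two-step mechanism described above can be sketched in a few lines of Python. This is an illustrative sketch only: the helper names and example scores are hypothetical, and the plain normalisation used here merely stands in for the paper's exact four-component weighting in Equation (17).

```python
import random

def selection_probabilities(scores):
    """Normalise per-IB merged scores into selection probabilities.

    `scores` maps each IB to a single value combining its four historical
    components (selection frequency, average objective value,
    best-in-iteration count, and gap to the best IB). The true combination
    is defined by Equation (17); normalisation is a placeholder here.
    """
    total = sum(scores.values())
    return {ib: s / total for ib, s in scores.items()}

def roulette_wheel_select(probabilities, rng=random):
    """Pick one IB with probability proportional to its score."""
    r = rng.random()
    cumulative = 0.0
    for ib, p in probabilities.items():
        cumulative += p
        if r <= cumulative:
            return ib
    return ib  # guard against floating-point rounding at the top end

# Hypothetical merged scores for three IBs:
scores = {"IB-1": 4.0, "IB-2": 3.0, "IB-3": 1.0}
probs = selection_probabilities(scores)
chosen = roulette_wheel_select(probs)
```

Under these example scores, 'IB-1' is selected half of the time, so historically stronger 'IBs' dominate without ever fully excluding the weaker ones.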

We hope this more detailed explanation clarifies the role and significance of Equation (17) in our research. If you have any further questions or require additional clarification, please do not hesitate to ask.

Thank you once again for your valuable input.

 

2. Need to draw the workflow diagram of the working methodology.

Answer : Thank you for your valuable feedback. We have taken your suggestion into consideration and have included a detailed workflow diagram (Figure 2) in the manuscript to illustrate the working methodology of our proposed approach. We believe that this addition will enhance the clarity and comprehensibility of our research for readers. Please feel free to review the updated manuscript, and if you have any further comments or suggestions, we would greatly appreciate your input. Thank you for your time and consideration.

 

3. Need to write the algorithm of the defined methodology.

Answer : We appreciate the reviewer's feedback and have addressed the concern by including the algorithm for our defined methodology in Figure 5 of the manuscript. The algorithm provides a detailed and systematic representation of our proposed method, allowing readers to gain a comprehensive understanding of our approach. We believe that this addition enhances the clarity and completeness of our paper.

 

4. Figure 3, need to explain U-Net elaborately.

Answer : Thank you for your comment regarding Figure 3. We appreciate your interest in understanding the details of our methodology. We have addressed your concern by providing an elaborate explanation of the utilization of U-Net and its synergy with Mask R-CNN in Figure 4.

Figure 4 in our study aims to elucidate the integration of U-Net and Mask R-CNN for Centella Asiatica Urban (CAU) leaf image segmentation. U-Net's encoder-decoder architecture excels in handling images with sparse data, a common occurrence in medical image analysis. The incorporation of skip connections in U-Net facilitates the recovery of spatial information, making it versatile and efficient for various segmentation tasks. On the other hand, Mask R-CNN, an advanced version of Faster R-CNN, includes a dedicated branch for predicting segmentation masks alongside object detection. This dual functionality ensures highly detailed pixel-level segmentation, encompassing bounding boxes, class labels, and masks.

The fusion of U-Net and Mask R-CNN, as illustrated in Figure 4, leverages the unique strengths of each method, resulting in a comprehensive solution for CAU leaf segmentation. Our approach employs an Artificial Multiple Intelligence System (AMIS) to merge the outputs of both segmentation techniques, significantly enhancing the precision of cultivar classification.

In the Parallel-Artificial Multiple-Intelligence System-Ensemble (P-AMIS-E) model, the process begins with the initial image, aimed at identifying different CAU cultivars. This image undergoes dual segmentation using U-Net and Mask R-CNN, and the results are integrated within the AMIS ensemble framework. After segmentation, geometric image augmentation is applied to prepare the segmented images for analysis by four distinct CNN architectures: SqueezeNet, ShuffleNetv2 1.0x, MobileNetV3, and InceptionV1. Each CNN architecture contributes a unique perspective, resulting in four separate predictions in each iteration. The P-AMIS-E model then synthesizes these individual insights into a cohesive prediction using the AMIS framework.
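The final decision-fusion step above can be illustrated with a short sketch. Note the hedging: the real AMIS adapts its fusion weights from historical model performance, whereas this placeholder uses fixed uniform weights, and all probability values shown are hypothetical.

```python
def fuse_predictions(predictions, weights=None):
    """Fuse per-model class-probability vectors into a single prediction.

    `predictions` holds one softmax output per CNN (e.g. SqueezeNet,
    ShuffleNetv2 1.0x, MobileNetV3, InceptionV1). Uniform weights stand
    in for the adaptive weighting that AMIS derives from past performance.
    """
    n = len(predictions)
    if weights is None:
        weights = [1.0 / n] * n
    total = sum(weights)
    weights = [w / total for w in weights]          # normalise weights
    n_classes = len(predictions[0])
    fused = [sum(w * p[c] for w, p in zip(weights, predictions))
             for c in range(n_classes)]             # weighted average
    label = max(range(n_classes), key=fused.__getitem__)
    return label, fused

# Four hypothetical CNN outputs over three cultivar classes:
outputs = [
    [0.70, 0.20, 0.10],
    [0.60, 0.30, 0.10],
    [0.50, 0.40, 0.10],
    [0.80, 0.10, 0.10],
]
label, fused = fuse_predictions(outputs)
```

Here the four models disagree on how strongly they favour the first class, but the fused vector still selects it decisively, which is the robustness the ensemble is designed to provide.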

We hope this explanation clarifies the role of U-Net and Mask R-CNN in our methodology and their integration into the P-AMIS-E model. If you have any further questions or require additional information, please feel free to let us know. Thank you for your valuable feedback.

 

Author Response File: Author Response.docx

Reviewer 2 Report

Comments and Suggestions for Authors

The abstract and conclusion of the paper need to provide quantitative numerical values.

The author needs to consider the impact of large models on the research in this article.

The method of this paper should be compared with Vision Transformer in reference at https://www.sciencedirect.com/science/article/pii/S2214317323000173

 

Comments on the Quality of English Language

Moderate editing of English language required

Author Response

Reviewer 2:

1. The abstract and conclusion of the paper need to provide quantitative numerical values.

Answer : Thank you for your valuable feedback. We have carefully reviewed your suggestion, and we appreciate your attention to detail. In response to your comment, we have made the necessary modifications to the abstract and conclusion of the paper, incorporating quantitative numerical values to provide a more comprehensive and data-driven summary of our research findings. We believe that these enhancements will improve the clarity and effectiveness of our paper, enabling readers to gain a better understanding of the quantitative aspects of our study. Your feedback has been instrumental in improving the quality of our research, and we sincerely thank you for your constructive input. If you have any further comments or recommendations, please do not hesitate to share them with us. Thank you once again for your time and attention.

Abstract :

“The classification of certain agricultural species poses a formidable challenge due to their inherent resemblance and the absence of dependable visual discriminators. The accurate identification of these plants holds substantial importance in industries such as cosmetics, pharmaceuticals, and herbal medicine, where optimization of essential compound yields and product quality is paramount. In response to this challenge, we have devised an automated classification system based on deep learning principles, designed to achieve precision and efficiency in species classification. Our approach leverages a diverse dataset encompassing various cultivars and employs the Parallel-Artificial Multiple Intelligence System-Ensemble Deep Learning model (P-AMIS-E). This model integrates ensemble image segmentation techniques, including U-net and Mask-R-CNN, alongside image augmentation and convolutional neural network (CNN) architectures such as SqueezeNet, ShuffleNetv2 1.0x, MobileNetV3, and InceptionV1. The culmination of these elements results in the P-AMIS-E model, enhanced by an Artificial Multiple Intelligence System (AMIS) for decision fusion, ultimately achieving an impressive accuracy rate of 98.41%. This accuracy notably surpasses the performance of existing methods, such as ResNet-101 and Xception, which attain 93.74% accuracy on the testing dataset. Moreover, when applied to an unseen dataset, the P-AMIS-E model demonstrates a substantial advantage, yielding accuracy rates ranging from 4.45% to 31.16% higher than the compared methods. It is worth highlighting that our heterogeneous ensemble approach consistently outperforms both single large models and homogeneous ensemble methods, achieving an average improvement of 13.45%. This paper provides a case study focused on the Centella Asiatica Urban cultivar (CAU) to exemplify the practical application of our approach.
By integrating image segmentation, augmentation, and decision fusion, we have significantly enhanced accuracy and efficiency. This research holds theoretical implications for the advancement of deep learning techniques in image classification tasks, while also offering practical benefits for industries reliant on precise species identification.”

Conclusion

“In this research endeavor, we have developed an innovative method for the precise differentiation and categorization of plant species, with a particular focus on herbs and plants exhibiting multiple varieties. These plant varieties possess distinct characteristics, varying quantities of essential compounds, and divergent cultivation requirements. The accurate and precise classification of plant varieties plays a pivotal role in optimizing cultivation, production planning, and quality control processes, ultimately enhancing efficiency in product development and management.

The classification of Centella Asiatica Urban (CAU) cultivars presents a formidable challenge due to the striking similarity between species and the absence of reliable visual features for differentiation. This challenge assumes paramount importance in industries such as pharmaceuticals, cosmetics, and herbal medicine, where precise species identification is vital to ensure product quality and safety. To address this intricate task, we conducted comprehensive research aimed at developing an automated classification system grounded in deep learning techniques, capable of accurately and efficiently classifying CAU species.

Our research methodology entailed the assembly of a diverse and extensive dataset comprising Centella Asiatica Urban (CAU) cultivars. Subsequently, we constructed a robust automated classification system employing the Parallel-Artificial Multiple Intelligence System-Ensemble Deep Learning model (P-AMIS-E). This sophisticated model integrated ensemble image segmentation techniques, specifically U-net and Mask-R-CNN, alongside image augmentation and an ensemble of convolutional neural network (CNN) architectures, including SqueezeNet, ShuffleNetv2 1.0x, MobileNetV3, and InceptionV1. The hallmark of our model was its utilization of the Artificial Multiple Intelligence System (AMIS) as a decision fusion strategy, significantly augmenting accuracy and efficiency.

Our results are emblematic of the model's remarkable performance. The P-AMIS-E model achieved an astounding accuracy rate of 98.41%, signifying a substantial advancement compared to state-of-the-art methods. Notably, existing methods such as ResNet-101, Xception, NASNet-A Mobile, and MobileNetV3-Large attained an accuracy rate of 93.74% on the testing dataset. Moreover, the P-AMIS-E model exhibited a substantial advantage when applied to an unseen dataset, yielding accuracy rates ranging from 4.45% to 31.16% higher than those achieved by the compared methods.

In summary, our research introduces a pioneering approach to the precise classification of plant species, with a particular emphasis on CAU cultivars. The integration of ensemble image segmentation techniques, image augmentation, and decision fusion within the deep learning framework has yielded remarkable improvements in accuracy and efficiency. These findings carry profound implications for the evolution of deep learning techniques in the realm of image classification. We recommend further exploration of the potential of ensemble deep learning models, coupled with continued investigations into the optimal amalgamation of image segmentation, image augmentation, and decision fusion strategies. Additionally, expanding the dataset to encompass a broader range of CAU species and exploring the potential of transfer learning to enhance the model's performance on new species represent promising avenues for future research.”

 

2. The author needs to consider the impact of large models on the research in this article.

Answer : Thank you for your insightful feedback and the valuable comment regarding the inclusion of the impact of large models in our research. We appreciate your guidance in enriching the depth and quality of our study. In response to your suggestion, we have carefully revised the "Discussion" section of our manuscript. We have incorporated a detailed analysis of the implications of using large deep learning models, particularly focusing on their computational demands, interpretability issues, and practical applicability. This addition aims to provide a balanced view, acknowledging both the strengths and the challenges associated with these models. We believe that this amendment enriches our discussion by addressing an important aspect of deep learning research. It highlights the trade-offs involved in employing large models and suggests avenues for future research, such as optimizing these models for reduced computational requirements or exploring hybrid approaches. We hope that this revision satisfactorily addresses your concerns and enhances the overall contribution of our work to the field. We are grateful for the opportunity to improve our manuscript and thank you once again for your constructive critique. The added text is as follows:

“The employment of large models in deep learning, particularly in the context of agricultural species classification, presents a juxtaposition of computational complexity and enhanced performance. While these models, including the Parallel-Artificial Multiple Intelligence System-Ensemble (P-AMIS-E) utilized in our study, offer superior accuracy and sophisticated capabilities, their size and complexity warrant a thorough consideration of their impact on the research.

 

Firstly, large models typically require substantial computational resources for training and inference. This demand can pose challenges in terms of accessibility and feasibility, particularly in resource-constrained environments. In our study, while the P-AMIS-E model demonstrates high accuracy in classifying the Centella Asiatica Urban (CAU) cultivar, it inherently necessitates significant computational power, a factor that could limit its applicability in settings with limited technical infrastructure.

 

Moreover, the complexity of large models can also affect their interpretability and transparency. As these models become more intricate, understanding the rationale behind their decisions becomes increasingly challenging. This lack of transparency can be a crucial factor in fields where explainability is essential, such as in medical or pharmaceutical applications. In our research, we acknowledge this complexity and advocate for ongoing efforts to enhance the interpretability of such models without compromising their performance.

 

Despite these challenges, the benefits of large models in achieving high accuracy and handling complex tasks are undeniable. The P-AMIS-E model's capability to accurately classify the CAU cultivar is testament to the effectiveness of these models in handling nuanced and detailed tasks, which smaller models might not perform as efficiently. This effectiveness is particularly vital in the context of our research, where precision in classification directly influences the quality and efficacy of products in the cosmetics, pharmaceutical, and herbal medicine industries.

 

In conclusion, while the use of large models in our research offers significant advantages in terms of accuracy and capability, it is crucial to balance these benefits with considerations of computational demand, interpretability, and practical applicability. Future research directions might include optimizing these large models to reduce their computational footprint while maintaining their accuracy, or developing hybrid approaches that combine the strengths of both large and smaller models.

 “

3. The method of this paper should be compared with the Vision Transformer in the reference at https://www.sciencedirect.com/science/article/pii/S2214317323000173

Answer : Thank you for your comment and for highlighting the need to compare our method with the Vision Transformer (ViT) approach from the reference you provided (https://www.sciencedirect.com/science/article/pii/S2214317323000173). We have indeed conducted a thorough comparison of our proposed method with ViT, and the results of this comparison have been presented in Tables 7 to 9 of our manuscript. These tables offer a comprehensive overview of the performance metrics, demonstrating how our method fares in comparison to ViT across various evaluation criteria. We trust that the inclusion of these tables sufficiently addresses your request for a comparative analysis between our approach and ViT. If you have any further questions or require additional information, please do not hesitate to let us know. Your feedback is greatly appreciated, and we are committed to ensuring the completeness of our research.

 

Author Response File: Author Response.docx

Reviewer 3 Report

Comments and Suggestions for Authors

Comments
This paper is concerned with the automated classification of agricultural species through parallel-artificial multiple intelligence system-ensemble deep learning. The proposed method was shown to achieve better performance than existing ones. The work sounds interesting, but the following issues should be well addressed:


1. The contribution should be summarized in a more detailed way. What are the major limitations of existing results? And how do you solve those problems?


2. CNN is not a new model, there are some new deep learning models, see, e.g., IEEE Transactions on Industrial Electronics, vol. 69, no. 12, pp. 13462-13472, 2022.

 What is your advantage? More discussions should be made as your problem is basically a classification problem.


3. The presentation should be well improved as some equations exceeded the page limit.


4. The authors compared their results with others in terms of accuracy; how about the computation time?


5. Details on the network parameters should be given.


6. Reference format should be well unified.

Comments on the Quality of English Language

Needs further improvements.

Author Response

Reviewer 3:

1. The contribution should be summarized in a more detailed way.

Answer: Thank you for your valuable feedback regarding the need for a more detailed summary of our research contributions. We appreciate your input, and we have taken steps to address this concern in our revised manuscript. In response to your suggestion, we have added Section 5.4, titled "Key Contributions of the P-AMIS-E Model," where we provide a more detailed summary of the contributions made by our research. This section now highlights the following key contributions:

Innovative Integration of Deep Learning Techniques: We emphasize that our primary contribution lies in the development of the Parallel-Artificial Multiple Intelligence System-Ensemble (P-AMIS-E) model, which uniquely combines advanced image segmentation methods (U-net, Mask-R-CNN) with a variety of CNN architectures (SqueezeNet, ShuffleNetv2 1.0x, MobileNetV3, InceptionV1). This integration allows for nuanced processing of visual data, particularly valuable for distinguishing highly similar species.

Enhanced Accuracy and Efficiency: Our model achieves an impressive classification accuracy of 98.41%, a substantial improvement over existing models such as ResNet-101 and Xception, which achieve an accuracy of 93.74%. This heightened accuracy is of great significance in industries such as pharmaceuticals, cosmetics, and herbal medicine, where precise species identification directly impacts product quality.

Effective Use of Limited Data: We address the common challenge of data scarcity in agricultural contexts by efficiently utilizing image augmentation techniques. This approach allows our model to train effectively on smaller datasets, overcoming a significant barrier faced by many existing classification systems.

Adaptability and Robustness: We highlight the role of the AMIS decision fusion strategy in enhancing our model's adaptability to different species and environmental conditions, a critical feature for agricultural applications.

Balancing Computational Efficiency: We have optimized the P-AMIS-E model to ensure it remains computationally efficient without compromising accuracy. This optimization makes the model accessible and practical for use in various settings, including those with limited computational resources.

Theoretical and Practical Implications: We underscore that our research has both theoretical and practical implications. Theoretical advancements include a deeper understanding of how deep learning techniques can be effectively applied to image classification tasks in agriculture. On the practical front, our research provides a tool that can be directly employed in industries reliant on accurate species identification, thereby improving product and service quality.

In summary, we believe that these contributions collectively position our research as a significant advancement in the application of deep learning techniques to real-world challenges in agriculture. We hope that this more detailed summary of our contributions addresses your concerns and provides a clearer understanding of the significance of our work.
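The "Effective Use of Limited Data" contribution above rests on geometric image augmentation. The following sketch shows the general idea on toy list-of-lists "images"; the specific transformations, helper names, and expansion factor are illustrative assumptions, since the paper's actual pipeline may combine further rotations, shifts, and zooms.

```python
import random

def rotate90(image):
    """Rotate a 2-D list-of-lists image 90 degrees clockwise."""
    return [list(row) for row in zip(*image[::-1])]

def hflip(image):
    """Mirror the image left-to-right."""
    return [row[::-1] for row in image]

def vflip(image):
    """Mirror the image top-to-bottom."""
    return image[::-1]

def expand_dataset(images, factor, seed=0):
    """Grow a small dataset `factor`-fold with random geometric transforms.

    A minimal stand-in for the augmentation step used to train on
    limited agricultural datasets.
    """
    rng = random.Random(seed)
    ops = [rotate90, hflip, vflip, lambda im: im]  # identity included
    out = list(images)
    for _ in range(factor - 1):
        out.extend(rng.choice(ops)(im) for im in images)
    return out

# Four toy 3x3 "leaf images":
leaves = [[[r * 3 + c for c in range(3)] for r in range(3)] for _ in range(4)]
augmented = expand_dataset(leaves, factor=3)
```

Because every transform preserves the class label, a dataset expanded this way triples in size without any new field collection, which is the practical benefit claimed above.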

 

2. What are the major limitations of existing results? And how do you solve those problems?

Answer: In our manuscript, we have added and highlighted several major limitations of existing agricultural species classification models, which include:

Difficulty in Distinguishing Similar Species: Traditional models often struggle to differentiate between species with closely resembling visual features, leading to classification inaccuracies, especially in the case of closely related cultivars.

Dependence on Large Datasets: Many existing models rely on large datasets for training, which can be impractical and challenging to obtain in many agricultural contexts, where data may be limited or scarce.

Rigidity in Adaptability: Conventional systems tend to be rigid and may lack the adaptability required to accommodate diverse species and varying environmental conditions.

High Computational Demands: Advanced models can be computationally demanding, imposing significant resource constraints, particularly in resource-limited settings.

In response to these limitations, our research introduces the Parallel-Artificial Multiple Intelligence System-Ensemble (P-AMIS-E) model, which addresses these issues through the following key innovations:

Accurate Classification of Similar Species: The P-AMIS-E model integrates U-net and Mask-R-CNN for image segmentation, coupled with various CNN architectures, enhancing its ability to accurately classify species with closely resembling features. This integration enables the model to overcome the challenge of distinguishing similar species.

Effective Use of Limited Data: Our model overcomes the limitation of data availability by incorporating image augmentation techniques. These techniques enable efficient learning from limited datasets, making it a practical solution for agricultural scenarios with data constraints.

Enhanced Adaptability: Through ensemble learning and the AMIS decision fusion strategy, our model becomes more adaptable and robust, allowing it to perform effectively in diverse agricultural settings with varying species and environmental conditions.

Balanced Computational Efficiency: We have optimized our model to strike a balance between computational efficiency and accuracy. This optimization ensures that the P-AMIS-E model remains applicable in various settings, including those with limited computational resources.

In summary, our research not only advances the theoretical framework of deep learning in image classification tasks but also provides practical solutions to the agricultural industry's need for precise species identification. We believe that these innovations significantly contribute to overcoming the limitations of existing results, making our approach a valuable addition to the field of agricultural species classification.

Once again, we appreciate your thoughtful review and hope that our responses adequately address your questions and concerns. If you have any further inquiries or suggestions, please do not hesitate to let us know.


3. CNN is not a new model; there are some new deep learning models, see, e.g., IEEE Transactions on Industrial Electronics, vol. 69, no. 12, pp. 13462-13472, 2022.

Answer: Thank you for your valuable feedback and for pointing out the relevance of the recent advancements in deep learning models, particularly the Dual-Path Mixed-Domain Residual Threshold Network (DP-MRTN) introduced by Chen et al. in 2022. We appreciate your suggestion, and we agree that highlighting the latest innovations in deep learning models is essential to provide a comprehensive overview of the field. As you mentioned, the DP-MRTN method represents a significant leap forward in the realm of fault diagnosis in noisy environments, achieving remarkable accuracy and surpassing traditional deep learning techniques. In response to your comment, we have incorporated a brief summary of the DP-MRTN method and its advantages into our manuscript. We believe that this addition will help our readers understand the latest advancements in the field and appreciate how our work complements and builds upon these innovations. Once again, thank you for your insightful feedback, which has enhanced the quality and relevance of our research. If you have any further suggestions or comments, please do not hesitate to share them with us. Your input is highly valued, and we are committed to ensuring that our research aligns with the latest developments in the field.

4. What is your advantage? More discussions should be made as your problem is basically a classification problem.

Answer: We appreciate your feedback and suggestions for our research paper. We are pleased to inform you that we have carefully reviewed your comments and have taken steps to address them in our revised manuscript. Specifically, we have added Section 5.5, titled "Advantage of the Model and Research Limitations," to highlight the unique strengths of our approach and acknowledge potential limitations.

In Section 5.5, we have elaborated on the advantages of our method, emphasizing the following key points:

Uniqueness of Approach: We have emphasized the uniqueness of our approach, which focuses on the integration of advanced deep learning techniques specifically designed for the challenging task of classifying agricultural species, such as the Centella Asiatica Urban (CAU) cultivar. Our approach tailors solutions to address the specific challenges posed by highly similar species, setting it apart from conventional classification models.

Remarkable Accuracy: We have highlighted the remarkable accuracy achieved by our approach, which stands at 98.41%. This level of accuracy significantly surpasses that of traditional models like ResNet-101 and Xception, which achieve 93.74%. We have emphasized the importance of this heightened accuracy in industries such as pharmaceuticals and cosmetics, where precise species identification is critical for product quality and efficacy.

Handling Data Scarcity: We have discussed our method's enhanced ability to process and classify images with limited data availability, a common challenge in agricultural classification. The incorporation of ensemble image segmentation techniques, such as U-Net and Mask R-CNN, alongside various CNN architectures, contributes to improved performance and addresses data scarcity effectively.

Innovative Ensemble Approach: We have introduced the innovative aspect of our study, the AMIS decision fusion strategy incorporated into the P-AMIS-E model. This strategy synergistically combines outputs from different deep learning models, resulting in a more robust and reliable classification system. We have emphasized the benefits of this integrated approach, particularly in handling the complexity and variability inherent in species like the CAU cultivar.

Overall, we believe that these enhancements, along with our specific focus on CAU cultivar classification, make a substantial contribution to both the theoretical and practical advancements in the field of agricultural species classification.

We would like to express our gratitude for your thoughtful review, which has undoubtedly improved the quality and clarity of our research paper. If you have any further comments or suggestions, please feel free to share them with us. Your input is highly valued, and we are committed to ensuring the excellence of our work.


5. The presentation should be well improved as some equations exceeded the page limit.

Answer: Thank you for your valuable feedback regarding the presentation of equations in our manuscript. We appreciate your attention to detail and your guidance in improving the clarity and readability of our work.

Upon receiving your comment, we conducted a thorough review of the manuscript with a focus on the formatting of the equations. We have made the necessary adjustments to ensure that all equations are presented clearly and within the page limits. This involved optimizing the layout, adjusting equation sizes, and breaking longer equations into multiple lines where appropriate, ensuring that they are both legible and aesthetically fitting within the document's format. We understand that the clear presentation of equations is crucial for the accurate conveyance of our research findings and methodologies. We believe that these revisions have significantly improved the readability of our manuscript and adhere to the publication's formatting standards.

We are grateful for your constructive feedback, which has directly contributed to enhancing the quality of our manuscript. We hope that these changes meet your expectations and further the manuscript's potential for publication. Thank you once again for your insightful suggestions and for the opportunity to refine our work.

 
6. The authors compared their results with others in terms of accuracy; how about the computation time?

Answer: Thank you for your valuable feedback and for raising the question regarding computational time in our research. We have indeed considered computational time as a crucial aspect of our study, and we have addressed this concern in our manuscript.

In response to your query, we have included comprehensive details on computational time in Table 8 of our manuscript. This table provides a comparative analysis of various deep learning models, including both training and testing times. We categorized the models into three groups: single models, homogeneous ensembles, and our proposed heterogeneous ensemble.

Our discussion accompanying Table 8 elaborates on the computational time aspect. We provide insights into the training times, which vary across the models, and the testing times per image. This analysis allows readers to gain a comprehensive understanding of the trade-offs between accuracy and computational efficiency among different models.

In summary, our manuscript now offers a thorough evaluation of computational time, enabling readers to assess not only the accuracy of the models but also their efficiency in terms of time resources. We believe that this information adds valuable insights to the discussion and assists readers in making informed decisions when choosing an appropriate model for their specific requirements.

We appreciate your thoughtful review and hope that the inclusion of this information satisfactorily addresses your concerns. If you have any further questions or require additional details, please do not hesitate to reach out.


7. Details on the network parameters should be given.

Answer: Thank you for your constructive feedback regarding the need for providing details on the network parameters used in our research. We appreciate your input, and we have taken steps to address this concern in our revised manuscript. In response to your suggestion, we have incorporated Table 3 into the manuscript, which provides a comprehensive overview of the network parameters used in our study. This table includes essential details such as the type of network architecture, the number of layers, filter sizes, activation functions, and other relevant parameters. By including this table, we aim to offer readers a clear and concise reference point for understanding the specific network configurations employed in our experiments. We believe that these additional details will enhance the transparency of our research and facilitate a deeper understanding of our methodology. We sincerely appreciate your review and the opportunity to improve our manuscript. If you have any further questions or require additional information, please do not hesitate to contact us.


8. Reference format should be unified.

Answer: Thank you for bringing to our attention the issue regarding the uniformity of the reference format in our manuscript. We understand the importance of maintaining a consistent and standardized format for references, as it not only adheres to publication guidelines but also enhances the professionalism and readability of the paper. In response to your comment, we have meticulously reviewed and revised the entire references section. We have ensured that every citation now strictly follows the specified referencing style of the journal. This revision includes standardizing the format for author names, publication dates, titles, journal names, volume and issue numbers, and page ranges, as well as ensuring consistency in the use of italics and punctuation as per the journal's guidelines. We recognize that uniform and accurate referencing is crucial for the integrity of academic writing and for providing due credit to original sources. We are committed to upholding these academic standards and are grateful for your guidance in improving this aspect of our manuscript.

We hope that these adjustments satisfactorily address your concerns and contribute to the manuscript's suitability for publication. Thank you once again for your constructive feedback and for helping us enhance the quality of our work.

 

Author Response File: Author Response.docx

Reviewer 4 Report

Comments and Suggestions for Authors

This is a well-performed study and a well-written manuscript. My comments are as follows:

1. Major contributions and novelty should clearly be presented in the introduction section. 

2. Details about how performance metrics are calculated from the multi-class confusion matrix should be included. 

3. Similarly, mathematical details about GradCAM should be provided. 

4. Will the model perform well on the images with non-white backgrounds? This aspect should be tested. 

5. How was the performance of the ensemble model based on simple majority voting? 

Comments on the Quality of English Language

Minor edits are required.

Author Response

Reviewer 4:

  1. Major contributions and novelty should clearly be presented in the introduction section.

Answer : Thank you for your valuable feedback and insightful suggestions regarding our manuscript. We appreciate your emphasis on the importance of clearly presenting the major contributions and novelty of our study in the introduction section. In response to your comment, we have revised the introduction to concisely yet effectively highlight the key contributions and innovative aspects of our research. We believe these revisions will provide readers with a clearer understanding of the significance and originality of our work from the outset. Here is a brief overview of the amendments we have made:

We have succinctly articulated the novelty of our Parallel-Artificial Multiple Intelligence System-Ensemble Deep Learning model (P-AMIS-E), emphasizing its unique integration of advanced deep learning techniques for the classification of Centella Asiatica Urban (CAU) cultivars.

The revised introduction now clearly delineates the major contributions of our study, including the ensemble of CNN architectures, advanced image segmentation techniques, and the innovative application of the Artificial Multiple Intelligence System (AMIS).

We have also underscored the practical and theoretical implications of our model in both the agricultural and pharmaceutical industries, as well as its contribution to the advancement of deep learning methodologies in image classification.

We hope that these revisions adequately address your concerns and enhance the manuscript's clarity and impact. We are grateful for the opportunity to improve our work and thank you for guiding us in this endeavor.

2. Details about how performance metrics are calculated from the multi-class confusion matrix should be included.

Answer : Thank you for your insightful feedback and the opportunity to clarify the methodological aspects of our study. We appreciate your suggestion to include detailed information on how the performance metrics are calculated from the multi-class confusion matrix. In response to your comment, we have added Equations (18) to (21) in our manuscript. These equations provide a clear and comprehensive explanation of the methods used to calculate the key performance evaluation metrics, namely Accuracy, Precision, Recall, F1-score, and AUC, in the context of a multi-class confusion matrix. We believe that these additions will greatly enhance the manuscript's clarity and allow readers to better understand the analytical processes underlying our study. Furthermore, we hope that this detailed explanation will facilitate reproducibility and provide valuable insights for future research in this area. Once again, we thank you for your constructive feedback and are confident that these revisions will address your concerns effectively.
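For readers, the standard macro-averaged definitions behind these metrics can be computed directly from a multi-class confusion matrix. The sketch below uses the usual textbook formulas; it is a generic illustration, not a verbatim transcription of the manuscript's Equations (18) to (21), and AUC is omitted because it requires class-score distributions rather than the confusion matrix alone.

```python
# Standard multi-class metrics derived from a confusion matrix C,
# where C[i][j] counts samples of true class i predicted as class j.
# Generic sketch of the usual macro-averaged definitions.

def metrics_from_confusion(C):
    n = len(C)
    total = sum(sum(row) for row in C)
    tp = [C[i][i] for i in range(n)]
    # Column sums = all predictions of class i; row sums = all true class-i samples.
    pred = [sum(C[r][i] for r in range(n)) for i in range(n)]
    true = [sum(C[i]) for i in range(n)]

    accuracy = sum(tp) / total
    precision = [tp[i] / pred[i] if pred[i] else 0.0 for i in range(n)]
    recall = [tp[i] / true[i] if true[i] else 0.0 for i in range(n)]
    f1 = [2 * p * r / (p + r) if (p + r) else 0.0
          for p, r in zip(precision, recall)]

    macro = lambda xs: sum(xs) / n
    return accuracy, macro(precision), macro(recall), macro(f1)

# Hypothetical 3-class confusion matrix, for illustration only.
C = [[50, 2, 3],
     [4, 45, 1],
     [2, 2, 41]]
acc, prec, rec, f1 = metrics_from_confusion(C)
```

Here the diagonal entries are the per-class true positives, and each metric is averaged uniformly over classes (macro averaging), which treats all cultivar classes equally regardless of sample count.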

3. Similarly, mathematical details about Grad-CAM should be provided.

Answer: Thank you very much for your constructive comment regarding the inclusion of mathematical details about Gradient-weighted Class Activation Mapping (Grad-CAM) in our manuscript. In response to your valuable suggestion, we have added a detailed explanation of the Grad-CAM process to our manuscript. This addition is intended to provide a clear understanding of how Grad-CAM functions mathematically, elucidating the mechanism by which our convolutional neural network-based models visualize and highlight important regions in the image for specific class predictions.

This new content, which can be found in [3.3/ page number 17], offers a comprehensive overview, starting from the calculation of feature maps and gradients, to the formulation of the Grad-CAM heatmap through a weighted combination of feature maps. We have also included details on the visualization process, where the Grad-CAM heatmap is normalized and overlaid on the input image.

We believe that this addition significantly enhances the manuscript by providing the theoretical underpinnings of Grad-CAM, thereby offering readers a deeper insight into the interpretative aspect of our models. We appreciate your guidance in improving the completeness and academic rigor of our paper and hope that this revision adequately addresses your suggestion. Thank you once again for your invaluable feedback.
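The Grad-CAM pipeline summarized above (channel weights from globally averaged gradients, ReLU of the weighted feature-map sum, then normalization for overlay) can be sketched numerically. This is a toy illustration with plain nested lists rather than a real CNN framework; the function name and the example arrays are our own, not from the manuscript.

```python
# Minimal numerical sketch of Grad-CAM: alpha_k is the global average of
# the class-score gradients over channel k, and the heatmap is the ReLU
# of the alpha-weighted sum of feature maps, normalized to [0, 1].

def grad_cam(feature_maps, gradients):
    # alpha_k: global-average-pool the gradients of each channel k.
    alphas = []
    for G in gradients:
        vals = [v for row in G for v in row]
        alphas.append(sum(vals) / len(vals))
    # Heatmap: ReLU of the alpha-weighted sum of feature maps.
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    heat = [[max(0.0, sum(a * A[i][j] for a, A in zip(alphas, feature_maps)))
             for j in range(w)] for i in range(h)]
    # Normalize to [0, 1] so the map can be overlaid on the input image.
    peak = max(max(row) for row in heat)
    return [[v / peak if peak else 0.0 for v in row] for row in heat]

# Toy data: two 2x2 feature maps and their gradients.
A = [[[1, 0], [0, 1]], [[0, 1], [1, 0]]]
G = [[[1, 1], [1, 1]], [[-1, -1], [-1, -1]]]
heat = grad_cam(A, G)
```

In a real model the feature maps and gradients come from the last convolutional layer via backpropagation; only the weighting, ReLU, and normalization steps are shown here.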

4. Will the model perform well on the images with non-white backgrounds? This aspect should be tested; and 5. How was the performance of the ensemble model based on simple majority voting?

Answer: Thank you for your insightful comments regarding the performance of our model on images with non-white backgrounds and the efficacy of the ensemble model using simple majority voting. We appreciate the opportunity to clarify and expand upon these aspects of our research. In response to your first comment, we conducted additional experiments to specifically test the model's performance on images with non-white backgrounds. This was achieved using the CALU-3G dataset, which exclusively comprises images with normal (non-white) backgrounds. The results of these tests, presented in Table 10 of our manuscript, demonstrate that the model not only performs well but also shows improved accuracy and robustness in classifying images with more complex and realistic background scenarios compared to the CALU-1 and CALU-2 datasets.

Regarding your second comment on the performance of the ensemble model based on simple majority voting, the results in Table 10 also address this inquiry. The ensemble model employing simple majority voting exhibited commendable performance on the CALU-3G dataset, with metrics closely competing with and in some cases surpassing those of other advanced models. This finding is indicative of the ensemble model's capability to effectively integrate individual predictions, even in the more challenging context of non-white background images.
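For context, the simple majority voting referred to above follows the standard scheme: each ensemble member casts one vote per image and the most-voted class wins. The sketch below is a generic illustration; the class labels and the tie-break convention (first listed model wins ties) are our own assumptions, not taken from the manuscript.

```python
# Simple majority voting over ensemble predictions: each model casts one
# vote, and the class with the most votes is returned. Ties are broken
# by the earliest model in the list, one of several common conventions.

from collections import Counter

def majority_vote(predictions):
    # predictions: list of class labels, one per ensemble member.
    counts = Counter(predictions)
    best = max(counts.values())
    for p in predictions:  # first model listed breaks ties
        if counts[p] == best:
            return p

# Four hypothetical members voting on one image.
votes = ["CAU-North", "CAU-North", "CAU-South", "CAU-North"]
winner = majority_vote(votes)
```

Unlike a learned fusion, this scheme weights every model equally, which is part of why an optimized fusion such as AMIS can outperform it on harder datasets.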

We have added detailed explanations and discussions in our manuscript to elucidate these findings. These additions, as per your suggestions, aim to provide a comprehensive understanding of the model's adaptability to varying background conditions and the effectiveness of different decision fusion strategies, particularly simple majority voting, in enhancing the model's performance.

We hope that these amendments and additional experimental results adequately address your concerns and contribute to a clearer and more complete representation of our study's findings. Thank you once again for your valuable feedback, which has greatly contributed to improving the quality and comprehensiveness of our research.

 

Author Response File: Author Response.docx

Reviewer 5 Report

Comments and Suggestions for Authors

The experiment and data of the paper are sufficient, but it needs to be fully modified.

Firstly, the introduction should be reorganized. You should first emphasize the significance of CAU cultivar classification, and then introduce the drawbacks or pain points of the related work instead of listing references in your first and second paragraphs. You should organize the related work in detail in the second section, "Related Literatures". Then you should explain the advantage of the method you propose (here, presumably the ensemble) compared to other methods, and explain why your method works. The penultimate paragraph of the introduction section should state the significance of the paper and should appear at the beginning of the introduction.

Secondly, in the Research Methods section, you should highlight the best results in bold. The parameters of the model and the experimental environment (equipment) should be described in detail. You only compare with single models to illustrate the generalization of your model in the experiment on unseen data, but I cannot see a comparison with non-ensemble models. Because your paper's innovation is the ensemble method, I think you should compare with non-ensemble models here to illustrate your highlight.

 

Suggestions to the Authors:

Refinement of Introduction and Related Work: You should first emphasize the significance of CAU cultivar classification, and then introduce the drawbacks or pain points of the related work instead of listing references in your first and second paragraphs. You should organize the related work in detail in the second section, "Related Literatures". Then you should explain the advantage of the method you propose (here, presumably the ensemble) compared to other methods, and explain why your method works. The penultimate paragraph of the introduction section should state the significance of the paper and should appear at the beginning of the introduction.

Re-evaluate the Model's Efficiency: In the Research Methods section, you should highlight the best results in bold. The parameters of the model and the experimental environment (equipment) should be described in detail. You only compare with single models to illustrate the generalization of your model in the experiment on unseen data, but I cannot see a comparison with non-ensemble models. Because your paper's innovation is the ensemble method, I think you should compare with non-ensemble models here to illustrate your highlight.

Reorganize the Literature Review: When you list other people's work, you do not need to list too many results (figures); you need to explain more about the advantages or drawbacks of the work.

 

 

Comments on the Quality of English Language

While it is important to provide detailed experimental results, it is also crucial to strike a balance to avoid overwhelming the reader. Aim for clarity and conciseness in your writing. Avoid redundant sentences or formulas. You need to improve your writing and correct the language errors.

 

Author Response

Reviewer 5:

The experiment and data of the paper are sufficient, but it needs to be fully modified.

1. Refinement of Introduction and Related Work: You should first emphasize the significance of CAU cultivar classification, and then introduce the drawbacks or pain points of the related work instead of listing references in your first and second paragraphs. You should organize the related work in detail in the second section, "Related Literatures". Then you should explain the advantage of the method you propose (here, presumably the ensemble) compared to other methods, and explain why your method works. The penultimate paragraph of the introduction section should state the significance of the paper and should appear at the beginning of the introduction.

            Answer : Thank you for your insightful comments and suggestions regarding the introduction of our manuscript titled "Automated Classification of Agricultural Species through Parallel-Artificial Multiple Intelligence System-Ensemble Deep Learning." We greatly appreciate the time and attention you have devoted to reviewing our work, and your feedback has been invaluable in enhancing the quality of our paper.

In line with your recommendations, we have reorganized the introduction section to more effectively emphasize the significance of the Centella Asiatica Urban cultivar (CAU) classification. We begin by highlighting the importance and challenges associated with the precise identification of CAU cultivars, particularly in the agricultural and pharmaceutical sectors. This is followed by a critical assessment of the limitations present in existing methods for plant classification, where we address the drawbacks and pain points you pointed out.

Furthermore, we have restructured the content to provide a detailed overview of related works in the second section, titled "Related Literatures." This section now thoroughly discusses previous methodologies and their respective contexts, ensuring a comprehensive understanding of the current state of research in this field.

Additionally, we have elaborated on the advantages of our proposed method, the P-AMIS-E model, comparing it to existing methods. We explain why our approach is beneficial and how it addresses the shortcomings of previous techniques. Lastly, the revised introduction concludes with a statement on the significance of our paper, highlighting its contribution to the advancement of cultivar classification and its practical implications.

We believe these revisions have substantially improved the clarity and flow of our introduction, aligning it more closely with the expectations for our research paper. We hope that these changes meet your approval and further the manuscript's suitability for publication. Thank you once again for your constructive feedback. We look forward to any further suggestions you may have.

2. Re-evaluate the Model's Efficiency: In the Research Methods section, you should highlight the best results in bold.

Answer : Thank you for your valuable feedback and suggestions. We appreciate your input in improving the quality of our article.

 

Regarding your comment about highlighting the best results in bold within the Research Methods section, we have carefully re-evaluated the model's efficiency as per your recommendation. We have now incorporated the necessary changes by highlighting the best results in bold to enhance the clarity of our findings. We believe this enhancement will provide readers with a more accessible understanding of our research outcomes. We sincerely appreciate your guidance in this matter and hope that these revisions align with your expectations. Once again, we would like to express our gratitude for your insightful feedback, which has undoubtedly contributed to the refinement of our article.

 

3. The parameters of the model and the environment of the experiment (equipment) should be declared in detail.

Answer : We appreciate your valuable feedback and your continued interest in our research paper. In response to your comment regarding the need for detailed information, we have made significant improvements to our manuscript to ensure transparency and clarity. To address this concern, we have included a dedicated section in our paper where we provide comprehensive details about the parameters of our model and the experimental environment. We understand the importance of providing this information to facilitate reproducibility and a better understanding of our work. The added information, as shown in Table 3, provides a clear and concise summary of the key parameters of our model and the specific details of the experimental setup, including equipment specifications. This table serves as a reference point for readers seeking in-depth insights into our research. We believe that these additions will enhance the overall quality of our paper and address your request for detailed information. We welcome any further comments or suggestions you may have and remain committed to improving our work based on your insights.

 

Thank you for your time and consideration.

 

4. You only compare with single models to illustrate the generalization of your model in the experiment on unseen data, but I cannot see a comparison with non-ensemble models. Because your paper's innovation is the ensemble method, I think you should compare with non-ensemble models here to illustrate your highlight.

Answer : We appreciate your valuable feedback on our research paper and have made the necessary adjustments. We have focused on addressing your request regarding the comparison with non-ensemble models.

In Table 8, we present a comprehensive comparison of various deep learning models, evaluating their performance metrics on the CALU-2 dataset. These models fall into three categories: single models, homogeneous ensembles, and our proposed heterogeneous ensemble.

Starting with single models, ResNet-101, Xception, NASNet-A Mobile, and MobileNetV3-Large, characterized by substantial sizes ranging from 84 MB to 113 MB, exhibit commendable accuracy, scoring between 90.1% and 92.6%. However, their training times are significantly high, particularly ResNet-101, which consumes 62.48 minutes. Testing time per image varies from 0.48 to 1.34 seconds, with MobileNetV3-Large being the fastest. These single models offer respectable accuracy but may have limitations due to their size and extensive training times.

Turning to SqueezeNet, ShuffleNetv2 1.0x, MobileNetV3, and InceptionV1, we encounter a diverse set of single models, characterized by relatively lightweight sizes (5 MB to 20 MB). These models achieve competitive accuracy scores, all surpassing the 75% mark, while presenting trade-offs between accuracy and computational efficiency.

Transitioning to homogeneous ensemble models built from SqueezeNet, ShuffleNetv2 1.0x, MobileNetV3, and InceptionV1, each of which unifies multiple instances of a single architecture, we observe remarkable accuracy, averaging around 94%. However, this gain in accuracy comes with longer training periods, ranging from 37.58 to 44.19 minutes, substantially longer than the corresponding single models.

In contrast, our proposed heterogeneous ensemble method, amalgamating diverse models, including SqueezeNet, ShuffleNetv2 1.0x, MobileNetV3, and InceptionV1, emerges with an outstanding accuracy of 98.5%. Notably, this remarkable accuracy is attained with significantly shorter training durations, as low as 34.59 minutes, alongside efficient testing times of 0.34 seconds per image. This underscores the potency of leveraging diverse model architectures within ensemble learning.

 

In summary, our proposed heterogeneous ensemble method excels in terms of accuracy and computational efficiency when compared to both single models and homogeneous ensembles. This emphasizes the advantage of harnessing diverse model architectures to achieve exceptional accuracy while optimizing computational resources, making it a compelling approach for image classification tasks, such as CAU cultivar classification. We believe that these additional details provide a more comprehensive understanding of our research, and we are grateful for your insightful feedback.

 

5. Reorganize the Literature Review: When you list other people's work, you do not need to list too many results (figures); you need to explain more about the advantages or drawbacks of the work.

Answer : Thank you for your feedback on the Literature Review section. We have carefully reorganized and rewritten the section to address your concerns and provide a more focused discussion on the advantages and drawbacks of the referenced works. In doing so, we aim to provide a clearer understanding of how these works contribute to the research and its significance. We believe that this revised approach will enhance the quality of the Literature Review section and provide more valuable insights for readers.

 

6. While it is important to provide detailed experimental results, it is also crucial to strike a balance to avoid overwhelming the reader. Aim for clarity and conciseness in your writing. Avoid redundant sentences or formulas. You need to improve your writing and correct the language errors.

Answer : Thank you for your valuable feedback regarding the presentation of experimental results and the overall writing style in the paper. We appreciate your guidance in achieving a better balance in providing details while ensuring clarity and conciseness. We have already taken steps to have the manuscript's English reviewed by a professional proofreading service from "MDPI" to address language errors and improve the overall quality of the writing.

 

We will carefully review the paper to identify and eliminate any redundant sentences or formulas to enhance the readability and effectiveness of the manuscript. Our goal is to provide a well-structured and easily understandable document that effectively communicates the research findings. Your feedback is highly appreciated, and we are committed to delivering an improved version of the paper that meets the desired standards.

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

1. Section 3.2.3, "Ensemble CNN models": provide a model/diagram (CNN-based) to support the idea.

2. Figure 6: in the confusion matrix, properly give horizontal and vertical labels.

 

Comments on the Quality of English Language

Minor modification is needed.

Author Response

Answer reviewer R2

Reviewer 1:

1. Section 3.2.3, "Ensemble CNN models": provide a model/diagram (CNN-based) to support the idea.

Answer : Thank you for your valuable feedback regarding Section 3.2.3 "Ensemble CNN Models" of our manuscript. We appreciate your insightful suggestions and have revised this section to incorporate a more detailed explanation of our ensemble CNN model and its decision fusion strategy. Below, we outline the key changes and additions made in response to your comment:

 

  • Specificity of CNN Models Used: As suggested, we have now included specific details about the CNN models used in our ensemble. This includes the integration of four SqueezeNet instances, three ShuffleNetv2 1.0x models, one Inception v1 model, and three MobileNetV3 models. We have elaborated on the reasons for selecting these models, focusing on their balance between size efficiency and predictive accuracy, aligning with the model size constraint of under 80 MB.
  • Introduction of AMIS in Decision Fusion: In response to your feedback, we have added a comprehensive description of the Artificial Multiple Intelligence System (AMIS) utilized in the decision fusion layer of our ensemble model. We explain how AMIS intelligently synthesizes the outputs from the various CNN models to achieve a unified and accurate classification decision. This addition emphasizes the novel application of AMIS in enhancing the ensemble model's effectiveness, particularly in fast-response applications.
  • Addition of the Diagram: Accompanying the revised text, we have included an updated diagram (Figure 4) that visually represents the architecture of our ensemble CNN model. This diagram clearly shows the flow from individual CNN models to the AMIS-based decision fusion layer, providing a visual aid to our textual description.


Figure 4. Proposed ensemble architecture.

We believe these revisions address your concerns and enhance the clarity and depth of our manuscript. The modifications not only provide a clearer understanding of the ensemble CNN model and its components but also highlight the innovative application of AMIS in decision fusion, crucial for the classification of CAU cultivars. We thank you again for your constructive feedback and hope that our revisions meet your expectations for technical accuracy and comprehensiveness.
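As a generic illustration of the decision-fusion layer described above: in the paper the per-model weights are searched by AMIS, whereas the sketch below simply takes the weights as given, so it shows only the weighted soft-voting arithmetic, not the AMIS optimizer itself. All names and numbers here are hypothetical.

```python
# Weighted soft-voting fusion: each CNN outputs a class-probability
# vector, and the fusion layer combines them with per-model weights
# (which, in the paper, would be tuned by AMIS rather than fixed).

def fuse(prob_vectors, weights):
    # prob_vectors: one softmax output per ensemble member.
    n_classes = len(prob_vectors[0])
    total_w = sum(weights)
    fused = [sum(w * p[c] for w, p in zip(weights, prob_vectors)) / total_w
             for c in range(n_classes)]
    return fused.index(max(fused)), fused

# Four hypothetical members voting over three cultivar classes.
probs = [[0.6, 0.3, 0.1],
         [0.2, 0.5, 0.3],
         [0.7, 0.2, 0.1],
         [0.4, 0.4, 0.2]]
cls, fused = fuse(probs, weights=[0.3, 0.2, 0.3, 0.2])
```

Because the weights act on full probability vectors rather than hard votes, this fusion can recover the correct class even when individual members disagree, which is the behavior the AMIS search exploits.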

 

2. Figure 6: in the confusion matrix, properly give horizontal and vertical labels.

Answer : Thank you for your insightful comments regarding the clarity of Figure 6 (now modified as Figure 7) in our manuscript. We have taken your feedback into consideration and made the necessary revisions to ensure that the confusion matrices are clearly understood.

Labeling of Axes: We have updated the horizontal and vertical axes labels to accurately reflect the predicted and actual classes, respectively. Each axis is now clearly labeled with the full names of the regions represented by the respective abbreviations, as per the list provided.

Updated Figure Caption: The caption of Figure 6 has been revised to include the full names corresponding to each abbreviation used in the confusion matrices. This ensures that readers can easily understand the regions each abbreviation represents without referring back to the text.

Manuscript Context: Additionally, we have incorporated a detailed explanation within the manuscript that elaborates on the interpretation of the confusion matrices. This explanation provides a clear understanding of how to read the matrices, the significance of the diagonal and off-diagonal cells, and the implications of these findings for the performance of our models across different regional cultivars.

 

We trust that these revisions address your concerns and enhance the interpretability of the results presented in Figure 6. We appreciate the opportunity to improve the quality of our manuscript and thank you again for your valuable feedback.

 

 

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

 Accept in present form

Comments on the Quality of English Language

Minor editing of English language required

Author Response

Answer reviewer R2

Reviewer 2

  1. Accept in present form

Answer : We are extremely pleased to receive your comment and would like to express our sincere gratitude for your positive assessment of our work. We appreciate the time and effort you have dedicated to reviewing our manuscript, and your insightful comments have been invaluable throughout the revision process. Thank you for your support and for considering our paper suitable for publication. We look forward to seeing the paper contribute to the literature and hope it will facilitate further research in the field.

2. Minor editing of English language required.

Answer : Thank you for your careful reading of our manuscript and for pointing out the need for minor English language editing. In response to your comment, we have thoroughly reviewed the entire manuscript to address this concern. We have conducted a detailed proofreading and editing process to ensure clarity, grammar, punctuation, and overall flow of the English language throughout the paper. Additionally, we sought the assistance of a native English-speaking colleague with expertise in our field to review the manuscript for any subtle nuances. We are confident that these revisions have improved the quality of the manuscript and have resolved the language concerns you raised. We hope that the changes meet your approval and that the manuscript is now suitable for publication. We appreciate your guidance and the opportunity to enhance our paper.

 

Author Response File: Author Response.pdf

Reviewer 4 Report

Comments and Suggestions for Authors

No Comments.

Comments on the Quality of English Language

Minor edits are needed.

Author Response

Answer reviewer R2

Reviewer 4

1. No Comments.

Answer : We are extremely pleased to receive your comment and would like to express our sincere gratitude for your positive assessment of our work. We appreciate the time and effort you have dedicated to reviewing our manuscript, and your insightful comments have been invaluable throughout the revision process. Thank you for your support and for considering our paper suitable for publication. We look forward to seeing the paper contribute to the literature and hope it will facilitate further research in the field.

2. Minor edits are needed.

Answer : Thank you for your careful reading of our manuscript and for pointing out the need for minor English language editing. In response to your comment, we have thoroughly reviewed the entire manuscript to address this concern. We have conducted a detailed proofreading and editing process to ensure clarity, grammar, punctuation, and overall flow of the English language throughout the paper. Additionally, we sought the assistance of a native English-speaking colleague with expertise in our field to review the manuscript for any subtle nuances. We are confident that these revisions have improved the quality of the manuscript and have resolved the language concerns you raised. We hope that the changes meet your approval and that the manuscript is now suitable for publication. We appreciate your guidance and the opportunity to enhance our paper.

 

Author Response File: Author Response.pdf

Reviewer 5 Report

Comments and Suggestions for Authors

This version is fine. Good work.

Author Response

Answer reviewer R2

 

Reviewer 5

This version is fine. Good work.

Answer : We are extremely pleased to receive your comment and would like to express our sincere gratitude for your positive assessment of our work. We appreciate the time and effort you have dedicated to reviewing our manuscript, and your insightful comments have been invaluable throughout the revision process. Thank you for your support and for considering our paper suitable for publication. We look forward to seeing the paper contribute to the literature and hope it will facilitate further research in the field.

 

Author Response File: Author Response.pdf
