Article
Peer-Review Record

A Comparative Evaluation of Transformers and Deep Learning Models for Arabic Meter Classification

Appl. Sci. 2025, 15(9), 4941; https://doi.org/10.3390/app15094941
by A. M. Mutawa 1,2,* and Sai Sruthi 1
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Submission received: 20 March 2025 / Revised: 23 April 2025 / Accepted: 28 April 2025 / Published: 29 April 2025

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors
  1. The paper focuses on the challenging task of Arabic poetry meter classification, systematically comparing various Transformer models and deep learning models, thereby filling the research gap in Transformer-based studies in this domain. By integrating LIME for model interpretability analysis, the depth of the research is significantly enhanced.
  2. The references lack sufficient coverage of studies published in the past two years. To strengthen the literature review, it is recommended to incorporate more recent publications to reflect the latest advancements in the field.
  3. Here are two papers that I suggest you cite: "Homophilic and Heterophilic-Aware Sparse Graph Transformer for Financial Fraud Detection" and "Meter classification of Arabic poems using deep bidirectional recurrent neural networks".

Author Response

Please see the attached file.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The paper presents a study on Arabic meter classification by comparing transformer-based models and deep learning architectures using half-verse data from the MetRec dataset. The research question, evaluating model performance for meter classification in Arabic poetry, is well articulated and addressed through a comprehensive experimental setup. The paper is original in its use of half-verse data and the breadth of models considered, notably demonstrating that CAMeLBERT outperforms other architectures with 90.62% accuracy. This work adds value to the field by highlighting the effectiveness of transformer models in capturing metrical patterns in Arabic, a language with complex morphology and prosody.

The methodology is generally sound, though the paper would benefit from clearer descriptions of data preprocessing and more rigorous hyperparameter optimisation. The conclusions are consistent with the presented evidence and clearly respond to the research question. References are largely appropriate, though incorporating more recent literature and classical linguistic sources would strengthen the context. Tables and figures are informative, with confusion matrices and LIME visualisations effectively supporting the analysis, though minor enhancements to labelling and commentary are suggested.

For further enhancement, the following suggestions are proposed:

  1. Improve the clarity of the English language.
  2. Include a section on the study's limitations and future work. This addition will provide a clearer understanding of the research's context and its potential impact.
  3. Expand the literature review to include more recent references, ensuring all significant developments in NLP for Arabic text processing are covered.
  4. Enhance the methods section with more details on data preprocessing and model training specifics to improve reproducibility. Additionally, enrich the introduction with a more comprehensive background on NLP techniques used in Arabic poetry analysis and recent advancements to better contextualise the study.

Author Response

Please see the attached file.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

The authors propose a paper on the comparative evaluation of transformers and deep learning models for Arabic meter classification. In their proposal, they evaluate and compare Arabic-BERT, AraBERT, MARBERT, AraELECTRA, CAMeLBERT, and ARBERT against the BiLSTM and BiGRU deep learning models. Half-verse poems were analyzed and batch sizes were varied. They employed different encoding methods: subword tokenizers (WordPiece and SentencePiece) for the transformer models and character-level encoding for the DL models. In addition, they use the LIME machine learning tool to analyze features for interpretability. The present work is recommended to be published taking into consideration the following suggestions:
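To illustrate the character-level encoding described above for the BiLSTM/BiGRU models, here is a minimal sketch. The vocabulary-building scheme, the padding convention, and the sample half-verse are illustrative assumptions, not details taken from the paper or the MetRec dataset itself:

```python
# Sketch of character-level encoding for DL models (BiLSTM/BiGRU).
# Each character becomes an integer id; sequences are right-padded
# to a fixed length so they can be batched.

def build_char_vocab(texts):
    """Map each distinct character to an integer id; 0 is reserved for padding."""
    chars = sorted({ch for text in texts for ch in text})
    return {ch: idx + 1 for idx, ch in enumerate(chars)}

def encode(text, vocab, max_len):
    """Encode a string as a fixed-length list of character ids."""
    # Unknown characters fall back to 0 here (shared with padding) for brevity.
    ids = [vocab.get(ch, 0) for ch in text[:max_len]]
    return ids + [0] * (max_len - len(ids))  # right-pad with 0

half_verses = ["قفا نبك من ذكرى حبيب ومنزل"]  # illustrative half-verse
vocab = build_char_vocab(half_verses)
encoded = encode(half_verses[0], vocab, max_len=40)
print(len(encoded))  # fixed length regardless of the verse's character count
```

The fixed-length id sequences would then feed an embedding layer followed by the bidirectional recurrent layers; transformer models instead receive subword ids from their own WordPiece or SentencePiece tokenizers.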

*In the abstract add more about the experimental results obtained in the work.

*Add more about examining meters in poetry in the introduction.

*Include studies on natural language processing (NLP) methods focused on Arabic meter classification. Also, add detail describing the use of CAMeLBERT-based models focused on the study of different variants of Arabic, such as Modern Standard Arabic (MSA), Dialectal Arabic (DA), and Classical Arabic (CA).

*In section 3.1, Highlight of the Dataset [30], some useful statistics on the dataset are missing.

*In section 3.1, Materials and Methods, some examples with images of the labels and verses used in the dataset are missing.

*In section 3.4, they use several libraries, such as TensorFlow 2.7, Transformers 4.48.1, Scikit-learn 1.0, PyArabic 0.6.14, and lime 0.2.0.1.

*In the conclusion, it is recommended to restructure and highlight that there are no prior studies based on the half-verse method or on transformer models.
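On the library-versions point above: one common way to make such a setup reproducible is to pin the versions the reviewer lists in a requirements file. This is a sketch assuming a pip-based environment; the package names follow standard PyPI conventions and are not taken from the manuscript itself:

```text
tensorflow==2.7
transformers==4.48.1
scikit-learn==1.0
PyArabic==0.6.14
lime==0.2.0.1
```

Installing from such a file (`pip install -r requirements.txt`) recreates the environment with the exact versions used in the experiments.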

Comments for author File: Comments.pdf

Author Response

Please see the attached file.

Author Response File: Author Response.pdf

Reviewer 4 Report

Comments and Suggestions for Authors

This manuscript investigates the use of advanced deep learning models, such as transformers, to classify Arabic poetry that follows complex rhythmic patterns.


The main question addressed by the research presented in this manuscript is the effectiveness of transformer-based models compared to traditional deep learning models for the automatic classification of Arabic poetry meters (prosody). In particular, the study focuses on evaluating different pre-trained transformer models (Arabic-BERT, AraELECTRA, …) and deep learning models (BiLSTM, BiGRU, …) to assess their performance in accurately categorizing the complex rhythmic patterns in Arabic poetry.


The topic is original and relevant to the field, and in the introductory section the authors have clarified the research gap that this study aims to address: the limited investigation of the use of transformer-based models for the classification of Arabic poetry meters. By investigating the effectiveness of various pre-trained transformer models and their interpretability using techniques such as LIME, this manuscript seeks to improve the understanding and applicability of these models in the field of Arabic natural language processing (NLP) and poetry analysis.


Regarding the methodology: the authors provide a comprehensive description of the methodology as well as a graphical representation of the proposed approach.


The authors have presented their work well from both a practical and a theoretical point of view. There are 13 figures and 8 tables. The work is technically sound, and the references given by the authors are applicable and relevant; there are 44 citations.


I have no particular concerns regarding this manuscript; the conclusions are consistent and supported by the experimental results.


Please consider the following corrections and comments:


Please correct the reference "33. 최용석; 이공주. Performance Analysis of Korean Morphological Analyzer based on Transformer and BERT. Journal of KIISE 588 2020, 47, 730–741, doi:10.5626/JOK.2020.47.8.730." because the first symbols (the authors' names in Korean) in the pdf version of the manuscript are unclear.


Author Response

Please see the attached file.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

Thank you for providing the revised version. All comments have been thoroughly addressed.
