Article
Peer-Review Record

A Text Segmentation Approach for Automated Annotation of Online Customer Reviews, Based on Topic Modeling

Appl. Sci. 2022, 12(7), 3412; https://doi.org/10.3390/app12073412
by Valentinus Roby Hananto 1,2,*, Uwe Serdült 3,4 and Victor Kryssanov 3
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 25 February 2022 / Revised: 23 March 2022 / Accepted: 23 March 2022 / Published: 27 March 2022
(This article belongs to the Special Issue Advanced Computational and Linguistic Analytics)

Round 1

Reviewer 1 Report

This paper presents a text segmentation approach for the automated annotation of online customer reviews, based on topic modeling. I have the following comments:

  1. Some related work is presented in the Introduction section. This is normal, but I suggest combining the first two sections into a single section named "Introduction and Related Work".
  2. Please provide a section at the end that describes the limitations of the related work.
  3. Regarding the dataset (Section 4.1): is it balanced or imbalanced? If it is imbalanced, how did you address this issue? (A minimal balance check is sketched after this list.)
  4. Explain the Choi dataset in more detail. For example, what types of documents does it contain?
  5. Describe the limitations of the proposed algorithm.
  6. The authors state that the proposed approach produces results similar to or better than those of the baseline methods. My question: how does the complexity of the proposed system compare with that of the other systems?
  7. I recommend sending the manuscript to a professional English editor; some sentences are excessively long.
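To make the balance question in comment 3 concrete, here is a minimal sketch of how class balance might be checked and, if needed, corrected by oversampling. This is not the authors' code; the file name and the "label" column are hypothetical placeholders.

```python
# Hypothetical sketch: check class balance in a labeled review dataset
# and oversample minority classes if the distribution is skewed.
from collections import Counter

import pandas as pd
from sklearn.utils import resample

reviews = pd.read_csv("annotated_reviews.csv")  # placeholder file name
counts = Counter(reviews["label"])              # "label" is a placeholder column
print(counts)  # shows whether the annotation classes are balanced

# One common remedy for imbalance: oversample each class up to the
# size of the largest class before training.
max_size = max(counts.values())
balanced = pd.concat(
    resample(group, replace=True, n_samples=max_size, random_state=42)
    for _, group in reviews.groupby("label")
)
print(Counter(balanced["label"]))  # every class now has max_size rows
```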

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

This paper addresses an interesting and important problem. My main concern is with how the results are evaluated, specifically the "gold standard" labels. When a human labels an observation as class c, an algorithm cannot be counted as wrong for assigning that same label, unless the gold standard is defined as the set of labels on which all human reviewers agree. Either use only those labels on which all annotators agree (preferred), or allow for more flexibility in scoring. Tables 5 and 6 illustrate this issue (although they use the estimated rather than the human labels); I would dispute the labels assigned to the first two segments. Also, it is implied that the bronze-standard labels are used only for training, not for testing, but this should be stated explicitly. Overall, the paper makes a good case for the proposed method, but the gold-standard set needs to be adjusted.
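The unanimous-agreement filtering the reviewer prefers could be implemented roughly as follows. This is a hypothetical sketch, not code from the paper; the file name, the three annotator columns, and the output column are all placeholders.

```python
# Hypothetical sketch: restrict the gold standard to observations on
# which all human annotators assigned the same class.
import pandas as pd

labels = pd.read_csv("gold_standard_candidates.csv")   # placeholder file
annotator_cols = ["annotator_1", "annotator_2", "annotator_3"]  # placeholders

# A row qualifies only if every annotator chose the same label,
# i.e., the number of distinct labels across annotators is 1.
unanimous = labels[labels[annotator_cols].nunique(axis=1) == 1].copy()
unanimous["gold_label"] = unanimous[annotator_cols[0]]

print(f"Kept {len(unanimous)} of {len(labels)} observations.")
```

Filtering this way trades evaluation-set size for label reliability, which matches the reviewer's stated preference.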

Author Response

Please see the attachment.

Author Response File: Author Response.docx
