Article
Peer-Review Record

Speech Synthesis in the Translation Revision Process: Evidence from Error Analysis, Questionnaire, and Eye-Tracking

Informatics 2019, 6(4), 51; https://doi.org/10.3390/informatics6040051
by Dragoş Ciobanu, Valentina Ragni and Alina Secară
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 1 July 2019 / Revised: 1 November 2019 / Accepted: 5 November 2019 / Published: 11 November 2019
(This article belongs to the Special Issue Advances in Computer-Aided Translation Technology)

Round 1

Reviewer 1 Report

Very interesting research on a topic that has not attracted much interest among translation studies scholars. The idea of integrating ASR in the revision process is challenging, as already shown by the hardware problems you encountered. This is an exploratory study, but it is promising, and I hope you will be able to reproduce it with more participants. I am looking forward to reading the next paper with all the other results.

I have only a few minor comments, marked in the text.

Comments for author File: Comments.pdf

Author Response

Reviewer 1's in-line comments were all addressed in the revised version.

 

Reviewer 2 Report

It would be valuable to include a more extensive review of eye-tracking studies of translation (revision, evaluation, etc.) in the literature review.

Author Response

Comment:

It would be valuable to include a more extensive review of eye-tracking studies of translation (revision, evaluation, etc.) in the literature review.

Response:

We have restructured our article in order to present a better-argued Literature Review section and methodology.

 

Reviewer 3 Report

I read the manuscript titled "Speech synthesis in the translation and revision process: Evidence from eye-tracking and error analysis" with much interest. It is a well-written paper with challenging research questions that uses eye-tracking (ET) technology in an interesting way. To my knowledge, it is the first study in translation studies to observe translation processes with integrated speech synthesis using ET. However, as the authors indicate themselves, the paper only presents an 'initial' data analysis, which makes the paper rather ambiguous to me. What is the paper's main aim?
(a) Presenting an observation method that is suitable for this kind of setting;
(b) Presenting the study's results (qualitative and quantitative);
(c) Formulating practical advice to translators.
When I read the abstract, the focus seems to be on (a) and (b); when I look at the research questions (Table 1), the focus is on (b). When I read the complete article, it is unclear to me…

Based on this concern, and my further comments listed below (mainly related to the experimental design), I would recommend that the editors reject this paper, as I think that the problematic design, in combination with the confusing focus of the paper, is hard to repair in a revised version.

Main comments:

Abstract
Although the paper itself mainly focuses on reporting some results of the study at hand, the abstract does not report any results, which seems unbalanced to me.
Research questions
Five professional translators and ten trainees were involved in the study. This makes it hard to claim that a quantitative analysis will be presented in this paper (alongside a qualitative analysis). Moreover, the difference in size between the two groups, and the substantial within-group differences, also make it impossible to compare the groups as such. I would contend that a well-motivated case analysis would be a much better option in this case.

The RQ about eye-tracking is hardly introduced or motivated (see also page 6, Research design). A theoretical framework is missing with respect to several aspects of these questions.

Experimental texts
I understand that the authors strive to keep the texts as realistic as possible, but I do not understand the imbalance with respect to the number of errors and their distribution over the error categories. For instance, text 1 contains 6 stylistic and 0 terminological errors, as opposed to text 2 with 10 stylistic and 1 terminological error. I do not see the rationale behind these choices, which complicates the analyses to a large extent.
Experimental design
There is an extreme imbalance in the distribution of experimental groups and tasks, which is neither explained nor taken into account in the further analysis. The number of participants per group varies between 1 and 6, and none of the professional translators participated in either of the G2 groups. Next, in the Results section, the authors mention that they will report their results using a subset of these data (and for RQ3 a case study is presented). Moreover, no details are reported about possible task or order effects (as might be inferred from Figures 1-3). From a methodological point of view this is quite problematic (and hard to repair unless extra participants are added).
No interrater reliability has been reported.

Finally, the section ends by summing up characteristics of the design that allow the researchers to address aspects of the research question. However, some of these are not theoretically introduced (e.g., the relation between fixation length and problematic 'phrases'), while others are really doubtful to me (e.g., real-life working conditions), as the eye tracker imposed artificial experimental conditions and the participants were not used to the speech synthesis. The limitations related to this design are not explicitly discussed in the Discussion section.

Data preparation
Explain the extraction of individual and cumulative fixation data more in depth.
Results section
Tables contain irrelevant information. For instance, in Table 4, the condition reported in the 4th and 5th columns does not add any information, as no order or task effects were reported; moreover, the ID ordering is unclear.

Figures 1 to 3 are not really informative and are hard to interpret; moreover, the standard deviation is missing (boxplots?). I think that if the authors focused more explicitly on the differences between SS and NS (and further aggregated the data), this section would be much more self-explanatory. (Non-parametric) statistical tests are missing.

Distinguishing between the trainees and professional translators (Figures 2 and 3) is a focus that I would avoid, taking the number of participants in both groups into account.

There is no full textual explanation of the data presented in the tables and the figures.

Questionnaire:
- why do the authors opt for a very rough three-level Likert scale?
- the results are not related to the participants' experience.

Eye-movement analysis
- frame this analysis more explicitly as a case study (not 'small subset');
- explain the selection criteria more in depth (representative case for the corpus?);
- AOIs: how are these defined (content-wise and technically);
- most information in the first paragraph should be included in the method section;
- what about fixations during revision episodes (versus reading)?
- there is a missing reference (an 'Error!' indication);
- I expect much more explanation and a deeper analysis of these case data (e.g., with respect to Figures 4 and 5, and the relevant differences between the two figures). It is unclear now how the observed differences between Figures 4 and 5 can be related to the use of speech.

Results
"First of all it was surprising to see that not even professional translators corrected all the errors…" Why is that surprising?
The authors try to generalize their findings. It is very delicate to base conclusions such as 'higher' and 'lower' on descriptive data (with high variance).

Author Response

We are very grateful to the reviewer for the detailed and helpful comments. We believe we have addressed all of them by slightly modifying our title and producing a restructured paper with a much narrower and more explicit focus, which also clearly highlights our limitations and no longer makes generalisations about our results.

In addition to our heavily revised version, please find our answers to all the specific points raised in the attached file.

Author Response File: Author Response.docx

 

Round 2

Reviewer 1 Report

Just a few suggestions for lay-out:

alignment under 2.1, 1) → alignment

tables are split across two pages

Figures 1 and 2 are now too small, as is Figure 3; they are not legible when printed.

Very interesting work!

Author Response

Response to comments attached. Many thanks once again for the very useful suggestions.

Author Response File: Author Response.docx
