Open Access Article
Informatics 2019, 6(1), 13; https://doi.org/10.3390/informatics6010013

Creating a Multimodal Translation Tool and Testing Machine Translation Integration Using Touch and Voice

1 IOTA Localisation Services/Trinity Centre for Literary and Cultural Translation, Trinity College Dublin, D02 CH22 Dublin, Ireland
2 ADAPT Centre/School of Applied Language and Intercultural Studies, Dublin City University, D09 Y074 Dublin, Ireland
3 ADAPT Centre/School of Computing, Dublin City University, D09 Y074 Dublin, Ireland
* Author to whom correspondence should be addressed.
Received: 21 December 2018 / Revised: 13 March 2019 / Accepted: 15 March 2019 / Published: 25 March 2019
(This article belongs to the Special Issue Advances in Computer-Aided Translation Technology)
PDF [3315 KB, uploaded 25 March 2019]
Abstract

Commercial software tools for translation have, until now, been based on the traditional input modes of keyboard and mouse, latterly with a small amount of speech recognition input becoming popular. In order to test whether a greater variety of input modes might aid translation from scratch, translation using translation memories, or machine translation postediting, we developed a web-based translation editing interface that permits multimodal input via touch-enabled screens and speech recognition in addition to keyboard and mouse. The tool also conforms to web accessibility standards. This article describes the tool and its development process over several iterations. Between these iterations we carried out two usability studies, also reported here. Findings were promising, albeit somewhat inconclusive. Participants liked the tool and the speech recognition functionality. Reports of the touchscreen were mixed, and we consider that it may require further research to incorporate touch into a translation interface in a usable way.
Keywords: computer-aided translation; usability; agile development; multimodal input; translation technology
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cite This Article

MDPI and ACS Style

Teixeira, C.S.C.; Moorkens, J.; Turner, D.; Vreeke, J.; Way, A. Creating a Multimodal Translation Tool and Testing Machine Translation Integration Using Touch and Voice. Informatics 2019, 6, 13.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Informatics EISSN 2227-9709, published by MDPI AG, Basel, Switzerland.