Review

A Systematic Mapping of Translation-Enabling Technologies for Sign Languages

by Luis Naranjo-Zeledón 1,2,*, Jesús Peral 2,*, Antonio Ferrández 2 and Mario Chacón-Rivas 1

1 Instituto Tecnológico de Costa Rica, Inclutec, 30101 Cartago, Costa Rica
2 Department of Software and Computing Systems, University of Alicante, San Vicente del Raspeig, 03690 Alicante, Spain
* Authors to whom correspondence should be addressed.
Electronics 2019, 8(9), 1047; https://doi.org/10.3390/electronics8091047
Submission received: 12 August 2019 / Revised: 6 September 2019 / Accepted: 14 September 2019 / Published: 18 September 2019
(This article belongs to the Section Artificial Intelligence)

Abstract

Sign languages (SL) are the first language of most deaf people. Consequently, bidirectional communication between deaf and non-deaf people has always been a challenging issue. Sign language usage has increased due to inclusion policies and general public agreement, and this must become evident in information technologies, across the many facets that comprise sign language understanding and its computational treatment. In this study, we conduct a thorough systematic mapping of translation-enabling technologies for sign languages. This mapping follows the most recommended guidelines for systematic reviews, i.e., those pertaining to software engineering, since the interdisciplinary areas of accessibility, human-computer interaction, natural language processing, and education all belong to the ACM (Association for Computing Machinery) Computing Classification System and are directly related to software engineering. A software tool under development, called SYMPLE (SYstematic Mapping and Parallel Loading Engine), facilitated the querying and construction of a base set of candidate studies. A great diversity of topics has been studied over the last 25 years or so, and this systematic mapping allows for a convenient visualization of predominant areas, venues, top authors, and different measures of concentration and dispersion. The systematic review clearly shows a large number of classifications and subclassifications interspersed over time. This is an area of study in which there is much interest, with a basically steady level of scientific publications over the last decade, concentrated mainly in Europe. The publications by country, nevertheless, usually favor the local sign language.

1. Introduction

This study arises from the need for a broad outlook on sign languages (SL) and their treatment by computational means, motivated by the lack of research evidence that provides structure to this area, which is transdisciplinary by nature. We identify four areas that most clearly bear on the problem at hand: accessibility, human-computer interaction, natural language processing, and education. This study is particularly relevant because, as far as the authors are aware, there are no systematic mappings covering all the areas that make up translation-enabling technologies. The importance of the topic led us to formulate this study to understand what the scholarly community has contributed in the areas that integrate the processing of sign languages through computers. Figure 1a,b shows the results of searching a scholarly search engine (Google Scholar) for mappings and reviews, respectively. The indexing services of Scopus and Web of Science were also used to corroborate the searches. The search strings were demanding, in the sense that titles had to contain the keywords “systematic [mapping/review]” and “sign language”, hence excluding beforehand those titles only vaguely related to the object of study. In fact, for systematic mappings we obtained no results, while systematic reviews returned four results, three of them dealing with health sciences topics (with no computational focus) and one of them recently published [1] that deals with the specifics of teaching LIBRAS (Brazilian Sign Language). The results of that study show that the pedagogical approaches and theories used in the planning and construction of tools for LIBRAS are perfunctory.
The systematic studies pertaining to the medical sciences deal with the multifactorial elements of musculoskeletal disorder pathologies among SL interpreters [2]. The aim of [3] was to provide a tool for parents, clinicians, researchers, and decision-makers looking for evidence in the field of newborn screening, as well as early intervention outcomes, by means of a better understanding of treatments and a timelier introduction of the most effective interventions. Another study reported on the outcomes of children with cochlear implants and found that language development was the most frequently reported outcome, followed by speech and speech perception [4].
The lack of systematic studies that integrate the four major areas of interest, as well as the appearance of results unrelated to the object of study, were both cogent indicators of the need to conduct this research. There is only one result related to the area of interest, specifically addressing education in LIBRAS, which we take into account further on in this study.
This systematic study adheres largely to the guidelines suggested by [5,6] and makes the following contributions:
  • Provides the scholarly community interested in translation-enabling technologies for sign languages with a broad vision on the subject.
  • Quantifies the categories, subcategories, and other relevant criteria that allow sectioning the object of study.
  • Displays the results by means of different data visualization techniques.
The remainder of the paper is structured as follows: Section 2 deals with the background and related work, Section 3 explains the research method in detail, Section 4 presents the results of the systematic mapping, Section 5 provides an evaluation of the mapping process, Section 6 discusses the results, and Section 7 concludes the paper.

2. Background and Related Work

2.1. Sign Languages Overview

Sign languages are powerful means of communication due to their great expressiveness. In many countries such as Costa Rica, Spain, New Zealand, Thailand, and South Africa, they have been declared an official language [7]. In order to achieve a higher level of inclusion, translation systems have been designed between their main users, deaf people, and the rest of the community. Their computational treatment, however, is complex and requires integrating several elements, such as the combination of manual signs with facial gestures, compliance with linguistic precepts, and particularities of the geographical region of the signers. The deaf culture plays a preponderant role in the conceptualization and evaluation of all efforts to generate projects with a broad impact on society.
Sign languages that have undergone a process of maturation have well-defined grammars, parallel corpora, and datasets for the purpose of experimentation. Algorithms are proposed and documented against a baseline to determine their effectiveness. In these efforts, computer scientists, linguists, educators, and members of the deaf community participate in order to deal with the complexities of a phenomenon as broad as that of communication.
The many translation-enabling technologies available for sign languages can be conveniently grouped into four categories [8] that clearly stand out: accessibility, human-computer interaction, natural language processing, and education. The difference between computational linguistics and natural language processing is still a matter of debate, as discussed by [9], and is not the focus of this study; we refer generically to this area as natural language processing. These areas are by no means exclusive; quite the contrary, they are complementary, and a real system aimed at people with disabilities is expected to adopt these categories in an appropriate manner. A machine translation system must contemplate all these components not only to produce correct translations but also to endow the users and the community around them with the elements of a successful solution.

2.2. Technologies Used in SL Machine Translation

The authors in [10] studied potential technology solutions for e-learning platforms through translation of sign language. They presented a list of potential technology options for the recognition, translation, and presentation of SL, as well as potential problems, by analyzing assistive technologies, methods, and techniques. Their analysis shows that some technology solutions are under research and development to become available for digital environments. However, some critical challenges must still be solved, and a strong integration of these technologies in e-learning platforms is still missing, since there are no immediate solutions for synchronous real-time communication between deaf and non-deaf people.
Avatars are widely used. The authors in [11] developed an agent with a high level of detail that represents gestures in Spanish Sign Language. Several research departments have attempted to couple the recognition of gestures with the shapes and movements of hands, arms, and trunk, but the main reported difficulty is the construction of the animation.
Some approaches use voice recognition techniques to translate from spoken language into sign language, while others translate written language into sign language [12,13,14,15,16,17]. The authors in [11] note that voice recognition is limited to specific domains and is not very efficient, at a rate of 8 s per sentence (an impractical solution for real time). Sign languages are natural and evolve over time, which implies a need to update the grammatical basis. Moreover, [18] indicates that grammar and double-meaning gestures make translation extremely complicated.
One of the research goals for mapping studies is to determine the necessity to undertake a full systematic review [19]. In this study, this necessity has become quite evident since there is no coherent body of knowledge tying together the many technological components towards high quality solutions.

2.3. Applications Currently Available

In addition to the great efforts made within the scholarly community, there are already several industry systems available to the public. Industry partly feeds on ideas that have emerged in academia, but it can also contribute additional ideas and verify or refute arguments that require private investment to be put into practice. The authors wish to emphasize that this section complements the findings of the systematic search with results obtained from a general-purpose search engine rather than from academic repositories. In the next subsections the reader will find a description of these systems.

2.3.1. Mobile Applications Already Available

It should be noted that the effort invested by the developers of these proposals is of great importance, as they make a range of applications for widespread use available to users through mobile phones.
Hand Talk performs digital and automatic translation into Brazilian Sign Language (LIBRAS) through two main products: a website translator, which makes websites accessible in Libras by inserting a button, and an application that takes text or audio as an input and automatically translates it. Their developers remark that these products are complementary to the work of the LIBRAS interpreters [20].
The purpose of Helloasl is to assist the American Sign Language (ASL) learning process. According to their authors, it enables people to meet and interact in a convenient and enjoyable learning experience beyond the basics [21]. They offer interested people an application and a website both designed for learning purposes.
Visualfy is a product developed by Marc Tamarit and Toni Alcalde, consisting of a network of connected microphones that deaf people plug in around their home. The microphones listen for common household sounds and translate them into visual signals that deaf people can interpret easily [22].

2.3.2. Applications for Web, Windows, and Android

TextoSIGN is a dictionary that converts text into Spanish Sign Language (LSE). The text to be converted is entered into a search box, after which a video with an animated 3D avatar is generated. TextoSIGN has about 1500 words, which will increase in future updates. Words can be browsed by category and added to favorites for quick access. There is a free, limited lite version and a full paid version. For now, it works on the Windows platform and is available for Android, but it is planned to reach the Apple App Store soon [23].
Signslator is a Flash-enabled website where a text is written and translated into sign language. It is possible to read the words in real time as the user writes them, and it is even possible to change the words in a sentence to gradually learn the sign language. It can translate more than 12,000 words [24].

2.3.3. Other Applications

MyVoice is a project by students from the University of Houston. It is a device in the prototype phase that converts sign language to voice. It reads the gestures and symbols of the signing person and translates them into words read aloud to those who do not master that language. The equipment is portable and includes a microphone, speaker, video camera, and screen. The reverse process is also possible with the help of the screen: the user speaks to MyVoice and the equivalent sign language appears on the screen [25].

2.3.4. Wearables Incorporated into SL Translation

Hadeel Ayoub, a student at Goldsmiths, University of London, developed the SignLanguageGlove, a glove that converts gestures into understandable text, displayed on a screen or spoken through a speaker integrated into the glove. It uses five flex sensors, one on each finger, connected to a motherboard that sends the information to a four-digit display or to the speaker [26].
At the Instituto Politécnico Nacional (IPN) in Mexico, a prototype glove was developed to translate sign language into text. After recognizing the signal indicated by the user, the glove sends the information to a mobile phone via Bluetooth so that an application, already published on Google Play, translates it into text. The development is a prototype so it needs improvements, and there are no concrete future marketing plans [27].

2.3.5. Real Time SL to Text and Speech

Fundación Vodafone España, the Spanish foundation of the UK-headquartered mobile phone operator Vodafone, presented a proposal called Showleap to facilitate communication between hearing and deaf people. The founders of the project, Teo Atienza and Emilio Guerra, indicate that the software tries to “build the translator based on what deaf people demand”. Initially, two bracelets were used that worked well with few signs but failed when trying to increase the database to 20,000 signs. More importantly, deaf people who tried them said that “they didn’t want to have to put on anything extra to start a normal conversation”. Hence, the developers replaced the bracelets with a camera in the user’s terminal that detects movements and recognizes images of the person who is signing. Software, which works on the user’s mobile, tablet, or laptop, translates the signs in real time and converts them to text and voice. When the hearing person speaks to the deaf person, the application performs the reverse process, converting the words to text for the deaf person to read on their device [28].

2.3.6. Systems Incorporating Deep Learning

The aforementioned Showleap uses deep learning techniques, as well as a program that consists of three neural networks: the first processes the video, the second identifies the signs and interprets them, and the third joins the signs and gives meaning to the phrase [29].
Evalk, a Netherlands-based start-up, has developed an artificial intelligence (AI) powered application for deaf people, which promises a low-cost and superior approach to translating sign language into text and speech in real time. The digital interpreter works by placing a smartphone in front of the user while the application translates gestures and signs into text and speech. The app, called GnoSys, uses neural networks and computer vision for recognition and then translates into speech. Evalk executives state that the translation software on the market is slow or expensive, relying on old technology not suitable for scaling to markets outside the country of origin. Their application can be used on multiple devices such as smartphones, tablets, laptops, or PCs. It translates quickly as the person signs, translates any sign language, and can be plugged into a variety of products, such as video chats, AI assistants, etc. The interpreter for the deaf relies on neural networks, and all the translation happens in the cloud. It requires a camera on the device facing the signing person and a connection to the Internet [30].
Students at the Berghs School of Communication in Sweden came up with the idea for the Google Gesture application as a means to enable signed conversation with hearing people. It requires a pair of wrist bands to track the motions of the signing person and send them to the Gesture app, which translates the motions into speech in real time. Using electromyography (a technique for recording the electrical activity produced by the skeletal muscles [31]), it analyzes positions and muscle activity in the hand and forearm. In this way, Google Gesture can identify the signs being made. A release date for the application has not been announced [32].
Google AI Labs has developed an algorithm capable of tracking the movement of the user’s hands after mapping them with the camera of their mobile phone. The solution uses machine learning to compute 21 three-dimensional key points of a hand within a video frame. To reduce the hardware requirements, the amount of data the algorithm needs was reduced so that the response time is shorter: instead of detecting the position and size of the entire hand, the palm is detected first, as it is more distinctive and regular, and then the fingers.
A total of 30,000 images with different poses and lighting were analyzed. Google AI Labs researchers explain that the novelty of their proposal is that it breaks the current approach, based on powerful desktop environments, with good real-time performance despite working on mobile phones. They claim that they will try to improve accuracy and announce the availability of the source code for other researchers [33].
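The two-stage pipeline described above (palm detection followed by regression of 21 three-dimensional hand landmarks) is exposed to developers through libraries such as MediaPipe Hands. The following is a minimal sketch, assuming that library and a hypothetical input frame, of how such landmarks could be extracted as input features for a sign recognizer; it illustrates the technique rather than the cited implementation.

```python
# A minimal sketch, assuming the MediaPipe Hands package and a hypothetical
# input image "signer_frame.jpg"; not the pipeline used in the cited work.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

# The model first localizes the palm and then regresses 21 3D landmarks per
# detected hand, mirroring the two-stage approach described above.
with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
    image = cv2.imread("signer_frame.jpg")
    results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for point in results.multi_hand_landmarks[0].landmark:
            print(point.x, point.y, point.z)  # normalized 3D coordinates
```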
The Live Transcribe application, from Google, transcribes voice to text in real time for 70 languages, which represents coverage of more than 80% of the world’s population. The application provides automatic subtitles for the conversation taking place around the user [34].

2.3.7. Systems for Teaching Deaf People to Read

Huawei has joined the European Union of the Deaf and the British Association of the Deaf, in addition to other companies, to create StorySign, an application based on artificial intelligence that reads children’s books and converts them into sign language, teaching deaf children to read. The application can run on any Android device with version 6.0 or higher and uses Huawei artificial intelligence and the mobile camera to detect words. Its operation is simple: once the application is opened, a title is chosen from the StorySign library, and the mobile is held over the pages of the physical copy. The avatar then translates the story into sign language as the corresponding written words are highlighted. StorySign currently supports ten languages (English, French, German, Italian, Spanish, Dutch, Portuguese, Irish, Belgian Flemish, and Swiss German) with one book each. The goal of Huawei is to incorporate many more books in the future [35].

2.4. Findings and Challenges

Trends and limitations in sign language translation systems have been evident for the academic community and for the deaf community as well. For both recognition and synthesis purposes, systems normally limited to a particular domain have been developed, such as airports [36,37,38,39], train stations [40,41,42,43], or hospitals [44,45,46,47,48].
Classical approaches, such as rule-based ones that lead to the creation of dictionaries and require deep knowledge of the languages involved, still persist [49]. However, statistical approaches continue to prove extremely effective thanks to the use of parallel corpora between two languages [50]. A disadvantage is that in parallel corpora the phrases of a source or destination language may appear translated into several sentences in their counterpart [51]. Hybrid approaches that combine rules with statistics have been studied and yield very good results [52]. Hybrid systems of rules post-processed by statistics, or the inverse approach of rule-guided statistics, have been proposed.
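To make the contrast concrete, the following is a minimal sketch of the rule-based, dictionary-style approach in its simplest form: a lexicon maps written words to sign glosses, with a deletion rule for function words and a fingerspelling fallback for unknown words. The lexicon and rules are illustrative assumptions, not taken from any of the surveyed systems; statistical and hybrid systems replace or post-process this lookup with models learned from parallel corpora.

```python
# A minimal, illustrative rule-based text-to-gloss step; the lexicon and the
# fingerspelling fallback are assumptions for the example, not a real system.
LEXICON = {
    "my": "MY",
    "name": "NAME",
    "is": None,         # function word dropped by a deletion rule
    "anna": "A-N-N-A",  # proper noun mapped to a fingerspelled gloss
}

def text_to_gloss(sentence: str) -> list:
    """Map a written sentence to a sequence of sign glosses using lexicon rules."""
    glosses = []
    for word in sentence.lower().strip(".?!").split():
        gloss = LEXICON.get(word, "-".join(word.upper()))  # fallback: fingerspell
        if gloss is not None:
            glosses.append(gloss)
    return glosses

print(text_to_gloss("My name is Anna"))  # ['MY', 'NAME', 'A-N-N-A']
```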
With the data collected in this investigation, a significant trend has been detected towards rule-based systems, growing intermittently between 2008 and 2018, and statistical systems, growing continuously between 2003 and 2013. Systems based on examples or using machine learning techniques still do not represent an important trend in synthesis from written language to sign language, with only a slight rebound between 2013 and 2017 (from 2013 on, example-based methods have reached a plateau as an option for sign languages). The picture is radically different when it comes to recognition of signs towards written language, with a clear trend towards machine learning, particularly deep learning with multiple layers [53,54], and, to a large extent, the use of the statistical approach. Moreover, recent proposals combine deep learning with statistical methods [55,56].
Some studies, such as that of [57], suggest adopting sketch recognition techniques for sign language recognition. In the field of sketch recognition, contributions have been made in grammars and language compilers with good results, which could be experimented with in the sign recognition phase [58,59]. This would be an interesting starting point, although there is still much work to do on the treatment of epenthesis, also known as “movement epenthesis”, which occurs between signs while the hands move from the posture required by the first sign to that required by the next one [60].
Recognition systems face serious difficulties in capturing the signs in real time, particularly with commonly used devices such as cell phones, and without requiring the signer to use additional electronic equipment, which is usually uncomfortable to wear in a day-to-day setting. In addition, the noise generated by background images in a real-time environment is an open research problem.
Synthesis systems, meanwhile, have serious challenges to map text in written language to sign languages, usually reproduced by a signing avatar, mainly because sign languages have a much smaller lexicon.
On the other hand, there are very few contributions in specific research on the management of anaphora [61,62,63], that is, references to entities in previous discourse, and the treatment of ellipsis or omission of deductible words by context [64,65,66], which usually lowers the quality of translations.
The disambiguation of words with different meanings has been approached from different perspectives, with partially satisfactory results, mainly through superficial approaches that have no deep knowledge of the text but instead apply statistical methods to the words that are close to the ambiguous word. Deep approaches assume full knowledge of the word, which entails a high cost. The superficial approaches, however, have proved more successful [67].
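As an illustration of how shallow such context-window methods can be, the following minimal sketch uses the simplified Lesk algorithm shipped with NLTK, which disambiguates a word using only the surrounding words and WordNet glosses. It assumes NLTK with its WordNet and tokenizer data installed, and it is an example of a superficial technique rather than any specific method from the surveyed studies.

```python
# A minimal sketch of a shallow word-sense disambiguation step, assuming NLTK
# with its 'wordnet' and 'punkt' data downloaded; illustrative only.
from nltk.tokenize import word_tokenize
from nltk.wsd import lesk

context = word_tokenize("She will sign the agreement after the meeting")
sense = lesk(context, "sign", pos="v")  # picks a WordNet sense from the context window
print(sense, "->", sense.definition() if sense else "no sense found")
```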
The recognition of named entities is easily resolved only within specific domains. Attempts have been made to broaden its range of action, but the results are much more limited. Even worse, including named entity recognition methods frequently reduces the BLEU score [68]. The BLEU (Bilingual Evaluation Understudy) metric is the standard for machine translation evaluations [69]. Another common metric is WER (Word Error Rate), a measure of the word-level changes needed to turn one phrase into another [70].
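Both metrics are straightforward to compute; the following is a minimal sketch, assuming NLTK for BLEU and an illustrative reference/hypothesis pair, with WER implemented as a standard word-level edit distance.

```python
# A minimal sketch of the two evaluation metrics mentioned above; the sentence
# pair is illustrative, and NLTK is assumed for the BLEU computation.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "the train to madrid leaves at nine".split()
hypothesis = "the train to madrid departs at nine".split()

bleu = sentence_bleu([reference], hypothesis,
                     smoothing_function=SmoothingFunction().method1)

def wer(ref, hyp):
    """Word Error Rate: (substitutions + deletions + insertions) / reference length."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(f"BLEU = {bleu:.2f}, WER = {wer(reference, hypothesis):.2f}")
```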
Non-standard speech is one of the major limitations in this field of research, since rule-based translation, by its very nature, does not cover non-standard uses. This causes errors when carrying out the translation process. The construction of parallel corpora with rhetorical language had not been addressed at the time of writing this article.

3. Methods

3.1. Research Questions

The goal of this systematic mapping study, based on the updated guidelines in [5], is to determine how sign language translation-enabling technologies have been approached since the first known seminal works. Hence, our research questions (RQs) are as follows:
  • RQ1: How often have the topics of interest been published?
  • RQ2: Which specific topics have been addressed?
  • RQ3: Where and when were the studies published?
  • RQ4: How were the proposals, implementation, or evaluation processes conducted?
  • RQ5: Which proposals have resulted in specific products?
  • RQ6: What are the research trends and gaps?
This information is then used to synthesize the knowledge base around this subject. Next, we present the devised search protocol.

3.2. Search

We have chosen to search in scholarly repositories, based on pre-designed search strings. Another possibility is to start from a known set of articles and from there perform backward snowballing to obtain the articles referenced in this base set. We opted for the first option since snowballing is mostly used to extend a review already carried out, and it still has some important open issues for further research, such as identifying a good start set [71].
We use PICO (Population, Intervention, Comparison and Outcomes) as suggested by [6] both to help identify the most relevant keywords and to formulate search strings directly deriving from the research questions.
  • Population: In sign languages context, population may refer to specific translation techniques, avatar deployment, application areas, or specific projects. In our context, the population is composed of sign languages, avatars, and translation studies.
  • Intervention: In sign languages, intervention refers to methodologies, tools, or technologies. We do not have a specific intervention to be investigated.
  • Comparison: In this study, we compare the different proposals, implementations, and evaluations by identifying the strategies used. No empirical comparison is made, but the alternative strategies are identified.
  • Outcomes: The number of identified initiatives.
The identified keywords are “sign languages”, “avatar”, and “translation”. These words were not grouped into bigrams, since the scope of the study is intentionally left as broad as possible, without even resorting to search modifiers.
This study was conducted during 2018, after an initial web scraping performed on 2 April. The search considered the full year 2017 and earlier.
We first launched our query using an API (Application Programming Interface) provided by [72], which gives access to Google Scholar from Python code, to determine the cardinality of the subject of study. We used an in-house tool, still under development, which facilitated to some extent the gathering of the papers by scraping the Web. Nevertheless, not all the studies were publicly available, hence the need to search directly in the repositories of IEEE Xplore, ACM Digital Library, Scopus, Clarivate, and arXiv in some cases. We did not run the predefined strings there but looked for specific articles. It is worth mentioning that, as of this writing, Google Scholar indexes most of the contents of these repositories, publicly available or not, up to a few weeks before the present date.
To achieve our research purposes, we used the following search string: “sign languages” avatar translation.
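As an illustration of this querying step, the following is a minimal sketch assuming the open-source scholarly package for Google Scholar access; the actual API of [72] and the in-house SYMPLE tool are not reproduced here, so the package choice and the inspected fields are assumptions.

```python
# A minimal sketch of the Google Scholar querying step, assuming the
# "scholarly" package; the API of [72] and the SYMPLE tool are not shown here.
from scholarly import scholarly

query = '"sign languages" avatar translation'   # the search string used in this study
results = scholarly.search_pubs(query)

for _ in range(5):                              # inspect a few candidate studies
    pub = next(results)
    bib = pub.get("bib", {})
    print(bib.get("pub_year"), "-", bib.get("title"))
```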

3.3. Data Extraction

To extract data from the identified primary studies, we developed the template shown in Table 1. Each data extraction field has a data item and a value. The first author performed the extraction, and then the fourth author reviewed it by tracing the information in the extraction form back to the statements in each paper and checking its correctness. Having another author check the extraction is considered good practice in systematic reviews [73].

3.4. Analysis and Classification

The information for the extracted items was illustrated visually (see Section 4). The extracted data were grouped by theme by the second, third, and fourth authors during analysis. Then, the papers belonging to each theme were counted.
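The grouping and counting step can be reproduced with standard data-analysis tooling; the sketch below assumes the extraction worksheet has been exported to a CSV file with one row per primary study, and the file and column names are hypothetical.

```python
# A minimal sketch of the theme-counting step; "primary_studies.csv" and the
# "classification" / "year" column names are assumptions for the example.
import pandas as pd

studies = pd.read_csv("primary_studies.csv")

# Number of primary studies per thematic classification
per_theme = studies["classification"].value_counts()

# Number of studies per publication year, to plot trends such as those in Figure 3
per_year = studies.groupby("year").size().sort_index()

print(per_theme)
print(per_year)
```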

3.5. Validity Evaluation

The following types of validity should be taken into account: descriptive validity, theoretical validity, generalizability, and interpretive validity. Repeatability (also called dependability or reproducibility) follows from the previous ones [74]:
  • If conclusions cannot be drawn from the data (interpretive validity), repeating the research is likely to lead to different conclusions.
  • If there is no generalizability, the study cannot be repeated in different contexts for comparison purposes.
  • If there are no means to collect correct data, repeated measurements of the same attributes are likely to yield different results.

3.5.1. Descriptive Validity

Descriptive validity is extremely important, no matter whether one is dealing with quantitative or qualitative studies. Nevertheless, the quantitative nature of this study greatly reduces this threat. The primary studies have been collected in an online worksheet, in order to perform sorting, clustering, and filtering operations as needed. The worksheet is available upon direct request to the authors for any checking that might be needed. Therefore, we consider that this threat is under control.

3.5.2. Theoretical Validity

The theoretical validity of this study finds its roots in capturing the essence of the object of study. We explicitly explain the possible biases, whenever detected.
Study identification/sampling: By intentionally covering such a large object of study, the use of backward snowballing techniques was impractical. However, the search string used is sufficiently expressive to cover a large number and variety of studies, so we considered that the risk of missing some studies is quite low.
We did not resort to forward snowballing techniques either, but we did include newly published studies as we wrote this article, through the Google Alerts system (see Figure 2). Four out of ten studies were gathered a posteriori, and their author lists included four out of ten previously identified top authors. This was done to keep the study as up to date as possible, as well as to lower the validity threat posed by having a single researcher conduct the data extraction.
We conducted the study during 2018 and wrote the report during that same year. Studies from 2018 and earlier are included in our analysis. We identified a total of 904 studies, which covered the different areas of interest (see Figure 3).
Another potential threat always remains, since the activities considered are only those reported by the authors of the primary studies. As a palliative measure, the fourth author checked the extraction.
Data extraction and classification: During this phase, researcher bias is also a threat. The authors in [6] indicate that it is useful to have one author extract the data and another one review it. To reduce the bias, the fourth author assessed all extractions and suggested new ones. The threat cannot be eliminated, though, since human judgement is involved.

3.5.3. Generalizability

Most identified technologies are addressed recurrently in the literature. There is always, however, the possibility that emerging technologies are poorly represented in a systematic mapping, especially considering that the community often names them in different ways, and it is not until after a few years that there is a consensus on the appropriate nomenclature. In fact, this is not an exclusive feature of our object of study, but it is rather very common in the world of technology, business, and their common areas.

3.5.4. Interpretive Validity

We have not detected a bias on the part of the authors in this respect, given that only one of them is co-author of one of the many included studies. Moreover, the practical experience conducting systematic review processes can help in the interpretation of disaggregated and clustered data, hence reinforcing the answers to the research questions and the conclusions of the study.

3.5.5. Repeatability

To achieve repeatability, reports must be submitted detailing the methodology followed for the systematic mapping. In our case, it is based on the guidelines used by the systematic mapping community. The authors have provided evidence about this process, made the data available to interested parties, and warned of the possible threats to validity [75].

4. Results

4.1. Frequency of Publication (RQ1)

Figure 3 shows the number of mapping studies identified within the years 1996–2018. A few articles, most likely published prior to 1996, appear undated (six in total). The first dated study was published by [76], the only one in that year. Figure 3 also shows the trends in the different areas that make up the object of study. For the most part, a certain constancy or an upward trend is noted over the years. Likewise, some gaps appear, such as that of the Rule-based category around 2010–2011 or the Example-based category since 2013. These results help answer RQ6, along with what is stated in Section 2.4, “Findings and challenges”.
While the interest in these studies was moderately increasing around 2002, a greater increase and diversification can be observed from 2005 on. Besides an increased interest, some important areas like sign language grammars and corpora conformation started emerging around those years.
This evident increase in the number of studies published indicates that this area is considered highly relevant by an ample sector of the research community. In fact, it is not unusual to find systematic studies where the initial set of papers was much bigger than the filtered final results. When that situation happens, this may be indicative of a need to refine the search strings and/or the inclusion and exclusion criteria.

4.2. Topics (RQ2)

The topics covered were derived from the ACM Computing Classification System (CCS) [77]. The basic categories were Accessibility, Natural Language Processing, Human-Computer Interaction, and Education. These categories were carefully chosen within the CCS. They were discussed among the authors until reaching a consensus about those categories that better reflected the different areas that make up the object of study, namely, the enabling technologies for sign languages.
Starting from those basic categories, some others arose naturally, for instance “Avatar” and “Translation”. Figure 3 shows the magnitude of mapping articles per category in a broad timeline.
It becomes evident that there is an emphasis on “Automatic Translation”, “Educational”, “Gesture or Sign Recognition”, “Avatar”, “Corpus”, and “SL Grammar”. We consider it a relevant finding that “Machine learning” is still not as widespread a topic as would be expected when dealing with SL.
The decline noted around 2018 is natural and is due to the fact that most of this research, particularly the automatic scraping of the Google Scholar indexing system, was carried out during that year.
As expected, besides the aforementioned decline, there are others over the years, as well as spikes. The space between the area below and the area above the analyzed category indicates its magnitude for the year or years under study. Interested readers can, in this way, make their own findings. Note that the category “No technical” refers to studies without a computational focus (social impact studies, for instance).
Figure 4a shows a Sankey diagram of the ten most prolific authors in their active period (2007 to 2017). This type of diagram, also known as alluvial, provides the reader with a very comfortable way to visualize flows between two categories of analysis or study variables.
This particular flow among top authors and publication years seems homogeneously distributed, with the exception of author Mohamed Jemni, who clearly stands out with the most publications.
The accumulation of publications for Jemni is also shown in a Sankey diagram in Figure 4b, for a total of 50 works, most of them concentrated in the years 2012 and 2013.
On the other hand, the top publication year is 2015, with 40 articles, followed by 2014 with 39 articles, as depicted in the last Sankey diagram in Figure 4c.
By 2015, the participation of Mohamed Jemni was more balanced compared to the rest of the top authors, with a fairly equitable distribution. Even so, although not the top author in 2014 and 2015, Mohamed Jemni still accounted for 14% of the publications during those two years.
The interactive nature of the alluvial diagrams makes it possible to perform analysis similar to those just presented for each category of analysis by means of a data diagramming tool.
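Such alluvial views can be generated with common charting libraries; the following is a minimal sketch using Plotly's Sankey trace with illustrative author-to-year link values (the real counts are those summarized in Figure 4).

```python
# A minimal sketch of an author-by-year Sankey (alluvial) diagram with Plotly;
# the link values here are illustrative, not the actual counts of Figure 4.
import plotly.graph_objects as go

labels = ["Mohamed Jemni", "Rubén San-Segundo", "2012", "2013"]

fig = go.Figure(go.Sankey(
    node=dict(label=labels),
    link=dict(
        source=[0, 0, 1, 1],   # indices of author nodes in `labels`
        target=[2, 3, 2, 3],   # indices of year nodes in `labels`
        value=[9, 12, 4, 3],   # publications per author-year pair (illustrative)
    ),
))
fig.write_html("authors_by_year_sankey.html")  # interactive flow diagram
```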
Figure 5 shows a very interesting relation between classifications and subclassifications in a bubble chart that allows measuring the magnitude of their intersection. Some results are predictable, while others are less obvious and striking, such as “Education” and “Notation”, with an important share of published works. On a closer look, this relationship holds for papers dealing with means of interaction for educational purposes, which might come in the expected form of text and avatars or through standard notations, such as Stokoe [123] or HamNoSys [124], which would be a prerequisite for the communication process.
Another interesting view is portrayed in Figure 6, conceived to relate the top 10 authors and the classifications. It makes it easy to determine that Jemni [125,126,127], San Segundo [128,129], and López-Ludeña [130,131,132,133] are mandatory references when dealing with automatic translation. On the other hand, Braffort [95,134] and Kacorri [135,136], who are also among the top 10 authors, are more “citable” when dealing with corpus conformation or animation techniques, respectively.
SL grammars, to a lesser degree, have also been addressed, not only by linguists but also by computer scientists.
Table 2 shows a visualization of the top 3 co-occurrences of classifications and their subclassifications (a thorough version of this table is provided in [137]). We assigned each paper a general classification and automatically extracted its possible subclassifications from a list of keywords occurring either in its title or its abstract. This display is indicative, for instance, of the great efforts that have been made in automatic translation, animation techniques, avatars, and recognition. Another gain from this exercise is to corroborate the robustness of the search strings. In our case, “sign language”, “translation”, and “text” appear with great frequency in the title or abstract.
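The keyword-based subclassification step lends itself to simple automation; the sketch below shows one possible implementation, where the keyword lists, the sample paper record, and the field names are illustrative assumptions rather than the actual lists used in this mapping.

```python
# A minimal sketch of assigning subclassifications from title/abstract keywords
# and counting co-occurrences; the keyword lists and paper record are illustrative.
from collections import Counter

SUBCLASS_KEYWORDS = {
    "Avatar": ["avatar", "animation"],
    "Automatic Translation": ["translation", "machine translation"],
    "Corpus": ["corpus", "dataset"],
}

def subclassify(title, abstract):
    """Return every subclassification whose keywords appear in the title or abstract."""
    text = f"{title} {abstract}".lower()
    return [label for label, words in SUBCLASS_KEYWORDS.items()
            if any(word in text for word in words)]

paper = {
    "classification": "Natural Language Processing",
    "title": "A corpus-driven avatar for sign language translation",
    "abstract": "We present a translation system driven by an annotated corpus.",
}

cooccurrences = Counter(
    (paper["classification"], sub)
    for sub in subclassify(paper["title"], paper["abstract"])
)
print(cooccurrences.most_common(3))  # top (classification, subclassification) pairs
```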
Another interesting display is shown in Figure 7, relating classifications and top authors in a heat map. The reader can see at a glance that the most relevant row in the map is, indeed, “Automatic Translation”, and the most important column is “Mohamed Jemni”, intersecting in the second cell with a score of 17. This is only surpassed by author San Segundo in the same category, with a score of 19.

4.3. Venues of Publication (RQ3)

We have selected a tree map to visualize the venues of publication. Table 3 provides an overview of how the articles map to these venues. International conferences clearly outscore scientific journals, and these two venues together outscore all the others. We have intentionally kept the patents in this visualization, only to demonstrate the low level of intellectual property claims in our object of study, with only six patents granted, all of them in the United States. This situation is shown in Table 4.
The situation depicted clearly indicates that these kinds of studies are regarded as both valid and valuable scientific contributions, since they are widely published in sound scholarly forums.

4.4. Approaches (RQ4 and RQ5)

In Figure 8, an area chart is shown comprising research teams and years. There is no clear evidence of sustained predominance over the years; in fact, it is quite variable. The only matter worth highlighting is that authors Matt Huenerfauth from Rochester Institute of Technology and Hernisa Kacorri from University of Maryland have been dominating the scene in the last few years [144,145,146,147,148].
Paula Escudeiro, Nuno Escudeiro, Marcelo Norberto, and Jorge Lopes, all of them affiliated with the Instituto Superior de Engenharia do Porto, Portugal, appear in second place as a joint publication team in recent years [149,150,151,152].
Figure 9 shows very interesting data, namely, the distribution of studies in specific sign languages by country. The graph shows conclusively that each specific sign language has been addressed mainly in the country to which it belongs (reading horizontally on the graph).
On the other hand, if a vertical reading is made, it can be seen that the UK, India, and Tunisia have studied not only the sign languages belonging to their country.
In the case of the UK, British Sign Language (BSL), American Sign Language (ASL), and Arabic Sign Language (ArSL) were studied, apart from an important cluster in the “others” group. In India, the research has focused on Indian Sign Language (ISL), American Sign Language (ASL), and Spanish Sign Language (LSE), with another important cluster in the “others” group. In Tunisia, the main focus of research has been American Sign Language (ASL), followed by Arabic Sign Language (ArSL) and French Sign Language (LSF), and the “others” cluster also appears.
The “others” cluster, by the way, appears practically in all the countries of this study, possibly because researchers have sought to take other languages as a reference to reinforce local studies or because they have better linguistic resources to conduct research.

5. Mapping Process Evaluation

In accordance with the good practices recommended by [5], Table 5 shows the relevant actions to be considered in a systematic mapping and those that have been applied in this study, indicated with a check symbol (✔). The symbol “•” represents the actions that were not carried out in this study.
The authors considered the “Expert evaluates result” item in Table 5 not only a good practice, as suggested by [5], but a mandatory task to try to eliminate any undetected biases or shortcomings caused by their own experience in the field. Hence, a total of five work sessions were conducted between the first and fourth authors and two expert researchers from Aspen University and the Universidad de Costa Rica (see the Acknowledgments section), both of whom work on sign language recognition tasks. These forums proved valuable for evaluating the whole systematic search process.
Calculating the ratio of the number of actions taken in comparison to the total number of possible actions (12 out of 28) for this mapping, the ratio is 43%, which is significantly above the 33% median for systematic studies reported by [5]. We want to stress the quality analysis and debugging of data and graphs that we carried out for this study. Eliminating data that does not add value is a process we recommend including in the activities of Table 5.

6. Discussion

As for the techniques used, some well-known Natural Language Processing (NLP) tools are employed, such as POS taggers and parsers, and, to a much lesser extent, anaphora resolution or ellipsis treatment. Semantic analysis is central to most translation systems. Machine learning has been used much more intensively in recognition than in sign language synthesis. The use of avatars is extremely common for display purposes, as is voice recognition to produce a translation. The techniques in use resort to a predefined corpus to carry out evaluations of the effectiveness of the proposals. Each corpus is adapted to the sign language studied, and it can even be adapted to variants by geographical region. However, it is very difficult to find a corpus endorsed by an organization responsible for regulating a sign language. Even in widely studied languages, such as ASL, efforts have mainly been proposed by research centers, notably the CUNY ASL corpus [91,153] or the corpus of the Center for Linguistic Standardization of the Spanish Sign Language (CNLSE corpus) [154].
The most researched languages in the academic community are ASL (American Sign Language), LSE (Spanish Sign Language), ArSL (Arabic Sign Language), LSF (French Sign Language), ISL (Indian Sign Language), LIBRAS (Brazilian Sign Language), SASL (South African Sign Language), GSL (Greek Sign Language), BSL (British Sign Language), LIS (Italian Sign Language), and LGP (Portuguese Sign Language). The best-known projects are ATLAS (for LIS), AUSLAN (for Australian Sign Language), and WebSign (for ASL). The precision measurements normally used are BLEU, with results between 70% and 80%, and WER, which is usually between 20% and 30%. Automatic translation systems rely on the use of well-defined grammars at the source and destination or on the use of massive data. In this sense, translation into sign languages does not differ much in concept, but a research and development project may require much more time due to the limited availability of these resources, including even the formal and normative definition of grammars.
The existing real-time solutions are limited to particular languages and restricted domains, leaving out many communities and areas of relevance. In particular, speech recognition requires very careful treatment and can easily become inefficient. On the other hand, sign languages are dynamic and require updating their grammar bases regularly, which also means regular updates of the software systems that implement them. Ideally, every proposal should consider techniques for disambiguation and for the resolution of ellipsis and anaphora, as well as labeled corpora on which to test machine learning techniques. An ideal platform will undoubtedly allow managing user profiles and adapting to regional variants. The proposals studied show a very clear orientation towards academic projects, which often lack a sustainable financing scheme and are not strongly projected onto the community.

7. Conclusions

In this systematic mapping study, we found the existing literature directly related to technologies meant to facilitate sign language machine translation. Our evaluation covered the topics investigated, the frequency of publications, the venues of publication, and the specific approaches in use.
The motivation for this study was the lack of a coherent body of knowledge that would provide a comprehensive look into these technologies. In what follows, we answer the research questions of this mapping study.
RQ1, Frequency: The most prolific authors are Mohamed Jemni, Oussama El Ghoul, and Rubén San Segundo, with publications (for most of them) ranging from 2007 to 2018. It was evident that the frequency of publication was often motivated by the ample scope of this field of research.
RQ2, Topics: The topic areas covered were based on the ACM classification and the practical experience of the authors led to subclassifications. The classification for which the highest number of studies has been conducted is “Automatic translation”. The classifications “Educational”, “Gesture or Sign Recognition”, “Avatar”, “Corpus”, and “SL Grammar” are also of paramount importance in the field.
RQ3, Venues: More than 80% of the studies have been published in conferences and in reputable journals. We can conclude that these types of studies are considered valuable scientific contributions. The number of studies has increased and kept basically steady since 2005.
RQ4, Approaches: We identified the approaches as well as their application frequency. We followed the suggested evaluation procedure of systematic mapping studies and obtained results above what could be considered the baseline.
RQ5, Specific products: A classification and subclassification co-occurrence frequency display showed that there are seven clearly identified projects (their main assigned classification is precisely a “project”).
RQ6, Trends and gaps: An entire section (Section 2.4, Findings and challenges) explains in detail the trends and gaps of the object of study. The trends set a clear course towards new data-centric systems and the hybridization of rule-based and statistics-based approaches. The inclusion of rules imposes prior knowledge about the languages of origin and destination, which must be taken into account by research teams before deciding to use this approach. The gaps have remained the same for a long time: mainly the restriction to well-defined domains, the difficulty of naturally reflecting epenthesis, and an almost total absence of solutions for anaphora and ellipsis.
Since the goal of this mapping study was to provide an overview of the field as broad as possible, we had to make a considerable effort in gathering a large amount of information. We do not claim that this is always the best course of action, since it depends heavily on the particular objective of the research team. In point of fact, the community still has to compare the different search strategies (repositories, manual search, and snowballing) to determine a reliable way to obtain a sample size.
This study has addressed, aside from academic contributions, industry proposals, some still in the prototype phase. The technologies used so far in the industry for the synthesis and recognition of sign languages show a very clear predilection for incorporating the use of wearables, as well as testing deep learning based prototypes.
In the opinion of the deaf community itself, there is an important dysfunctional aspect: the majority of industry proposals focus on wearables, which prevent a natural conversation and require taking care of an externally worn device. Sign language synthesis, on the other hand, is reported in the industry as an area approached with enthusiasm but still at an early stage of development, and no benchmarks have been detected that compare the relative advantages and disadvantages of the proposals. Very desirable characteristics in a synthesis system, such as the treatment of anaphora and ellipsis, have barely been addressed by academia and are not mentioned in industry efforts.
Finally, the tendency to delimit proposals to specific domains is very clear. Statistical-based and rule-based systems continue to have a leading role, as well as their hybridization, since the requirement for large volumes of data for training continues to represent a gap for many sign languages that do not have large collections for training and testing, the main component of data-centric machine learning approaches.

Funding

The authors thank the School of Computing and the Computer Research Center of the Technological Institute of Costa Rica for their financial support, as well as CONICIT (Consejo Nacional para Investigaciones Científicas y Tecnológicas), Costa Rica, under grant 290-2006. The support of our partners from the design department at Inclutec has been crucial to achieving high-quality graphic displays. The feedback of Luis Quesada, Ph.D., from the Universidad de Costa Rica, and of doctoral student Juan Zamora, from Aspen University, regarding adequate form and concept, as well as their evaluation of the systematic search process, allowed us to arrive at the definitive version of the paper. This work was partly supported by the Spanish Ministry of Science, Innovation, and Universities through the Project ECLIPSE-UA under Grant RTI2018-094283-B-C32 and the Project INTEGER under Grant RTI2018-094649-B-I00, and partly by the Conselleria de Educación, Investigación, Cultura y Deporte of the Community of Valencia, Spain, within the Project PROMETEO/2018/089.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ribeiro, P.; Lima, R.; Queiroz, P. Tecnologias para o Ensino da Língua Brasileira de Sinais (LIBRAS): Uma Revisão Sistemática da Literatura. Braz. J. Comput. Educ. 2018, 26, 42–60. [Google Scholar] [CrossRef]
  2. Fischer, S.L.; Marshall, M.M.; Woodcock, K. Musculoskeletal disorders in sign language interpreters: A systematic review and conceptual model of musculoskeletal disorder development. Work 2012, 42, 173–184. [Google Scholar] [PubMed]
  3. Fitzpatrick, E.; Stevens, A.; Garritty, C.; Moher, D. The effects of sign language on spoken language acquisition in children with hearing loss: A systematic review protocol. Syst. Rev. 2013, 2, 108. [Google Scholar] [CrossRef] [PubMed]
  4. Fitzpatrick, E.; Hamel, C.; Stevens, A.; Pratt, M.; Moher, D.; Doucet, S.P.; Neuss, D.; Bernstein, A.; Na, E. Sign Language and Spoken Language for Children With Hearing Loss: A Systematic Review. Pediatrics 2016, 137, e20151974. [Google Scholar] [CrossRef] [PubMed]
  5. Petersen, K.; Vakkalanka, S.; Kuzniarz, L. Guidelines for conducting systematic mapping studies in software engineering: An update. Inf. Softw. Technol. 2015, 64, 1–18. [Google Scholar] [CrossRef]
  6. Kitchenham, B.; Charters, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering; Technical Report EBSE-2007-01; EBSE: Durham, UK, 2007. [Google Scholar]
  7. Ethnologue. Languages of the World. 2019. Available online: https://www.ethnologue.com/ (accessed on 29 June 2019).
  8. Parton, B.S. Sign language recognition and translation: A multidisciplined approach from the field of artificial intelligence. J. Deaf Stud. Deaf Educ. 2005, 11, 94–101. [Google Scholar] [CrossRef] [PubMed]
  9. Tsujii, J. Computational Linguistics and Natural Language Processing. In Computational Linguistics and Intelligent Text Processing; Gelbukh, A.F., Ed.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  10. Martins, P.; Rodrigues, H.; Rocha, T.; Francisco, M.; Morgado, L. Accessible options for Deaf people in e-Learning platforms: Technology solutions for Sign Language translation. Procedia Comput. Sci. 2015, 67, 263–272. [Google Scholar] [CrossRef]
  11. San-Segundo, R.; Montero, J.; Macías-Guarasa, J.; Córdoba, R.; Ferreiros, J.; Pardo, J. Proposing a speech to gesture translation architecture for Spanish deaf people. J. Vis. Lang. Comput. 2008, 5, 523–538. [Google Scholar] [CrossRef]
  12. Veale, T.; Conway, A.; Collins, B. The challenges of cross-modal translation: English to sign language translation in the Zardoz system. Mach. Transl. 1998, 13, 81–106. [Google Scholar] [CrossRef]
  13. Zhao, L.; Kipper, K.; Schuler, W.; Vogler, C.; Badler, N.; Palmer, M. Machine translation system from English to American Sign Language. Lect. Notes Comput. Sci. 2000, 1934, 54–67. [Google Scholar]
  14. Naert, L.; Larboulette, C.; Gibet, S. Coarticulation Analysis for Sign Language Synthesis. In Proceedings of the Part II of the 11th International Conference, UAHCI 2017, Vancouver, BC, Canada, 9–14 July 2017. [Google Scholar]
  15. Huenerfauth, M. Generating American sign language animation: Overcoming misconceptions and technical challenges. Univers. Access Inf. Soc. 2008, 6, 419–434. [Google Scholar] [CrossRef]
  16. Anuja, K.; Suryapriya, S.; Idicula, S. Design and development of a frame based MT system for English-to-ISL. In Proceedings of the World Congress on Nature & Biologically Inspired Computing (NaBIC’2009), Coimbatore, India, 9–11 December 2009; pp. 1382–1387. [Google Scholar]
  17. López-Colino, F.; Colás, J. Spanish sign language synthesis system. J. Visual Lang. Comput. 2012, 23, 121–136. [Google Scholar] [CrossRef]
  18. Cooper, H.; Holt, B.; Bowden, R. Sign language recognition. In Visual Analysis of Humans; Springer: London, UK, 2011; pp. 539–562. [Google Scholar]
  19. Arksey, H.; O’Malley, L. Scoping studies: Towards a methodological framework. Int. J. Soc. Res. Meth. 2005, 8, 19–32. [Google Scholar] [CrossRef]
  20. Handtalk. Hand Talk Translator. 2019. Available online: https://play.google.com/store/apps/details?id=br.com.handtalk&hl=en_US (accessed on 29 August 2019).
  21. Helloasl. ASL American Sign Language. 2019. Available online: https://play.google.com/store/apps/details?id=tenmb.asl.americansignlanguagepro&hl=en_US (accessed on 29 August 2019).
  22. López, M. Visualfy, la Idea Española que Ofrece un Asistente Virtual a Las Personas Sordas. 2019. Available online: https://www.xataka.com/otros-dispositivos/visualfy-idea-espanola-que-ofrece-asistente-virtual-a-personas-sordas (accessed on 29 August 2019).
  23. Raya. textoSIGN, una Útil Herramienta de Conversión de Texto a Lengua de Signos Española para Android. 2012. Available online: https://www.xatakamovil.com/aplicaciones/textosign-una-util-herramienta-de-conversion-de-texto-a-lengua-de-signos-espanola-para-android (accessed on 29 August 2019).
  24. López, M. Singslator Traduce del Español a la Lengua de Signos Directamente Desde la Web. 2014. Available online: https://www.genbeta.com/web/singslator-traduce-del-espanol-a-la-lengua-de-signos-directamente-desde-la-web (accessed on 29 August 2019).
  25. Penalva, J. MyVoice Convierte la Lengua de Signos en Voz. 2012. Available online: https://www.xataka.com/otros/myvoice-convierte-el-lenguaje-de-signos-en-voz (accessed on 29 August 2019).
  26. Álvarez, R. Si no Conoces el Lenguaje de Signos, este Guante es Capaz de Traducirlo en Voz y Texto. 2015. Available online: https://www.xataka.com/investigacion/si-no-conocer-el-el-lenguaje-de-signos-este-guante-es-capaz-de-traducirlo-en-voz-y-texto (accessed on 29 August 2019).
  27. Garrido, R. Con este Guante Creado en el IPN Pretenden Traducir la Lengua de Señas a Texto. 2015. Available online: https://www.xataka.com.mx/investigacion/con-este-guante-creado-en-el-ipn-pretenden-traducir-la-lengua-de-senas-a-texto (accessed on 29 August 2019).
  28. Sacristán, L. Un Traductor de Lengua de Signos y un Wearable que Detecta la Epilepsia entre los Nuevos Proyectos de la Fundación Vodafone. 2019. Available online: https://www.xatakamovil.com/vodafone/traductor-lengua-signos-wearable-que-detecta-epilepsia-nuevos-proyectos-fundacion-vodafone (accessed on 29 August 2019).
  29. Sacristán, L. Así es Showleap: El Traductor de Lengua de Signos a Texto y Voz en Tiempo Real Está Cada Vez Más Cerca. 2019. Available online: https://www.xataka.com/aplicaciones/asi-showleap-traductor-lengua-signos-a-texto-voz-tiempo-real-esta-cada-vez-cerca (accessed on 29 August 2019).
  30. The Economic Times. Meet the New Google Translator: An AI App That Converts Sign Language into Text, Speech. 2018. Available online: https://economictimes.indiatimes.com/magazines/panache/meet-the-new-google-translator-an-ai-app-that-converts-sign-language-into-text-speech/articleshow/66379450.cms (accessed on 29 August 2019).
  31. Kamen, G. Electromyographic Kinesiology. In Research Methods in Biomechanics; Robertson, G.E., Caldwell, G.E., Hamill, J., Kamen, G., Whittlesey, S., Eds.; Human Kinetics Publishers: Champaign, IL, USA, 2004. [Google Scholar]
  32. Bailey, J. Google App Translates Sign Language. 2014. Available online: https://www.ajc.com/technology/google-app-translates-sign-language/wgmYzp46ALU5EyEmejOiMM/ (accessed on 29 August 2019).
  33. Merino, M. Un Algoritmo que Lee el Movimiento de las Manos Abre la Puerta a que los Smartphones Puedan Traducir el Lenguaje de Signos. 2019. Available online: https://www.xataka.com/inteligencia-artificial/algoritmo-que-lee-movimiento-manos-abre-puerta-a-que-smartphones-puedan-traducir-lenguaje-signos (accessed on 29 August 2019).
  34. Merino, M. Google Apuesta por el Reconocimiento de Voz Para Ayudar a que las Personas Sordas Tengan más Fácil Interactuar en Eventos Sociales. 2019. Available online: https://www.xataka.com/inteligencia-artificial/google-apuesta-reconocimiento-voz-para-ayudar-a-que-personas-sordas-tengan-facil-interactuar-eventos-sociales (accessed on 29 August 2019).
  35. Sacristán, L. Así es StorySign, la Aplicación que Utiliza la IA de Huawei para Enseñar a Leer a Niños Sordos. 2018. Available online: https://www.xatakandroid.com/aplicaciones-android/asi-storysign-aplicacion-que-utiliza-ia-huawei-para-ensenar-a-leer-a-ninos-sordos (accessed on 29 August 2019).
  36. Morrissey, S.; Way, A. Joining hands: Developing a sign language machine translation system with and for the deaf community. In Proceedings of the CVHI-2007—Conference and Workshop on Assistive Technologies for People with Vision and Hearing Impairments: Assistive Technology for All Ages, Granada, Spain, 28–31 August 2007. [Google Scholar]
  37. Morrissey, S. Assistive translation technology for deaf people: Translating into and animating Irish sign language. In Proceedings of the ICCHP 2008—12th International Conference on Computers Helping People with Special Needs, Linz, Austria, 9–11 July 2008. [Google Scholar]
  38. Viera, J.; Hernández, J.; Rodríguez, D.; Castillo, J. Interactive Application in Spanish Sign Language for a Public Transport Environment. In Proceedings of the 11th International Conference on Cognition and Exploratory Learning in Digital Age (CELDA), Porto, Portugal, 25–27 October 2014. [Google Scholar]
  39. Ebling, S.; Huenerfauth, M. Bridging the gap between sign language machine translation and sign language animation using sequence classification. In Proceedings of the SLPAT 2015: 6th Workshop on Speech and Language Processing for Assistive Technologies, Dresden, Germany, 11 September 2015; pp. 2–9. [Google Scholar]
  40. Geraci, C.; Mazzei, A. Last train to “Rebaudengo Fossano”: The case of some names in avatar translation. In Proceedings of the 6th Workshop on the Representation and Processing of the Sign Languages: Beyond the Manual Channel. Language Resources and Evaluation Conference (LREC 2014), Reykjavik, Iceland, 31 May 2014; pp. 63–66. [Google Scholar]
  41. Geraci, C.; Mazzei, A.; Angster, M. Some issues on Italian to LIS automatic translation: The case of train announcements. In Proceedings of the First Italian Conference on Computational Linguistics CLiC-it 2014 & the Fourth International Workshop (EVALITA 2014), Pisa, Italy, 9–11 December 2014; pp. 191–196. [Google Scholar]
  42. Paire-Ficout, L.; Alauzet, A.; Chevret, M.; Boucheix, J.; Lefebvre-Albaret, F.; Saby, L.; Jobez, P. Innovative visual design to assure information for all in transportation. In Proceedings of the 28th International Congress of Applied Psychology (ICAP 2014), Paris, France, 8–13 July 2014. [Google Scholar]
  43. Paire-Ficout, L.; Alauzet, A.; Boucheix, J.; Saby, L.; Lefebvre-Albaret, F.; Groff, J.; Argon, J.; Jobez, P. How not to give up on train travel when you are deaf? In Proceedings of the TRANSED 2015—14th International Conference on Mobility and Transport for Elderly and Disabled Persons, Lisbon, Portugal, 28–31 July 2015. [Google Scholar]
  44. Motlhabi, M.; Glaser, M.; Tucker, W. SignSupport: A limited communication domain mobile aid for a Deaf patient at the pharmacy. In Proceedings of the Southern African Telecommunication Networks and Applications Conference, Stellenbosch, South Africa, 1–4 September 2013; pp. 173–178. [Google Scholar]
  45. Yang, O.; Morimoto, K.; Kuwahara, N. Evaluation of Chinese Sign Language animation for mammography inspection of hearing-impaired people. In Proceedings of the 2014 IIAI 3rd International Conference on Advanced Applied Informatics, Kita-Kyushu, Japan, 31 August–4 September 2014; pp. 831–836. [Google Scholar]
  46. Süzgün, M.; Özdemir, H.; Camgöz, N.; Kındıroğlu, A.; Başaran, D.; Togay, C.; Akarun, L. Hospisign: An interactive sign language platform for hearing impaired. J. Nav. Sci. Eng. 2015, 11, 75–92. [Google Scholar]
  47. Camgöz, N.; Kındıroğlu, A.; Akarun, L. Sign language recognition for assisting the deaf in hospitals. In Proceedings of the International Workshop on Human Behavior Understanding, Amsterdam, The Netherlands, 16 October 2016; Springer: Cham, Switzerland, 2016; pp. 89–101. [Google Scholar]
  48. Ahmed, F.; Bouillon, P.; Destefano, C.; Gerlach, J.; Halimi, I.; Hooper, A.; Rayner, E.; Spechbach, H.; Strasly, I.; Tsourakis, N. A Robust Medical Speech-to-Speech/Speech-to-Sign Phraselator. In Proceedings of the INTERSPEECH 2017, Stockholm, Sweden, 20–24 August 2017. [Google Scholar]
  49. Koehn, P. Statistical Machine Translation; Cambridge University Press: Cambridge, UK; New York, NY, USA, 2010. [Google Scholar]
  50. Hutchins, J. Multiple uses of machine translation and computerised translation tools. In Proceedings of the International Symposium on Data and Sense Mining, Machine Translation and Controlled Languages (ISMTCL 2009), Besançon, France, 1–3 July 2009; pp. 13–20. [Google Scholar]
  51. Williams, P.; Sennrich, R.; Post, M.; Koehn, P. Syntax-Based Statistical Machine Translation; Morgan & Claypool Publishers: San Rafael, CA, USA, 2016. [Google Scholar]
  52. Abiola, O.; Adetunmbi, A.; Oguntimilehin, A. Review of the Various Approaches to Text to Text Machine Translations. Int. J. Comput. Appl. 2015, 120, 7–12. [Google Scholar]
  53. Song, N.; Yang, H.; Zhi, P. Towards Realizing Sign Language to Emotional Speech Conversion by Deep Learning. In Proceedings of the International Conference of Pioneering Computer Scientists, Engineers and Educators, Zhengzhou, China, 21–23 September 2018; Springer: Singapore, 2018; pp. 416–430. [Google Scholar]
  54. Kajonpong, P. Recognizing American Sign Language Using Deep Learning. Ph.D. Thesis, The University of Texas at San Antonio, San Antonio, TX, USA, 2019. [Google Scholar]
  55. An, X.; Yang, H.; Gan, Z. Towards realizing sign language-to-speech conversion by combining deep learning and statistical parametric speech synthesis. In Proceedings of the International Conference of Pioneering Computer Scientists, Engineers and Educators, Harbin, China, 20–22 August 2016; Springer: Singapore, 2016; pp. 678–690. [Google Scholar]
  56. Song, N.; Yang, H.; Zhi, P. A deep learning based framework for converting sign language to emotional speech. In Proceedings of the 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Honolulu, HI, USA, 12–15 November 2018; pp. 2047–2053. [Google Scholar]
  57. Oramas, J.; Moreno, A.; Chiluiza, K. Technology for Hearing Impaired People: A Novel Use of Xstroke Pointer Gesture Recognition Algorithm for Teaching/Learning Ecuadorian Sign Language. Available online: https://pdfs.semanticscholar.org/a55a/a8a5e3da73dd92ce4b81c55d8ae9618d2fe8.pdf (accessed on 12 May 2019).
  58. Costagliola, G.; Deufemia, V.; Risi, M. Sketch grammars: A formalism for describing and recognizing diagrammatic sketch languages. In Proceedings of the International Conference on Document Analysis and Recognition (ICDAR 2005), Seoul, Korea, 31 August–1 September 2005; pp. 1226–1230. [Google Scholar]
  59. Costagliola, G.; Vincenzo, V.; Risi, M. A multi-layer parsing strategy for on-line recognition of hand-drawn diagrams. In Proceedings of the Visual Languages and Human-Centric Computing (VL/HCC’06), Brighton, UK, 4–8 September 2006; pp. 103–110. [Google Scholar]
  60. Valli, C. Linguistics of American Sign Language: An Introduction; Gallaudet University Press: Washington, DC, USA, 2011. [Google Scholar]
  61. Schlenker, P. Sign language and the foundations of anaphora. Annu. Rev. Linguist. 2017, 3, 149–177. [Google Scholar] [CrossRef]
  62. Wienholz, A.; Nuhbalaoglu, D.; Mani, N.; Herrmann, A.; Onea, E.; Steinbach, M. Pointing to the right side? An ERP study on anaphora resolution in German Sign Language. PLoS ONE 2018, 13, e0204223. [Google Scholar] [CrossRef] [PubMed]
  63. Steinbach, M.; Onea, E. A DRT analysis of discourse referents and anaphora resolution in sign language. J. Semant. 2015, 33, 409–448. [Google Scholar] [CrossRef]
  64. Cecchetto, C.; Checchetto, A.; Geraci, C.; Santoro, M.; Zucchi, S. The syntax of predicate ellipsis in Italian Sign Language (LIS). Lingua 2015, 166, 214–235. [Google Scholar] [CrossRef]
  65. Xu, B.S.; Fu, M. Ellipsis of sign language under the deaf culture and its linguistics analysis. Disabil. Res. 2015, 15, 31–34. [Google Scholar]
  66. Zorzi, G. Gapping vs. VP-ellipsis in Catalan sign language. Feast. Form. Exp. Adv. Sign Lang. Theory 2018, 1, 70–81. [Google Scholar]
  67. Costa-jussà, M.; Rapp, R.; Lambert, P.; Eberle, K.; Banchs, R.; Babych, B. Hybrid Approaches to Machine Translation; Springer: Basel, Switzerland, 2016. [Google Scholar]
  68. Agrawal, N.; Singla, A. Using Named Entity Recognition to Improve Machine Translation; Technical Report; Natural Language Processing; Stanford University: Stanford, CA, USA, 2012. [Google Scholar]
  69. Papineni, K.; Roukos, S.; Ward, T.; Zhu, W.J. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, Philadelphia, PA, USA, 7–12 July 2002; pp. 311–318. [Google Scholar]
  70. MacWilliams, F.J.; Sloane, N.J.A. The Theory of Error-Correcting Codes; Elsevier: Amsterdam, The Netherlands, 1977; Volume 16, p. 18. [Google Scholar]
  71. Wohlin, C. Guidelines for snowballing in systematic literature studies and a replication in software engineering. In Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering (EASE’14), London, UK, 13–14 May 2014; ACM: London, UK, 2014. [Google Scholar]
  72. PyPI. Scholarly API. Available online: https://pypi.org/project/scholarly/ (accessed on 30 August 2019).
  73. Petticrew, M.; Roberts, H. Systematic Reviews in the Social Sciences: A Practical Guide; John Wiley & Sons: Hoboken, NJ, USA, 2008. [Google Scholar]
  74. Petersen, K.; Gencel, C. Worldviews, research methods, and their relationship to validity in empirical software engineering research. In Proceedings of the 2013 Joint Conference of the 23rd International Workshop on Software Measurement and the 2013 Eighth International Conference on Software Process and Product Measurement (IWSM-MENSURA), Ankara, Turkey, 23–26 October 2013; pp. 81–89. [Google Scholar]
  75. Naranjo-Zeledón, L.; Peral, J.; Ferrández, A.; Chacón-Rivas, M. Systematic mapping data for translation-enabling technologies for sign languages (Version 1) [Data set]. Zenodo 2019. [Google Scholar] [CrossRef]
  76. Azarbayejani, A.; Wren, C.; Pentland, A. Real-time 3-D tracking of the human body. In Proceedings of the IMAGE’COM, Bordeaux, France, 15 May 1996; pp. 1–6. [Google Scholar]
  77. ACM. The 2012 ACM Computing Classification System. 2012. Available online: https://www.acm.org/publications/class-2012 (accessed on 15 May 2019).
  78. Jemni, M.; Elghoul, O. A system to make signs using collaborative approach. In Proceedings of the International Conference on Computers for Handicapped Persons, Linz, Austria, 9–11 July 2008; Springer: Berlin/Heidelberg, Germany, 2008; pp. 670–677. [Google Scholar]
  79. Jemni, M.; Elghoul, O.; Makhlouf, S. A web-based tool to create online courses for deaf pupils. In Proceedings of the International Conference on Interactive Mobile and Computer Aided Learning, Amman, Jordan, 17–21 April 2007; pp. 18–20. [Google Scholar]
  80. Jemni, M.; Elghoul, O. Towards Web-Based automatic interpretation of written text to Sign Language. Proc. ICTA 2007, 7, 12–14. [Google Scholar]
  81. El Ghoul, O.; Jemni, M. Multimedia Courses Generator for Deaf Children. Int. Arab J. Inf. Technol. (IAJIT) 2009, 6, 458–464. [Google Scholar]
  82. El Ghoul, O.; Jemni, M. A Multi-layer Model for Sign Language’s Non-Manual Gestures Generation. In Proceedings of the International Conference on Computers for Handicapped Persons, Paris, France, 9–11 July 2014; Springer: Cham, Switzerland, 2014; pp. 466–473. [Google Scholar]
  83. El Ghoul, O.; Jemni, M. WebSign: A system to make and interpret signs using 3D Avatars. In Proceedings of the Second International Workshop on Sign Language Translation and Avatar Technology (SLTAT), Dundee, UK, 23 October 2011. [Google Scholar]
  84. San-Segundo, R.; Barra, R.; Córdoba, R.; D’Haro, L.F.; Fernández, F.; Ferreiros, J.; Pardo, J.M. Speech to sign language translation system for Spanish. Speech Commun. 2008, 50, 1009–1020. [Google Scholar] [CrossRef] [Green Version]
  85. San-Segundo, R.; Montero, J.M.; Córdoba, R.; Sama, V.; Fernández, F.; D’Haro, L.F.; García, A. Design, development and field evaluation of a Spanish into sign language translation system. Pattern Anal. Appl. 2012, 15, 203–224. [Google Scholar] [CrossRef]
  86. San-Segundo, R.; Pardo, J.M.; Ferreiros, J.; Sama, V.; Barra-Chicote, R.; Lucas, J.M.; García, A. Spoken Spanish generation from sign language. Interact. Comput. 2009, 22, 123–139. [Google Scholar] [CrossRef]
  87. López-Ludeña, V.; González-Morcillo, C.; López, J.C.; Barra-Chicote, R.; Córdoba, R.; San-Segundo, R. Translating bus information into sign language for deaf people. Eng. Appl. Artif. Intell. 2014, 32, 258–269. [Google Scholar] [CrossRef] [Green Version]
  88. López-Ludeña, V.; González-Morcillo, C.; López, J.C.; Ferreiro, E.; Ferreiros, J.; San-Segundo, R. Methodology for developing an advanced communications system for the Deaf in a new domain. Knowl.-Based Syst. 2014, 56, 240–252. [Google Scholar]
  89. López-Ludeña, V.; San-Segundo, R.; Montero, J.M.; Córdoba, R.; Ferreiros, J.; Pardo, J.M. Automatic categorization for improving Spanish into Spanish Sign Language machine translation. Comput. Speech Lang. 2012, 26, 149–167. [Google Scholar] [CrossRef] [Green Version]
  90. Lu, P.; Huenerfauth, M. Collecting a motion-capture corpus of American Sign Language for data-driven generation research. In Proceedings of the NAACL HLT 2010 Workshop on Speech and Language Processing for Assistive Technologies, Los Angeles, CA, USA, 5 June 2010; pp. 89–97. [Google Scholar]
  91. Lu, P.; Huenerfauth, M. Collecting and evaluating the CUNY ASL corpus for research on American Sign Language animation. Comput. Speech Lang. 2014, 28, 812–831. [Google Scholar] [CrossRef]
  92. Lu, P.; Huenerfauth, M. Synthesizing American Sign Language spatially inflected verbs from motion-capture data. In Proceedings of the Second International Workshop on Sign Language Translation and Avatar Technology (SLTAT), in Conjunction with ASSETS, Dundee, UK, 23 October 2011. [Google Scholar]
  93. Braffort, A.; Dalle, P. Sign language applications: Preliminary modeling. Univers. Access Inf. Soc. 2008, 6, 393–404. [Google Scholar] [CrossRef]
  94. Braffort, A. Research on computer science and sign language: Ethical aspects. In Proceedings of the International Gesture Workshop, London, UK, 18–20 April 2001; Springer: Berlin/Heidelberg, Germany, 2001; pp. 1–8. [Google Scholar]
  95. Braffort, A.; Bolot, L.; Chételat-Pelé, E.; Choisier, A.; Delorme, M.; Filhol, M.; Devos, N. Sign Language Corpora for Analysis, Processing and Evaluation. In Proceedings of the LREC 2010, Valletta, Malta, 17–23 May 2010. [Google Scholar]
  96. Fotinea, S.E.; Efthimiou, E.; Caridakis, G.; Karpouzis, K. A knowledge-based sign synthesis architecture. Univers. Access Inf. Soc. 2008, 6, 405–418. [Google Scholar] [CrossRef]
  97. Fotinea, S.E.; Efthimiou, E.; Kouremenos, D. Generating linguistic content for Greek to GSL conversion. In Proceedings of the 7th Hellenic European Conference on Computer Mathematics and its Applications, Athens, Greece, 22–24 September 2005. [Google Scholar]
  98. Efthimiou, E.; Fotinea, S.E.; Hanke, T.; Glauert, J.; Bowden, R.; Braffort, A.; Goudenove, F. Dicta-sign–sign language recognition, generation and modelling: A research effort with applications in deaf communication. In Proceedings of the 4th Workshop on the Representation and Processing of Sign Languages: Corpora and Sign Language Technologies, Valletta, Malta, 17–23 May 2010; pp. 80–83. [Google Scholar]
  99. Efthimiou, E.; Fotinea, S.E. An environment for deaf accessibility to educational content. In Proceedings of the ICTA 2007, Hammamet, Tunisia, 12–14 April 2007. [Google Scholar]
  100. Efthimiou, E.; Fotinea, S.E.; Hanke, T.; Glauert, J.; Bowden, R.; Braffort, A.; Lefebvre-Albaret, F. The dicta-sign wiki: Enabling web communication for the deaf. In Proceedings of the International Conference on Computers for Handicapped Persons, Linz, Austria, 11–13 July 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 205–212. [Google Scholar]
  101. Efthimiou, E.; Fotinea, S.E.; Dimou, A.L.; Goulas, T.; Kouremenos, D. From grammar-based MT to post-processed SL representations. Univers. Access Inf. Soc. 2016, 15, 499–511. [Google Scholar] [CrossRef]
  102. Glauert, J.; Elliott, R. Extending the SiGML Notation—A Progress Report. In Proceedings of the Second International Workshop on Sign Language Translation and Avatar Technology (SLTAT), Dundee, UK, 23 October 2011; Volume 23. [Google Scholar]
  103. Adamo-Villani, N.; Doublestein, J.; Martin, Z. Sign language for K-8 mathematics by 3D interactive animation. J. Educ. Technol. Syst. 2005, 33, 241–257. [Google Scholar] [CrossRef]
  104. Adamo-Villani, N.; Wilbur, R. Two novel technologies for accessible math and science education. IEEE Multimed. 2008, 15, 38–46. [Google Scholar] [CrossRef]
  105. Adamo-Villani, N. 3d rendering of American sign language finger-spelling: A comparative study of two animation techniques. Int. J. Hum. Soc. Sci. 2008, 3, 24. [Google Scholar]
  106. Adamo-Villani, N.; Wilbur, R.; Eccarius, P.; Abe-Harris, L. Effects of character geometric model on perception of sign language animation. In Proceedings of the 2009 Second International Conference in Visualisation, Barcelona, Spain, 15–17 July 2009; pp. 72–75. [Google Scholar]
  107. Adamo-Villani, N.; Hayward, K.; Lestina, J.; Wilbur, R.B. Effective animation of sign language with prosodic elements for annotation of digital educational content. In Proceedings of the SIGGRAPH Talks 2010, Los Angeles, CA, USA, 26–30 July 2010. [Google Scholar]
  108. Huenerfauth, M.; Hanson, V. Sign language in the interface: Access for deaf signers. In Universal Access Handbook; Stephanidis, C., Ed.; CRC Press: Boca Raton, FL, USA, 2009; Volume 38. [Google Scholar]
  109. Huenerfauth, M. A linguistically motivated model for speed and pausing in animations of American sign language. ACM Trans. Access. Comput. (TACCESS) 2009, 2, 9. [Google Scholar] [CrossRef]
  110. Huenerfauth, M.; Lu, P.; Rosenberg, A. Evaluating importance of facial expression in American sign language and pidgin signed English animations. In Proceedings of the 13th International ACM SIGACCESS Conference on Computers and Accessibility, Dundee, UK, 24–26 October 2011; pp. 99–106. [Google Scholar]
  111. Huenerfauth, M.; Lu, P. Effect of spatial reference and verb inflection on the usability of sign language animations. Univers. Access Inf. Soc. 2012, 11, 169–184. [Google Scholar] [CrossRef]
  112. Filhol, M.; Hadjadj, M.N.; Choisier, A. Non-manual features: The right to indifference. In Proceedings of the 6th Workshop on the Representation and Processing of Sign Language (LREC), Reykjavik, Iceland, 31 May 2014. [Google Scholar]
  113. Filhol, M.; Hadjadj, M.N.; Testu, B. A rule triggering system for automatic text-to-sign translation. Univers. Access Inf. Soc. 2016, 15, 487–498. [Google Scholar] [CrossRef]
  114. Filhol, M.; Tannier, X. Construction of a French-LSF corpus. In Proceedings of the Building and Using Comparable Corpora Workshop, Language Resource and Evaluation Conference, Reykjavik, Iceland, 27 May 2014; pp. 2–5. [Google Scholar]
  115. Kacorri, H.; Lu, P.; Huenerfauth, M. Evaluating facial expressions in American Sign Language animations for accessible online information. In Proceedings of the International Conference on Universal Access in Human-Computer Interaction, Las Vegas, NV, USA, 21–26 July 2013; pp. 510–519. [Google Scholar]
  116. Kacorri, H.; Huenerfauth, M.; Ebling, S.; Patel, K.; Willard, M. Demographic and experiential factors influencing acceptance of sign language animation by deaf users. In Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility, Lisbon, Portugal, 26–28 October 2015; pp. 147–154. [Google Scholar]
  117. Kacorri, H.; Lu, P.; Huenerfauth, M. Effect of displaying human videos during an evaluation study of American Sign Language animation. ACM Trans. Access. Comput. (TACCESS) 2013, 5, 4. [Google Scholar] [CrossRef]
  118. Kacorri, H.; Huenerfauth, M. Implementation and evaluation of animation controls sufficient for conveying ASL facial expressions. In Proceedings of the 16th International ACM SIGACCESS Conference on Computers & Accessibility, Rochester, NY, USA, 20–22 October 2014; pp. 261–262. [Google Scholar]
  119. Escudeiro, N. Virtual Sign Translator in Serious Games. In Proceedings of the InforAbERTA, Jornadas de Informática, Universidade Aberta, Porto, Portugal, 15 March 2014; pp. 1–22. [Google Scholar]
  120. Escudeiro, P.; Escudeiro, N.; Reis, R.; Lopes, J.; Norberto, M.; Baltasar, A.B.; Bidarra, J. Virtual Sign—A Real Time Bidirectional Translator of Portuguese Sign Language. Procedia Comput. Sci. 2015, 67, 252–262. [Google Scholar] [CrossRef]
  121. Escudeiro, P.; Escudeiro, N.; Reis, R.; Barbosa, M.; Bidarra, J.; Baltazar, A.B.; Gouveia, B. Virtual sign translator. In Proceedings of the International Conference on Computer, Networks and Communication Engineering (ICCNCE 2013), Beijing, China, 23–24 May 2013. [Google Scholar]
  122. Escudeiro, P.; Escudeiro, N.; Reis, R.; Barbosa, M.; Bidarra, J.; Baltasar, A.B.; Norberto, M. Virtual sign game learning sign language. In Proceedings of the 5th International Conference on Education and Educational Technologies, Kuala Lumpur, Malaysia, 23–25 April 2014. [Google Scholar]
  123. Stokoe, W. Sign Language structure: An outline of the visual communication systems of the American deaf. Stud. Linguist. Occas. Pap. 1960, 8. [Google Scholar] [CrossRef] [PubMed]
  124. Prillwitz, S.; Leven, R.; Zienert, H.; Hanke, T.; Henning, J. HamNoSys Version 2.0; Hamburg Notation System for Sign Languages. An introductory Guide; International Studies on Sign Language and Communication of the Deaf 5; Signum Press: Hamburg, Germany, 1989. [Google Scholar]
  125. Jemni, M.; Chabeb, Y.; Elghoul, O. Towards improving accessibility of Deaf people to ICT. In Proceedings of the 3rd International Conference on Information Technology, Amman, Jordan, 9–11 May 2007. [Google Scholar]
  126. Jemni, M.; Chabeb, Y.; Elghoul, O. An avatar based approach for automatic interpretation of text to Sign language. In Challenges for Assistive Technology, AAATE 07; IOS Press: Amsterdam, The Netherlands, 2007. [Google Scholar]
  127. Jemni, M.; El Ghoul, O.; Yahia, N.B.; Boulares, M. Sign Language MMS to Make Cell Phones Accessible to the Deaf and Hard-of-hearing Community. In Proceedings of the Conference and Workshop on Assistive Technologies for People with Vision and Hearing Impairments: Assistive Technology for All Ages (CVHI-2007), Granada, Spain, 28–31 August 2007. [Google Scholar]
  128. San-Segundo, R.; Barra, R.; D’Haro, L.F.; Montero, J.M.; Córdoba, R.; Ferreiros, J. A spanish speech to sign language translation system for assisting deaf-mute people. In Proceedings of the Ninth International Conference on Spoken Language Processing, Pittsburgh, PA, USA, 17–21 September 2006. [Google Scholar]
  129. San Segundo, R.; Gallo, B.; Lucas, J.M.; Barra-Chicote, R.; D’Haro, L.F.; Fernandez, F. Speech into sign language statistical translation system for deaf people. IEEE Lat. Am. Trans. 2009, 7, 400–404. [Google Scholar]
  130. López-Ludeña, V.; San-Segundo, R. Statistical Methods for Improving Spanish into Spanish Sign Language Translation. In Proceedings of the 15th Mexican International Conference on Artificial Intelligence, Cancún, Mexico, 23–28 October 2016. [Google Scholar]
  131. López-Ludeña, V.; San-Segundo, R.; Morcillo, C.G.; López, J.C.; Muñoz, J.M.P. Increasing adaptability of a speech into sign language translation system. Expert Syst. Appl. 2013, 40, 1312–1322. [Google Scholar] [CrossRef]
  132. López-Ludeña, V.; San Segundo, R.; González-Morcillo, C.; López, J.C.; Ferreiro, E. Adapting a speech into sign language translation system to a new domain. In Proceedings of the INTERSPEECH 2013, Lyon, France, 25–29 August 2013; pp. 1164–1168. [Google Scholar]
  133. López-Ludeña, V.; San Segundo, R.; Ferreiros, J.; Pardo, J.M.; Ferreiro, E. Developing an information system for deaf. In Proceedings of the INTERSPEECH 2013, Lyon, France, 25–29 August 2013; pp. 3617–3621. [Google Scholar]
  134. Braffort, A.; Boutora, L. Défi d’annotation DEGELS2012: La segmentation (DEGELS2012 annotation challenge: Segmentation). In Proceedings of the JEP-TALN-RECITAL 2012, Workshop DEGELS 2012: Défi GEste Langue des Signes (DEGELS 2012: Gestures and Sign Language Challenge), Grenoble, France, 4–8 June 2012; pp. 1–8. (In French). [Google Scholar]
  135. Kacorri, H. TR-2015001: A Survey and Critique of Facial Expression Synthesis in Sign Language Animation. CUNY Academic Works. 2015. Available online: https://academicworks.cuny.edu/gc_cs_tr/403 (accessed on 20 August 2019).
  136. Kacorri, H.; Huenerfauth, M. Evaluating a dynamic time warping based scoring algorithm for facial expressions in ASL animations. In Proceedings of the SLPAT 2015: 6th Workshop on Speech and Language Processing for Assistive Technologies, Dresden, Germany, 11 September 2015; pp. 29–35. [Google Scholar]
  137. Naranjo-Zeledón, L.; Peral, J.; Ferrández, A.; Chacón-Rivas, M. Classification-Subclassification Co-Occurrency Frequency Table for Sign Languages Systematic Mapping (Version 1) [Data set]. Zenodo 2019. [Google Scholar] [CrossRef]
  138. Jung, W.S.; Kim, H.S.; Jeon, J.K.; Kim, S.J.; Lee, H.W. Apparatus for Bi-Directional Sign Language/Speech Translation in Real Time and Method. U.S. Patent No. 15/188,099, 2 October 2018. [Google Scholar]
  139. Kanevsky, D.; Pickover, C.A.; Ramabhadran, B.; Rish, I. Language Translation in an Environment Associated with a Virtual Application. U.S. Patent No. 9,542,389, 10 January 2017. [Google Scholar]
  140. Dharmarajan, D. Sign Language Communication with Communication Devices. U.S. Patent No. 9,965,467, 28 September 2017. [Google Scholar]
  141. Opalka, A.; Kellard, W. Systems and Methods for Recognition and Translation of Gestures. U.S. Patent No. 14/686,708, 11 February 2016. [Google Scholar]
  142. Kurzweil, R.C. Use of Avatar with Event Processing. U.S. Patent No. 8,965,771, 24 February 2015. [Google Scholar]
  143. Bokor, B.R.; Smith, A.B.; House, D.E.; Nicol, I.W.B.; Haggar, P.F. Translation of Gesture Responses in a Virtual World. U.S. Patent No. 9,223,399, 29 December 2015. [Google Scholar]
  144. Kacorri, H.; Huenerfauth, M. Continuous profile models in ASL syntactic facial expression synthesis. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Berlin, Germany, 7–12 August 2016; pp. 2084–2093. [Google Scholar]
  145. Kacorri, H.; Huenerfauth, M. Selecting exemplar recordings of American sign language non-manual expressions for animation synthesis based on manual sign timing. In Proceedings of the 7th Workshop on Speech and Language Processing for Assistive Technologies (INTERSPEECH 2016), San Francisco, CA, USA, 13 September 2016. [Google Scholar]
  146. Kacorri, H.; Syed, A.R.; Huenerfauth, M.; Neidle, C. Centroid-based exemplar selection of ASL non-manual expressions using multidimensional dynamic time warping and mpeg4 features. In Proceedings of the 7th Workshop on the Representation and Processing of the Sign Languages, Language Resources and Evaluation Conference (LREC), Portorož, Slovenia, 23–28 May 2016. [Google Scholar]
  147. Huenerfauth, M.; Lu, P.; Kacorri, H. Synthesizing and Evaluating Animations of American Sign Language Verbs Modeled from Motion-Capture Data. In Proceedings of the SLPAT 2015: 6th Workshop on Speech and Language Processing for Assistive Technologies, Dresden, Germany, 11 September 2015; pp. 22–28. [Google Scholar]
  148. Huenerfauth, M.; Kacorri, H. Augmenting EMBR virtual human animation system with MPEG-4 controls for producing ASL facial expressions. In Proceedings of the International Symposium on Sign Language Translation and Avatar Technology, Paris, France, 9–10 April 2015; Volume 3. [Google Scholar]
  149. Escudeiro, P.; Escudeiro, N.; Norberto, M.; Lopes, J. Jogos Sérios para Língua Gestual Portuguesa. In Proceedings of the Anais dos Workshops do Congresso Brasileiro de Informática na Educação, Maceió, Brazil, 26–30 October 2015. [Google Scholar]
  150. Escudeiro, P.; Escudeiro, N.; Norberto, M.; Lopes, J. Virtual Sign in serious games. In Proceedings of the International Conference on Serious Games, Interaction, and Simulation, Novedrate, Italy, 16–18 September 2015; Springer: Cham, Switzerland, 2015; pp. 42–49. [Google Scholar]
  151. Escudeiro, P.; Escudeiro, N.; Norberto, M.; Lopes, J. Virtualsign translator as a base for a serious game. In Proceedings of the 3rd International Conference on Technological Ecosystems for Enhancing Multiculturality, Porto, Portugal, 7–9 October 2015; pp. 251–255. [Google Scholar]
  152. Escudeiro, P.; Escudeiro, N.; Norberto, M.; Lopes, J. Virtualsign game evaluation. In Proceedings of the International Conference on Serious Games, Interaction, and Simulation, Porto, Portugal, 16–17 June 2016; Springer: Cham, Switzerland, 2016; pp. 117–124. [Google Scholar]
  153. Lu, P.; Huenerfauth, M. CUNY American Sign Language Motion-Capture Corpus: First Release. In Proceedings of the 5th Workshop on the Representation and Processing of Sign Languages: Interactions between Corpus and Lexicon, the 8th International Conference on Language Resources and Evaluation, Istanbul, Turkey, 21–27 May 2012. [Google Scholar]
  154. CNLSE. Corpus de la Lengua de Signos Española. Available online: https://www.cnlse.es/es/corpus-de-la-lengua-de-signos-espa%C3%B1ola (accessed on 14 May 2019).
Figure 1. (a) Search results including keyword “mapping”; (b) Search results including keyword “review”.
Figure 2. Number of included papers during study selection process.
Figure 3. Classification-Years area chart.
Figure 4. (a) Sankey diagram among top 10 authors and publication years [78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122]. (b) Sankey diagram among top author and publication years. (c) Sankey diagram section among top authors and top year.
Figure 5. Topics covered (classification-subclassification bubble chart).
Figure 6. Top authors and classification level chart.
Figure 7. Classifications–Top authors heatmap.
Figure 8. Research teams–years area chart.
Figure 9. Project/SL and country chart.
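The Sankey diagrams in Figure 4 relate the top 10 authors to their publication years, with link widths proportional to the number of mapped studies. The paper does not state which charting tool produced these diagrams, so the following is only a minimal sketch, assuming the plotly library and purely illustrative flow values, of how such an author-to-year Sankey chart can be assembled.

```python
# Hypothetical sketch of an author-year Sankey diagram (cf. Figure 4).
# plotly and the flow values below are assumptions, not the authors' actual tooling or data.
import plotly.graph_objects as go

# Toy data: (author, year, number of mapped studies) triples.
flows = [
    ("Huenerfauth, M.", 2015, 4),
    ("Huenerfauth, M.", 2016, 3),
    ("Escudeiro, P.", 2015, 5),
    ("San-Segundo, R.", 2012, 2),
]

authors = sorted({a for a, _, _ in flows})
years = sorted({str(y) for _, y, _ in flows})
labels = authors + years                     # left nodes: authors, right nodes: years
index = {name: i for i, name in enumerate(labels)}

fig = go.Figure(go.Sankey(
    node=dict(label=labels, pad=15, thickness=15),
    link=dict(
        source=[index[a] for a, _, _ in flows],      # author node of each flow
        target=[index[str(y)] for _, y, _ in flows], # year node of each flow
        value=[n for _, _, n in flows],              # band width = number of studies
    ),
))
fig.update_layout(title_text="Top authors vs. publication years (toy data)")
fig.write_html("sankey_authors_years.html")
```

Thicker bands then indicate years in which an author contributed more studies to the mapped corpus.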
Table 1. Data extraction form.

Data Item | Value | RQ
General | - | -
Study ID | Integer | -
Article Title | Name of the article | -
Authors Names | Set of names of the authors | -
Year of Publication | Calendar year | RQ3
University/Research Center | Name of the university/research center | RQ2
Venue | Name of publication venue | RQ3
Country | Name of the country (or countries) | RQ3
Characterization | - | -
Sign Language-Project | Name of the sign language or project | RQ3, RQ5
Classification | According to predefined scheme | RQ1, RQ6
Sub-classification | According to predefined scheme | RQ2, RQ6
Abstract | Text | RQ4
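Table 1 defines the fields captured for every mapped study and the research questions (RQ1–RQ6) each field serves. As a minimal sketch, not the authors' actual tooling, the extraction form can be represented as a small record type so that the frequency tables and charts reported later can be computed directly from a list of such records; the field names and example values below are illustrative assumptions.

```python
# Hypothetical representation of one row of the data extraction form (Table 1).
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ExtractionRecord:
    study_id: int
    title: str
    authors: List[str]
    year: int                       # RQ3
    institution: str                # RQ2
    venue: str                      # RQ3
    country: str                    # RQ3
    sign_language_or_project: str   # RQ3, RQ5
    classification: str             # RQ1, RQ6 (predefined scheme)
    subclassification: str          # RQ2, RQ6 (predefined scheme)
    abstract: Optional[str] = None  # RQ4

# Example record (illustrative values only):
record = ExtractionRecord(
    study_id=84,
    title="Speech to sign language translation system for Spanish",
    authors=["San-Segundo, R.", "Barra, R."],
    year=2008,
    institution="Example University",
    venue="Speech Communication",
    country="Spain",
    sign_language_or_project="Spanish Sign Language (LSE)",
    classification="Automatic Translation",
    subclassification="Translation",
)
```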
Table 2. Classification–Subclassification top 3 co-occurrence frequency.

Classification | Subclassification | Frequency
Animation Techniques | Avatar | 29
- | Notation | 14
- | Translation | 12
Automatic Translation | Translation | 182
- | Avatar | 104
- | Animation | 68
Avatar | Translation | 18
- | Animation | 32
- | Notation | 32
Computational Model | Avatar | 4
- | Animation | 1
- | Notation | 1
Corpus | Translation | 20
- | Example Based | 1
- | Avatar | 11
Educational | Avatar | 43
- | Translation | 24
- | Animation | 24
Example Based | Translation | 2
- | Animation | 1
- | Corpus | 1
Gesture or Sign Recognition | Translation | 19
- | Machine Learning | 2
- | Avatar | 14
Machine Learning | Translation | 3
- | Notation | 1
- | Recognition | 1
Notation | Translation | 3
- | Avatar | 8
- | Animation | 10
Projects | Translation | 2
- | Avatar | 2
- | Grammar | 1
Rule Based | Translation | 6
- | Avatar | 2
- | Animation | 4
SL Editor | Translation | 2
- | Avatar | 5
- | Animation | 3
SL General-Non technical | Translation | 2
- | Avatar | 2
- | Animation | 2
SL Grammar | Translation | 23
- | Rule Based | 1
- | Avatar | 18
Statistical Based | Translation | 10
- | Avatar | 1
- | Animation | 1
User validation | Translation | 6
- | Avatar | 16
- | Animation | 18
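Table 2 reports, for each classification, the three subclassifications it co-occurs with most frequently across the mapped studies. A minimal sketch of how such a top-3 co-occurrence summary can be derived from the extracted records is shown below; the function name and the toy input are assumptions for illustration, not the authors' code.

```python
# Hypothetical computation of a classification-subclassification top-k table (cf. Table 2).
from collections import Counter
from typing import Iterable, List, Tuple, Dict

def cooccurrence_top_k(
    studies: Iterable[Tuple[str, List[str]]], k: int = 3
) -> Dict[str, List[Tuple[str, int]]]:
    """studies: iterable of (classification, [subclassifications]) per mapped paper.
    Returns, for each classification, its k most frequent subclassifications."""
    counts: Counter = Counter()
    for classification, subclassifications in studies:
        for sub in subclassifications:
            counts[(classification, sub)] += 1

    per_class: Dict[str, List[Tuple[str, int]]] = {}
    for (classification, sub), freq in counts.items():
        per_class.setdefault(classification, []).append((sub, freq))
    return {
        c: sorted(pairs, key=lambda p: p[1], reverse=True)[:k]
        for c, pairs in per_class.items()
    }

# Toy usage:
sample = [
    ("Automatic Translation", ["Translation", "Avatar"]),
    ("Automatic Translation", ["Translation", "Animation"]),
    ("Educational", ["Avatar"]),
]
print(cooccurrence_top_k(sample))
```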
Table 3. Most frequent venues.

Venue | Class | Type | Count
Bachelor Thesis | Thesis | Bachelor's Thesis | 5
Book Chapter or Book | Non-refereed | Book Section or Book | 40
Conference Paper | Peer-reviewed | Conference proceedings | 404
Doctoral Thesis | Thesis | Doctoral dissertation | 23
Journal Article | Peer-reviewed | Journal Article | 259
Master–Grade Thesis | Thesis | Master's thesis | 29
Paper–unknown source | Non-refereed conference proceedings | Non-refereed articles | 46
Patent | Patents and invention disclosures | Granted patent | 6
Poster | Peer-reviewed | Conference proceedings | 4
Technical report | Peer-reviewed scientific articles | Conference proceedings | 1
Web Site Project | Unclassified | Unclassified | 2
Workshop Paper | Peer-reviewed | Conference proceedings | 84
Table 4. Patents granted.

Authors | Reference | Title | Country | Year
WS Jung, HS Kim, JK Jeon, SJ Kim and HW Lee | [138] | Apparatus for bi-directional sign language/speech translation in real time and method | United States | 2018
D Kanevsky, CA Pickover and B Ramabhadran | [139] | Language translation in an environment associated with a virtual application | United States | 2017
D Dharmarajan | [140] | Sign language communication with communication devices | United States | 2017
A Opalka and W Kellard | [141] | Systems and methods for recognition and translation of gestures | United States | 2016
RC Kurzweil | [142] | Use of avatar with event processing | United States | 2015
BR Bokor, AB Smith, DE House, IWB Nicol and PF Haggar | [143] | Translation of gesture responses in a virtual world | United States | 2015
Table 5. Activities conducted in this research.

Phase | Actions | Applied
Need for mapping | Motivate the need and relevance |
 | Define objectives and questions |
 | Consult with target audience to define questions |
Study identification | Choosing search strategy | -
 |     Snowballing |
 |     Manual |
 |     Conduct database search |
 | Develop the search | -
 |     PICO |
 |     Consult librarians or experts |
 |     Iteratively try finding more relevant papers |
 |     Keywords from known papers |
 |     Use standards, encyclopedias, and thesaurus |
 | Evaluate the search |
 |     Test-set of known papers |
 |     Expert evaluates result |
 |     Search web-pages of key authors |
 |     Test–retest |
 | Inclusion and Exclusion | -
 |     Identify objective criteria for decision |
 |     Add additional reviewer, resolve disagreements between them when needed |
 |     Decision rules |
Data extraction and classification | Extraction process | -
 |     Identify objective criteria for decision |
 |     Obscuring information that could bias |
 |     Add additional reviewer, resolve disagreements between them when needed |
 |     Test–retest |
 | Classification scheme |
 |     Research type |
 |     Research method |
 |     Venue type |
Validity discussion | Validity discussion/limitations provided |