Feature Paper in Informatics

A special issue of Informatics (ISSN 2227-9709).

Deadline for manuscript submissions: closed (31 December 2021) | Viewed by 86758

Special Issue Editor


Prof. Dr. Antony Bryant
Guest Editor
Headingley Campus, Leeds Beckett University, Leeds LS6 3QS, UK
Interests: business process modelling and integration; complexity & chaos theory; formal specification; grounded theory method; information systems development; informatics & information management; knowledge management; methods integration; object orientation; process improvement & capability maturity; qualitative research approaches - particularly grounded theory; research philosophy & methods; software engineering; standards and standardization

Special Issue Information

Dear Colleagues,

The Special Issue “Feature Papers in Informatics” aims to publish high-quality articles covering all fields of Informatics. Its scope includes, but is not limited to, biomedical and health informatics, social informatics, machine learning, data mining and analytics, human–computer interaction, and information and communication systems. If your paper is well prepared and accepted for publication, you may be eligible for a publication discount.

Prof. Dr. Antony Bryant
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Informatics is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (18 papers)


Research


29 pages, 5718 KiB  
Article
Transferrable Framework Based on Knowledge Graphs for Generating Explainable Results in Domain-Specific, Intelligent Information Retrieval
by Hasan Abu-Rasheed, Christian Weber, Johannes Zenkert, Mareike Dornhöfer and Madjid Fathi
Informatics 2022, 9(1), 6; https://doi.org/10.3390/informatics9010006 - 19 Jan 2022
Cited by 1 | Viewed by 4117
Abstract
In modern industrial systems, collected textual data accumulates over time, offering an important source of information for enhancing present and future industrial practices. Although many AI-based solutions have been developed in the literature for domain-specific information retrieval (IR) from this data, the explainability of these systems has rarely been investigated in such domain-specific environments. In addition to considering the domain requirements within an explainable intelligent IR, transferring the explainable IR algorithm to other domains remains an open-ended challenge. This is due to the high costs associated with the intensive customization and knowledge modelling required when developing new explainable solutions for each industrial domain. In this article, we present a transferable framework for generating domain-specific explanations for intelligent IR systems. The aim of our work is to provide a comprehensive approach for constructing explainable IR and recommendation algorithms that are capable of adapting to domain requirements and are usable in multiple domains at the same time. Our method utilizes knowledge graphs (KG) for modeling the domain knowledge. The KG provides a solid foundation for developing intelligent IR solutions. Utilizing the same KG, we develop graph-based components for generating textual and visual explanations of the retrieved information, taking into account the domain requirements and, through this structured approach, supporting transferability to other domain-specific environments. The use of the KG resulted in minimum-to-zero adjustments when creating explanations for multiple intelligent IR algorithms in multiple domains. We test our method in two different use cases: a semiconductor-manufacturing use case and a job-to-applicant-matching one. Our quantitative results show that our approach is highly capable of generating high-level explanations for end users. In addition, the developed explanation components were highly adaptable to both industrial domains without sacrificing the overall accuracy of the intelligent IR algorithm. Furthermore, a qualitative user study was conducted. We recorded a high level of acceptance from the users, who reported an enhanced overall experience with the explainable IR system. Full article
(This article belongs to the Special Issue Feature Paper in Informatics)
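
As a rough illustration of the path-based explanation idea described in this abstract (not code from the paper; the toy knowledge graph, entity names, and relations below are invented for the example), a retrieved item can be explained by verbalising the knowledge-graph path that connects it to the query:

```python
# Illustrative sketch only: a toy knowledge graph and a path-based textual
# explanation, loosely in the spirit of the framework described above.
# All entities, relations, and helper names are invented for this example.
import networkx as nx

kg = nx.DiGraph()
kg.add_edge("etch_defect_report_17", "plasma_etcher_A", relation="observed_on")
kg.add_edge("plasma_etcher_A", "chamber_pressure", relation="has_parameter")
kg.add_edge("maintenance_note_42", "chamber_pressure", relation="mentions")

def explain_retrieval(graph, query_entity, retrieved_entity):
    """Verbalise the shortest undirected path between query and result."""
    path = nx.shortest_path(graph.to_undirected(as_view=True),
                            query_entity, retrieved_entity)
    steps = []
    for a, b in zip(path, path[1:]):
        data = graph.get_edge_data(a, b) or graph.get_edge_data(b, a)
        steps.append(f"'{a}' is linked to '{b}' via '{data['relation']}'")
    return "Retrieved because " + "; ".join(steps) + "."

print(explain_retrieval(kg, "etch_defect_report_17", "maintenance_note_42"))
```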

24 pages, 387 KiB  
Article
Searching Deterministic Chaotic Properties in System-Wide Vulnerability Datasets
by Ioannis Tsantilis, Thomas K. Dasaklis, Christos Douligeris and Constantinos Patsakis
Informatics 2021, 8(4), 86; https://doi.org/10.3390/informatics8040086 - 04 Dec 2021
Cited by 1 | Viewed by 2302
Abstract
Cybersecurity is a never-ending battle against attackers, who try to identify and exploit misconfigurations and software vulnerabilities before they are patched. In this ongoing conflict, it is important to analyse the properties of the vulnerability time series to understand when information systems are more vulnerable. We study computer systems’ software vulnerabilities and probe the relevant National Vulnerability Database (NVD) time-series properties. More specifically, we show through an extensive experimental study based on the National Institute of Standards and Technology (NIST) database that the relevant systems software time series present significant chaotic properties. Moreover, by defining some systems based on open and closed source software, we compare their chaotic properties and draw statistical conclusions. The contribution of this novel study is focused on the preprocessing stage of vulnerability time-series forecasting. The strong evidence of chaotic properties derived from this research effort could lead to a deeper analysis that provides additional tools for the forecasting process. Full article
(This article belongs to the Special Issue Feature Paper in Informatics)
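
A minimal sketch of one standard chaos indicator used in this kind of analysis, the largest Lyapunov exponent estimated with a simplified Rosenstein-style procedure, is shown below; the synthetic logistic-map series and all parameter choices are illustrative assumptions rather than the paper's actual setup:

```python
# Minimal sketch: largest-Lyapunov-exponent estimate (simplified Rosenstein
# method) on a delay-embedded time series. Parameters are illustrative only.
import numpy as np

def delay_embed(x, dim=4, tau=1):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def largest_lyapunov(x, dim=4, tau=1, follow=20):
    emb = delay_embed(np.asarray(x, dtype=float), dim, tau)
    n = len(emb)
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
    np.fill_diagonal(dists, np.inf)
    neighbours = dists.argmin(axis=1)          # nearest neighbour of each point
    divergence = []
    for k in range(1, follow):
        valid = np.arange(n - k)
        pairs = neighbours[valid]
        ok = pairs + k < n
        d = np.linalg.norm(emb[valid[ok] + k] - emb[pairs[ok] + k], axis=1)
        divergence.append(np.mean(np.log(d[d > 0])))
    # Slope of mean log-divergence vs. time approximates the exponent.
    return np.polyfit(np.arange(1, follow), divergence, 1)[0]

# Synthetic chaotic series (logistic map) purely as a stand-in for NVD data.
x, series = 0.4, []
for _ in range(600):
    x = 3.99 * x * (1 - x)
    series.append(x)
print("estimated largest Lyapunov exponent:", round(largest_lyapunov(series), 3))
```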

12 pages, 255 KiB  
Article
Literature Review of Deep Network Compression
by Ali Alqahtani, Xianghua Xie and Mark W. Jones
Informatics 2021, 8(4), 77; https://doi.org/10.3390/informatics8040077 - 17 Nov 2021
Cited by 20 | Viewed by 4398
Abstract
Deep networks often possess a vast number of parameters, and their significant redundancy in parameterization has become a widely recognized property. This presents significant challenges and restricts many deep learning applications, making it important to focus on reducing the complexity of models while maintaining their powerful performance. In this paper, we present an overview of popular methods and review recent works on compressing and accelerating deep neural networks. We consider not only pruning methods but also quantization and low-rank factorization methods. This review also intends to clarify these major concepts and to highlight their characteristics, advantages, and shortcomings. Full article
(This article belongs to the Special Issue Feature Paper in Informatics)
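
Two of the reviewed technique families, magnitude pruning and low-rank factorization, can be illustrated with a short NumPy sketch on a random weight matrix (the sparsity level and rank below are arbitrary assumptions, not values from the review):

```python
# Toy sketch of two compression techniques surveyed above:
# (1) magnitude pruning and (2) low-rank factorisation via truncated SVD.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 512))          # stand-in for a dense layer's weights

# (1) Magnitude pruning: zero out the 80% of weights with smallest |w|.
threshold = np.quantile(np.abs(W), 0.80)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# (2) Low-rank factorisation: keep the top-r singular components, so the
# layer can be stored as two thin matrices A (m x r) and B (r x n).
r = 32
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A, B = U[:, :r] * s[:r], Vt[:r, :]
W_lowrank = A @ B

print("pruned nonzeros:", np.count_nonzero(W_pruned), "of", W.size)
print("low-rank params:", A.size + B.size, "vs dense", W.size)
print("relative reconstruction error:",
      round(np.linalg.norm(W - W_lowrank) / np.linalg.norm(W), 3))
```
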
13 pages, 856 KiB  
Article
Arabic Offensive and Hate Speech Detection Using a Cross-Corpora Multi-Task Learning Model
by Wassen Aldjanabi, Abdelghani Dahou, Mohammed A. A. Al-qaness, Mohamed Abd Elaziz, Ahmed Mohamed Helmi and Robertas Damaševičius
Informatics 2021, 8(4), 69; https://doi.org/10.3390/informatics8040069 - 08 Oct 2021
Cited by 42 | Viewed by 5145
Abstract
As social media platforms offer a medium for opinion expression, social phenomena such as hatred, offensive language, racism, and all forms of verbal violence have increased dramatically. These behaviors are not confined to specific countries, groups, or communities; they extend into people’s everyday lives. This study investigates offensive and hate speech on Arab social media with the aim of building an accurate offensive and hate speech detection system. More precisely, we develop a classification system for determining offensive and hate speech using a multi-task learning (MTL) model built on top of a pre-trained Arabic language model. We train the MTL model on the same task using cross-corpora that represent variation in offensive and hateful contexts, in order to learn both global and dataset-specific contextual representations. The developed MTL model showed strong performance and outperformed existing models in the literature on three out of four datasets for Arabic offensive and hate speech detection tasks. Full article
(This article belongs to the Special Issue Feature Paper in Informatics)
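
The multi-task setup described here, a shared encoder with task-specific heads, can be sketched as follows; the bag-of-embeddings encoder stands in for the pre-trained Arabic language model purely to keep the example self-contained, and all sizes are illustrative:

```python
# Minimal sketch of a multi-task classifier: one shared (placeholder) text
# encoder with two task-specific heads, one for offensive-language detection
# and one for hate-speech detection. In the paper the shared encoder is a
# pre-trained Arabic language model; here a small bag-of-embeddings encoder
# stands in so the example stays self-contained.
import torch
import torch.nn as nn

class MultiTaskClassifier(nn.Module):
    def __init__(self, vocab_size=5000, hidden=128):
        super().__init__()
        self.embedding = nn.EmbeddingBag(vocab_size, hidden)   # shared encoder
        self.offensive_head = nn.Linear(hidden, 2)             # task 1
        self.hate_head = nn.Linear(hidden, 2)                  # task 2

    def forward(self, token_ids, task):
        shared = self.embedding(token_ids)
        head = self.offensive_head if task == "offensive" else self.hate_head
        return head(shared)

model = MultiTaskClassifier()
batch = torch.randint(0, 5000, (8, 32))            # 8 dummy sentences, 32 tokens
logits = model(batch, task="hate")
loss = nn.functional.cross_entropy(logits, torch.randint(0, 2, (8,)))
loss.backward()                                     # gradients reach the shared encoder
print(logits.shape)                                 # torch.Size([8, 2])
```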

33 pages, 646 KiB  
Article
Convolutional Extreme Learning Machines: A Systematic Review
by Iago Richard Rodrigues, Sebastião Rogério da Silva Neto, Judith Kelner, Djamel Sadok and Patricia Takako Endo
Informatics 2021, 8(2), 33; https://doi.org/10.3390/informatics8020033 - 13 May 2021
Cited by 15 | Viewed by 4197
Abstract
Much work has recently identified the need to combine deep learning with extreme learning in order to strike a balance between performance and accuracy, especially in the domain of multimedia applications. When considering this new paradigm—namely, the convolutional extreme learning machine (CELM)—we present a systematic review that investigates alternative deep learning architectures that use the extreme learning machine (ELM) for faster training to solve problems that are based on image analysis. We detail each of the architectures that are found in the literature along with their application scenarios, benchmark datasets, main results, and advantages, and then present the open challenges for CELM. We followed a well-structured methodology and established relevant research questions that guided our findings. Based on 81 primary studies, we found that object recognition is the most common problem solved by CELM, and that a CNN with predefined kernels is the most common CELM architecture proposed in the literature. The results from experiments show that CELM models present good precision, convergence, and computational performance, and they are able to decrease the total processing time that is required by the learning process. The results presented in this systematic review are expected to contribute to the research area of CELM, providing a good starting point for dealing with some of the current problems in the analysis of computer vision based on images. Full article
(This article belongs to the Special Issue Feature Paper in Informatics)
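
The core CELM idea, fixed random convolutional kernels combined with output weights solved in closed form, can be sketched in a few lines; the random toy data, kernel count, and ridge parameter below are assumptions for illustration only:

```python
# Sketch of a convolutional extreme learning machine (CELM): fixed random
# convolutional kernels extract features, and only the output weights are
# learned in closed form (ridge regression). The data is random noise,
# purely to keep the example self-contained.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)

def celm_features(images, kernels):
    """Random convolution + ReLU + flatten; nothing here is trained."""
    feats = []
    for img in images:
        maps = [np.maximum(convolve2d(img, k, mode="valid"), 0.0) for k in kernels]
        feats.append(np.concatenate([m.ravel() for m in maps]))
    return np.asarray(feats)

# Toy data: 200 random 16x16 "images", 3 classes.
X = rng.normal(size=(200, 16, 16))
y = rng.integers(0, 3, size=200)
Y = np.eye(3)[y]                                        # one-hot targets

kernels = [rng.normal(size=(3, 3)) for _ in range(8)]   # fixed, never updated
H = celm_features(X, kernels)

# Closed-form output weights (ridge-regularised least squares).
lam = 1e-2
beta = np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ Y)

pred = (H @ beta).argmax(axis=1)
print("training accuracy on random data:", (pred == y).mean())
```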

17 pages, 1216 KiB  
Article
Feasibility Study on the Role of Personality, Emotion, and Engagement in Socially Assistive Robotics: A Cognitive Assessment Scenario
by Alessandra Sorrentino, Gianmaria Mancioppi, Luigi Coviello, Filippo Cavallo and Laura Fiorini
Informatics 2021, 8(2), 23; https://doi.org/10.3390/informatics8020023 - 26 Mar 2021
Cited by 6 | Viewed by 3277
Abstract
This study aims to investigate the role of several aspects that may influence human–robot interaction in assistive scenarios. In particular, we focused on semi-permanent qualities (i.e., personality and cognitive state) and temporal traits (i.e., emotion and engagement) of the user profile. To this end, we organized an experimental session with 11 elderly users who performed a cognitive assessment with the non-humanoid ASTRO robot. The ASTRO robot administered the Mini-Mental State Examination test in a Wizard-of-Oz setup. Temporal and long-term qualities of each user profile were assessed by self-report questionnaires and by behavioral features extracted from the recorded videos. Results highlighted that the quality of the interaction did not depend on the cognitive state of the participants. On the contrary, the cognitive assessment with the robot significantly reduced the users’ anxiety by enhancing their trust in the robotic entity. This suggests that the personality and affect traits of the interacting user have a fundamental influence on the quality of the interaction, also in socially assistive contexts. Full article
(This article belongs to the Special Issue Feature Paper in Informatics)

14 pages, 608 KiB  
Article
Exploring the Interdependence Theory of Complementarity with Case Studies. Autonomous Human–Machine Teams (A-HMTs)
by William F. Lawless
Informatics 2021, 8(1), 14; https://doi.org/10.3390/informatics8010014 - 26 Feb 2021
Cited by 3 | Viewed by 2580
Abstract
Rational models of human behavior aim to predict, and possibly control, humans. There are two primary models: the cognitive model, which treats behavior as implicit, and the behavioral model, which treats beliefs as implicit. The cognitive model reigned supreme until reproducibility issues arose, including Axelrod’s prediction that cooperation produces the best outcomes for societies. In contrast, by dismissing the value of beliefs, predictions of behavior improved dramatically, but only in situations where beliefs were suppressed or unimportant, or in low-risk, highly certain environments, e.g., enforced cooperation. Moreover, rational models lack supporting evidence for their mathematical predictions, impeding generalization to artificial intelligence (AI), and they cannot scale to teams or systems. Their fatal flaw, however, is that they fail in the presence of uncertainty or conflict. These shortcomings leave rational models ill-prepared to assist the technical revolution posed by autonomous human–machine teams (A-HMTs) or autonomous systems. For A-HMTs, we have developed the interdependence theory of complementarity, largely overlooked because of the bewilderment interdependence causes in the laboratory. Where the rational model fails in the face of uncertainty or conflict, interdependence theory thrives. The best human science teams are fully interdependent; intelligence has been located in the interdependent interactions of teammates, and interdependence is quantum-like. We have reported in the past that, facing uncertainty, human debate exploits the interdependent bistable views of reality in tradeoffs seeking the best path forward. Explaining uncertain contexts, which no single agent can determine alone, necessitates that members of A-HMTs express their actions in causal terms, however imperfectly. Our purpose in this paper is to review our two newest discoveries, both of which generalize and scale: first, new theory that separates entropy production from structure and performance; and second, the finding that the informatics of vulnerability generated during competition propels evolution, invisible to the theories and practices of cooperation. Full article
(This article belongs to the Special Issue Feature Paper in Informatics)

29 pages, 3516 KiB  
Article
Deep Learning for Enterprise Systems Implementation Lifecycle Challenges: Research Directions
by Hossam El-Din Hassanien and Ahmed Elragal
Informatics 2021, 8(1), 11; https://doi.org/10.3390/informatics8010011 - 20 Feb 2021
Cited by 3 | Viewed by 3263
Abstract
Transforming the state-of-the-art definition and anatomy of enterprise systems (ESs) seems, to some academics and practitioners, an unavoidable destiny. Value depletion led by the early retirement and/or replacement of ES solutions has been a constant throughout the past decade. This has driven an enormous amount of research addressing the problems behind the resource drain. The waste has persisted throughout the ES implementation lifecycle phases and dimensions, especially the post-live phases, depleting the value of the social and technical dimensions of the lifecycle. Parallel to this research stream, the momentum gained by deep learning (DL) algorithms and platforms has grown exponentially, fuelling advances toward artificial intelligence and automated augmentation. Correspondingly, this paper sets out to present five key research directions through which DL could contribute to transforming the ES state of the art. The paper reviews the ES implementation lifecycle challenges and their intersection with DL research on ESs by analyzing and synthesizing key basket journals (the journal basket of the Association for Information Systems). The paper also presents results from several experiments showcasing the effectiveness of DL in adding a level of augmentation to ESs, by analyzing a large set of data extracted from the Atlassian Jira Software Issue Tracking System across different ecosystems. The paper then concludes by presenting the research directions and discussing socio-technical research avenues that address key frontiers identified within this work. Full article
(This article belongs to the Special Issue Feature Paper in Informatics)

27 pages, 749 KiB  
Article
Towards a Better Integration of Fuzzy Matches in Neural Machine Translation through Data Augmentation
by Arda Tezcan, Bram Bulté and Bram Vanroy
Informatics 2021, 8(1), 7; https://doi.org/10.3390/informatics8010007 - 29 Jan 2021
Cited by 10 | Viewed by 3640
Abstract
We identify a number of aspects that can boost the performance of Neural Fuzzy Repair (NFR), an easy-to-implement method to integrate translation memory matches and neural machine translation (NMT). We explore various ways of maximising the added value of retrieved matches within the NFR paradigm for eight language combinations, using Transformer NMT systems. In particular, we test the impact of different fuzzy matching techniques, sub-word-level segmentation methods and alignment-based features on overall translation quality. Furthermore, we propose a fuzzy match combination technique that aims to maximise the coverage of source words. This is supplemented with an analysis of how translation quality is affected by input sentence length and fuzzy match score. The results show that applying a combination of the tested modifications leads to a significant increase in estimated translation quality over all baselines for all language combinations. Full article
(This article belongs to the Special Issue Feature Paper in Informatics)
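
The data-augmentation idea behind Neural Fuzzy Repair, appending the target side of a retrieved fuzzy match to the source sentence, can be sketched as follows; the separator token, similarity measure, and threshold are illustrative assumptions rather than the paper's exact configuration:

```python
# Sketch of the fuzzy-match augmentation idea behind NFR: retrieve the most
# similar translation-memory source sentence, then append its target side to
# the new source sentence (separator token and similarity threshold are
# illustrative assumptions, not the exact setup of the paper).
from difflib import SequenceMatcher

translation_memory = [
    ("the committee approved the annual report", "le comité a approuvé le rapport annuel"),
    ("the committee rejected the proposal", "le comité a rejeté la proposition"),
]

def fuzzy_match(source, memory):
    """Return (score, tm_source, tm_target) of the best match."""
    scored = [(SequenceMatcher(None, source, src).ratio(), src, tgt)
              for src, tgt in memory]
    return max(scored)

def augment(source, memory, threshold=0.5, sep=" @@@ "):
    score, _, tm_target = fuzzy_match(source, memory)
    # Below the threshold the match is likely to hurt more than help.
    return source + sep + tm_target if score >= threshold else source

print(augment("the committee approved the new report", translation_memory))
```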

23 pages, 1900 KiB  
Article
An Affective and Cognitive Toy to Support Mood Disorders
by Esperanza Johnson, Iván González, Tania Mondéjar, Luis Cabañero-Gómez, Jesús Fontecha and Ramón Hervás
Informatics 2020, 7(4), 48; https://doi.org/10.3390/informatics7040048 - 31 Oct 2020
Cited by 3 | Viewed by 3637
Abstract
Affective computing is a branch of artificial intelligence that aims to process and interpret emotions. In this study, we implemented sensors/actuators into a stuffed toy mammoth, giving the toy an affective and cognitive basis for its communication. The goal is for therapists to use it as a tool during therapy sessions with patients with mood disorders. The toy detects emotion and provides dialogue that guides a session aimed at working on emotional regulation and perception. These technical capabilities are made possible by employing IBM Watson’s services, implemented on a Raspberry Pi Zero. In this paper, we delve into its evaluation with neurotypical adolescents, a panel of experts, and other professionals. The aims of the evaluation were to perform technical and application validation for use in therapy sessions. The results of the evaluations are generally positive, with 87% accuracy for emotion recognition and average usability scores of 77.5 for experts (n = 5) and 64.35 for professionals (n = 23). We also report some of the issues encountered, their effects on applicability, and future work to be done. Full article
(This article belongs to the Special Issue Feature Paper in Informatics)

13 pages, 423 KiB  
Article
Mapping International Civic Technologies Platforms
by Aelita Skaržauskienė and Monika Mačiulienė
Informatics 2020, 7(4), 46; https://doi.org/10.3390/informatics7040046 - 21 Oct 2020
Cited by 7 | Viewed by 2781
Abstract
The new communication paradigm supported by Information and Communication Technology (ICT) puts end-users at the center of innovation processes, thereby shifting the emphasis from technology to people. Citizen-centric approaches in public management research, such as New Public Governance and Open Government, suggest that government alone cannot be responsible for creating public value. Traditional approaches to public engagement and governmental reforms remain relevant; however, our research is more interested in the ability of a networked society to resolve social problems for itself, i.e., without government intervention. In seeking to gain insights into bottom-up co-creation processes, this paper aims to collect and generalize information on international civic technology platforms by focusing on three dimensions: identification of the objectives (content), classification of the main stakeholder groups (actors), and definition of co-creative methods (processes). In view of the paucity of research on Civic Technologies, the content analysis extends the understanding of this growing field and allows us to identify patterns in their development. Full article
(This article belongs to the Special Issue Feature Paper in Informatics)

27 pages, 3773 KiB  
Article
Review of Kalah Game Research and the Proposition of a Novel Heuristic–Deterministic Algorithm Compared to Tree-Search Solutions and Human Decision-Making
by Libor Pekař, Radek Matušů, Jiří Andrla and Martina Litschmannová
Informatics 2020, 7(3), 34; https://doi.org/10.3390/informatics7030034 - 14 Sep 2020
Viewed by 5384
Abstract
The Kalah game represents the most popular version of probably the oldest board game ever—the Mancala game. From this viewpoint, the art of playing Kalah can contribute to cultural heritage. This paper primarily focuses on a review of Kalah history and on a survey of the research conducted so far on solving and analyzing the Kalah game (and some other related Mancala games). This review concludes that, even though strong in-depth tree-search solutions for some types of the game have already been published, it is still reasonable to develop less time-consuming and less computationally demanding playing algorithms and strategies. Therefore, the paper also presents an original heuristic algorithm based on particular deterministic strategies arising from the analysis of the game rules. Standard and modified minimax tree-search algorithms are introduced as well. A simple C++ application with the Qt framework is developed to perform the algorithm verification and comparative experiments. Two sets of benchmark tests are made: first, a tournament in which a mid-experienced amateur human player competes with the three algorithms; then, a round-robin tournament of all the algorithms. It can be deduced that the proposed heuristic algorithm has comparable success to the human player and to low-depth tree-search solutions. Moreover, multiple-case experiments proved that the opening move has a decisive impact on winning or losing. Namely, if the computer plays first, the human opponent cannot beat it; conversely, if the computer using the heuristic algorithm plays second, it nearly always loses. Full article
(This article belongs to the Special Issue Feature Paper in Informatics)
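
The tree-search baseline discussed in this abstract can be illustrated with a depth-limited minimax sketch for a simplified Kalah position (sowing only, without the capture and extra-move rules, and without the authors' heuristic strategies):

```python
# Sketch of a depth-limited minimax search for a simplified Kalah position
# (sowing only: no capture or extra-move rules, evaluation = store difference).
# This illustrates the tree-search baseline discussed above, not the authors'
# heuristic algorithm.

def legal_moves(board, player):
    pits = range(0, 6) if player == 0 else range(7, 13)
    return [p for p in pits if board[p] > 0]

def sow(board, pit, player):
    board = board[:]
    seeds, pos = board[pit], pit
    board[pit] = 0
    skip = 13 if player == 0 else 6          # never sow the opponent's store
    while seeds:
        pos = (pos + 1) % 14
        if pos == skip:
            continue
        board[pos] += 1
        seeds -= 1
    return board

def minimax(board, depth, player, maximizing):
    moves = legal_moves(board, player)
    if depth == 0 or not moves:
        return board[6] - board[13], None    # store difference for player 0
    best_move, best_val = None, float("-inf") if maximizing else float("inf")
    for move in moves:
        val, _ = minimax(sow(board, move, player), depth - 1, 1 - player, not maximizing)
        if (maximizing and val > best_val) or (not maximizing and val < best_val):
            best_val, best_move = val, move
    return best_val, best_move

start = [4] * 6 + [0] + [4] * 6 + [0]        # standard 4-seed Kalah layout
print(minimax(start, depth=4, player=0, maximizing=True))
```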

37 pages, 6332 KiB  
Article
Modelling User Preference for Embodied Artificial Intelligence and Appearance in Realistic Humanoid Robots
by Carl Strathearn and Minhua Ma
Informatics 2020, 7(3), 28; https://doi.org/10.3390/informatics7030028 - 31 Jul 2020
Cited by 4 | Viewed by 5476
Abstract
Realistic humanoid robots (RHRs) with embodied artificial intelligence (EAI) have numerous applications in society as the human face is the most natural interface for communication and the human body the most effective form for traversing the manmade areas of the planet. Thus, developing RHRs with high degrees of human-likeness provides a life-like vessel for humans to physically and naturally interact with technology in a manner insurmountable to any other form of non-biological human emulation. This study outlines a human–robot interaction (HRI) experiment employing two automated RHRs with a contrasting appearance and personality. The selective sample group employed in this study is composed of 20 individuals, categorised by age and gender for a diverse statistical analysis. Galvanic skin response, facial expression analysis, and AI analytics permitted cross-analysis of biometric and AI data with participant testimonies to reify the results. This study concludes that younger test subjects preferred HRI with a younger-looking RHR and the more senior age group with an older looking RHR. Moreover, the female test group preferred HRI with an RHR with a younger appearance and male subjects with an older looking RHR. This research is useful for modelling the appearance and personality of RHRs with EAI for specific jobs such as care for the elderly and social companions for the young, isolated, and vulnerable. Full article
(This article belongs to the Special Issue Feature Paper in Informatics)

21 pages, 528 KiB  
Article
Rapid Development of Competitive Translation Engines for Access to Multilingual COVID-19 Information
by Andy Way, Rejwanul Haque, Guodong Xie, Federico Gaspari, Maja Popović and Alberto Poncelas
Informatics 2020, 7(2), 19; https://doi.org/10.3390/informatics7020019 - 17 Jun 2020
Cited by 10 | Viewed by 4981
Abstract
Every day, more people are becoming infected and dying from exposure to COVID-19. Some countries in Europe, such as Spain, France, the UK, and Italy, have suffered particularly badly from the virus. Others, such as Germany, appear to have coped extremely well. Both health professionals and the general public are keen to receive up-to-date information on the effects of the virus, as well as on treatments that have proven to be effective. In cases where language is a barrier to accessing pertinent information, machine translation (MT) may help people assimilate information published in different languages. Our MT systems trained on COVID-19 data are freely available for anyone to use to help translate information (such as promoting good practice for symptom identification, prevention, and treatment) published in German, French, Italian, and Spanish into English, as well as in the reverse direction. Full article
(This article belongs to the Special Issue Feature Paper in Informatics)

19 pages, 611 KiB  
Article
Quantifying the Effect of Machine Translation in a High-Quality Human Translation Production Process
by Lieve Macken, Daniel Prou and Arda Tezcan
Informatics 2020, 7(2), 12; https://doi.org/10.3390/informatics7020012 - 23 Apr 2020
Cited by 20 | Viewed by 11762
Abstract
This paper studies the impact of machine translation (MT) on the translation workflow at the Directorate-General for Translation (DGT), focusing on two language pairs and two MT paradigms: English-into-French with statistical MT and English-into-Finnish with neural MT. We collected data from 20 professional translators at DGT while they carried out real translation tasks in normal working conditions. The participants enabled/disabled MT for half of the segments in each document. They filled in a survey at the end of the logging period. We measured the productivity gains (or losses) resulting from the use of MT and examined the relationship between technical effort and temporal effort. The results show that while the usage of MT leads to productivity gains on average, this is not the case for all translators. Moreover, the two technical effort indicators used in this study show weak correlations with post-editing time. The translators’ perception of their speed gains was more or less in line with the actual results. Reduction of typing effort is the most frequently mentioned reason why participants preferred working with MT, but also the psychological benefits of not having to start from scratch were often mentioned. Full article
(This article belongs to the Special Issue Feature Paper in Informatics)
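
The kind of analysis reported here, per-condition translation speed plus a rank correlation between a technical-effort indicator and post-editing time, can be sketched with toy numbers (all values below are invented for illustration):

```python
# Sketch of the kind of analysis described above: per-condition translation
# speed and the (rank) correlation between a technical-effort indicator and
# post-editing time. All numbers are invented toy data.
import numpy as np
from scipy.stats import spearmanr

# seconds spent and words translated per segment, with and without MT
time_with_mt = np.array([42, 55, 38, 61, 47], dtype=float)
time_without_mt = np.array([58, 70, 52, 66, 59], dtype=float)
words = np.array([18, 25, 15, 27, 20], dtype=float)

speed_with = 3600 * words.sum() / time_with_mt.sum()       # words per hour
speed_without = 3600 * words.sum() / time_without_mt.sum()
print(f"with MT: {speed_with:.0f} w/h, without MT: {speed_without:.0f} w/h")

# technical effort (e.g., keystrokes per segment) vs. temporal effort
keystrokes = np.array([120, 180, 90, 210, 140], dtype=float)
rho, p = spearmanr(keystrokes, time_with_mt)
print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")
```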

15 pages, 754 KiB  
Article
A Snapshot of Bystander Attitudes about Mobile Live-Streaming Video in Public Settings
by Cori Faklaris, Francesco Cafaro, Asa Blevins, Matthew A. O’Haver and Neha Singhal
Informatics 2020, 7(2), 10; https://doi.org/10.3390/informatics7020010 - 27 Mar 2020
Cited by 8 | Viewed by 6096
Abstract
With the advent of mobile apps such as Periscope, Facebook Live, and now TikTok, live-streaming video has become a commonplace form of social computing. It has not been clear, however, to what extent the current ubiquity of smartphones is impacting this technology’s acceptance in everyday social situations, and how mobile contexts or affordances will affect and be affected by shifts in social norms and policy debates regarding privacy, surveillance, and intellectual property. This ethnographic-style research provides a snapshot of attitudes about the technology among a sample of US participants in two public contexts, both held outdoors in August 2016: A sports tailgating event and a meeting event. Interviews with n = 20 bystanders revealed that many are not fully aware of when their image or speech is being live-streamed in a casual context, and some want stronger notifications of and ability to consent to such broadcasting. We offer design recommendations to help bridge this socio-technical gap. Full article
(This article belongs to the Special Issue Feature Paper in Informatics)

Review


35 pages, 41376 KiB  
Review
Modern Scientific Visualizations on the Web
by Loraine Franke and Daniel Haehn
Informatics 2020, 7(4), 37; https://doi.org/10.3390/informatics7040037 - 24 Sep 2020
Cited by 10 | Viewed by 7821
Abstract
Modern scientific visualization is web-based and uses emerging technology such as WebGL (Web Graphics Library) and WebGPU for three-dimensional computer graphics and WebXR for augmented and virtual reality devices. These technologies, paired with the accessibility of websites, potentially offer a user experience beyond traditional standalone visualization systems. We review the state-of-the-art of web-based scientific visualization and present an overview of existing methods categorized by application domain. As part of this analysis, we introduce the Scientific Visualization Future Readiness Score (SciVis FRS) to rank visualizations for a technology-driven disruptive tomorrow. We then summarize challenges, current state of the publication trend, future directions, and opportunities for this exciting research field. Full article
(This article belongs to the Special Issue Feature Paper in Informatics)

25 pages, 349 KiB  
Review
What the Web Has Wrought
by Antony Bryant
Informatics 2020, 7(2), 15; https://doi.org/10.3390/informatics7020015 - 19 May 2020
Cited by 1 | Viewed by 4565
Abstract
In 1989, Sir Tim Berners-Lee proposed the development of ‘a large hypertext database with typed links’, which eventually became The World Wide Web. It was rightly heralded at the time as a significant development and a boon for one-and-all as the digital age flourished both in terms of universal accessibility and affordability. The general anticipation was that this could herald an era of universal friendship and knowledge-sharing, ushering in global cooperation and mutual regard. In November 2019, marking 30 years of the Web, Berners-Lee lamented that its initial promise was being largely undermined, and that we were in danger of heading towards a ‘digital dystopia’: What happened? Full article
(This article belongs to the Special Issue Feature Paper in Informatics)