Special Issue "Special Issues on Languages Processing"

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: closed (31 October 2017)

Special Issue Editors

Guest Editor
Prof. Ricardo Alexandre Peixoto de Queirós

ESMAD, Polytechnic Institute of Oporto, Portugal; Information Systems and Technologies, Center for Research in Advanced Computing Systems (CRACS), INESC TEC, Portugal
Interests: computer science education; systems architecture; web data and services; languages processing
Guest Editor
Prof. Mário Paulo Teixeira Pinto

ESMAD, Polytechnic Institute of Oporto, Portugal; Information Systems and Technologies; International Society for Professional Innovation Management (ISPIM); Media Arts and Design Research Unit (UNIMAD), Portugal
Interests: computer science education; information and knowledge management systems; multimedia educational resources for learning
Guest Editor
Prof. Carlos Filipe da Silva Portela

Information Systems and Technologies, ALGORITMI Research Centre, University of Minho; ESMAD, Polytechnic Institute of Oporto, Portugal
Interests: data science; pervasive information systems; artificial intelligence; decision support systems; data mining; business intelligence; biomedical informatics

Special Issue Information

Dear Colleagues,

We use languages constantly: first to communicate among ourselves, later to communicate with computers and, more recently, with the advent of networks, to make computers communicate with each other. All these forms of communication use different languages, yet languages that share many similarities. In this Special Issue, we publish extended versions of the best papers selected from the Symposium on Languages, Applications and Technologies (SLATE'17).

This Special Issue addresses the three types of language processing: Human–Human Languages (HHL), Human–Computer Languages (HCL) and Computer–Computer Languages (CCL).

Prof. Ricardo Queirós
Prof. Mário Pinto
Prof. Filipe Portela
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 850 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (6 papers)


Research

Open Access Article: A Survey on Portuguese Lexical Knowledge Bases: Contents, Comparison and Combination
Information 2018, 9(2), 34; doi:10.3390/info9020034
Received: 28 December 2017 / Revised: 25 January 2018 / Accepted: 31 January 2018 / Published: 2 February 2018
Abstract
In the last decade, several lexical-semantic knowledge bases (LKBs) were developed for Portuguese, by different teams and following different approaches. Most of them are open and freely available for the community. Those LKBs are briefly analysed here, with a focus on size, structure, and overlapping contents. However, we go further and exploit all of the analysed LKBs in the creation of new LKBs, based on the redundant contents. Both original and redundancy-based LKBs are then compared, indirectly, based on the performance of automatic procedures that exploit them for solving four different semantic analysis tasks. In addition to conclusions on the performance of the original LKBs, results show that, instead of selecting a single LKB to use, it is generally worth combining the contents of all the open Portuguese LKBs, towards better results.
(This article belongs to the Special Issue Special Issues on Languages Processing)
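The redundancy-based combination described in the abstract can be sketched, in a much simplified form, as keeping only the relation triples that occur in at least k of the input LKBs. The triples below are invented for illustration and do not come from any real Portuguese LKB:

```python
# Sketch of redundancy-based LKB combination: keep only relation triples
# that appear in at least `k` of the input knowledge bases.
from collections import Counter

def combine_lkbs(lkbs, k=2):
    """Return triples present in at least k of the given knowledge bases."""
    counts = Counter(t for lkb in lkbs for t in set(lkb))
    return {t for t, n in counts.items() if n >= k}

# Toy LKBs (invented entries, Portuguese-flavoured for illustration only).
lkb_a = {("carro", "synonym-of", "automóvel"), ("gato", "hypernym", "animal")}
lkb_b = {("carro", "synonym-of", "automóvel"), ("cão", "hypernym", "animal")}
lkb_c = {("gato", "hypernym", "animal")}

redundant = combine_lkbs([lkb_a, lkb_b, lkb_c], k=2)
print(len(redundant))  # 2
```

Raising k trades coverage for confidence, which is the tension the paper evaluates across the four semantic analysis tasks.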
Open Access Article: CSS Preprocessing: Tools and Automation Techniques
Information 2018, 9(1), 17; doi:10.3390/info9010017
Received: 12 November 2017 / Revised: 5 January 2018 / Accepted: 10 January 2018 / Published: 12 January 2018
Abstract
Cascading Style Sheets (CSS) is a W3C specification for a style sheet language used for describing the presentation of a document written in a markup language, more precisely, for styling Web documents. However, in the last few years, the landscape for CSS development has changed dramatically with the appearance of several languages and tools aiming to help developers build clean, modular and performance-aware CSS. These new approaches give developers mechanisms to preprocess CSS rules through the use of programming constructs, defined as CSS preprocessors, with the ultimate goal to bring those missing constructs to the CSS realm and to foster stylesheets structured programming. At the same time, a new set of tools appeared, defined as postprocessors, for extension and automation purposes covering a broad set of features ranging from identifying unused and duplicate code to applying vendor prefixes. With all these tools and techniques in hands, developers need to provide a consistent workflow to foster CSS modular coding. This paper aims to present an introductory survey on the CSS processors. The survey gathers information on a specific set of processors, categorizes them and compares their features regarding a set of predefined criteria such as: maturity, coverage and performance. Finally, we propose a basic set of best practices in order to setup a simple and pragmatic styling code workflow.
(This article belongs to the Special Issue Special Issues on Languages Processing)
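The variable-resolution step that preprocessors perform can be illustrated with a deliberately minimal sketch. The Less-like `@name` syntax below stands in for the much richer constructs (nesting, mixins, functions) that real preprocessors such as Sass and Less provide:

```python
import re

# Minimal illustration of what a CSS preprocessor does: collect variable
# declarations, remove them from the source, and substitute their values
# into the remaining rules, emitting plain CSS.
def preprocess(source):
    variables = dict(re.findall(r"@([\w-]+):\s*([^;]+);", source))
    css = re.sub(r"@([\w-]+):[^;]+;\s*", "", source)   # drop declarations
    return re.sub(r"@([\w-]+)", lambda m: variables[m.group(1)], css)

source = "@brand: #336699;\n.button { color: @brand; }"
print(preprocess(source))  # .button { color: #336699; }
```

A postprocessor, by contrast, would run after this stage on the emitted CSS, e.g. to add vendor prefixes or flag unused selectors.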

Open Access Article: Automata Approach to XML Data Indexing
Information 2018, 9(1), 12; doi:10.3390/info9010012
Received: 1 December 2017 / Revised: 29 December 2017 / Accepted: 3 January 2018 / Published: 6 January 2018
Abstract
The internal structure of XML documents can be viewed as a tree. Trees are among the fundamental and well-studied data structures in computer science. They express a hierarchical structure and are widely used in many applications. This paper focuses on the problem of processing tree data structures; particularly, it studies the XML index problem. Although there exist many state-of-the-art methods, the XML index problem still belongs to the active research areas. However, existing methods usually lack clear references to a systematic approach to the standard theory of formal languages and automata. Therefore, we present some new methods solving the XML index problem using the automata theory. These methods are simple and allow one to efficiently process a small subset of XPath. Thus, having an XML data structure, our methods can be used efficiently as auxiliary data structures that enable answering a particular set of queries, e.g., XPath queries using any combination of the child and descendant-or-self axes. Given an XML tree model with n nodes, the searching phase uses the index, reads an input query of size m, finds the answer in time O(m) and does not depend on the size of the original XML document.
(This article belongs to the Special Issue Special Issues on Languages Processing)
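A drastically simplified illustration of the indexing idea: if every root-to-node label path is precomputed, an absolute child-axis query is answered by a single dictionary probe whose cost depends only on the query length m, not on the document size. The automata constructions in the paper generalise this to descendant axes and shared index structures; the sketch below is not the paper's actual method:

```python
from collections import defaultdict

# Precompute all root-to-node label paths of an XML tree so that an
# absolute child-axis query ("/a/b/c") becomes one dictionary lookup.
def build_path_index(tree):
    """tree: (label, [children]); returns {'/a/b/...': [labels of matches]}."""
    index = defaultdict(list)
    def walk(node, prefix):
        label, children = node
        path = prefix + "/" + label
        index[path].append(label)
        for child in children:
            walk(child, path)
    walk(tree, "")
    return index

# Toy document: <library><book><title/></book><book><title/></book></library>
doc = ("library", [("book", [("title", [])]), ("book", [("title", [])])])
index = build_path_index(doc)
print(len(index["/library/book/title"]))  # 2
```

The index is built once in time proportional to the document; each subsequent query touches only the query string itself.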

Open Access Article: EmoSpell, a Morphological and Emotional Word Analyzer
Information 2018, 9(1), 1; doi:10.3390/info9010001
Received: 29 September 2017 / Revised: 26 November 2017 / Accepted: 7 December 2017 / Published: 3 January 2018
Abstract
The analysis of sentiments, emotions, and opinions in texts is increasingly important in the current digital world. The existing lexicons with emotional annotations for the Portuguese language are oriented to polarities, classifying words as positive, negative, or neutral. To identify the emotional load intended by the author, it is necessary to also categorize the emotions expressed by individual words. EmoSpell is an extension of a morphological analyzer with semantic annotations of the emotional value of words. It uses Jspell as the morphological analyzer and a new dictionary with emotional annotations. This dictionary incorporates the lexical base EMOTAIX.PT, which classifies words based on three different levels of emotions—global, specific, and intermediate. This paper describes the generation of the EmoSpell dictionary using three sources: the Jspell Portuguese dictionary and the lexical bases EMOTAIX.PT and SentiLex-PT. Additionally, this paper details the Web application and Web service that exploit this dictionary. It also presents a validation of the proposed approach using a corpus of student texts with different emotional loads. The validation compares the analyses provided by EmoSpell with the mentioned emotional lexical bases on the ability to recognize emotional words and extract the dominant emotion from a text.
(This article belongs to the Special Issue Special Issues on Languages Processing)
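The dictionary-based core of such an analyzer can be sketched as a lookup from words to emotion labels followed by a frequency count. The miniature lexicon below is invented for illustration; EmoSpell itself derives its dictionary from Jspell, EMOTAIX.PT and SentiLex-PT and handles morphology, which this sketch ignores:

```python
from collections import Counter

# Invented mini-lexicon mapping Portuguese words to a coarse emotion label,
# loosely in the spirit of EMOTAIX.PT's global level of classification.
LEXICON = {"alegria": "benevolence", "medo": "malevolence",
           "calma": "benevolence"}

def dominant_emotion(words):
    """Return the most frequent emotion label among recognised words."""
    tags = Counter(LEXICON[w] for w in words if w in LEXICON)
    return tags.most_common(1)[0][0] if tags else None

print(dominant_emotion(["alegria", "medo", "calma", "hoje"]))  # benevolence
```

Extracting the dominant emotion of a text then reduces to running this count over the analyzer's word stream.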

Open Access Article: Source Code Documentation Generation Using Program Execution
Information 2017, 8(4), 148; doi:10.3390/info8040148
Received: 30 September 2017 / Revised: 13 November 2017 / Accepted: 14 November 2017 / Published: 17 November 2017
Abstract
Automated source code documentation approaches often describe methods in abstract terms, using the words contained in the static source code or code excerpts from repositories. In this paper, we describe DynamiDoc: a simple automated documentation generator based on dynamic analysis. Our representation-based approach traces the program being executed and records string representations of concrete argument values, a return value and a target object state before and after each method execution. Then, for each method, it generates documentation sentences with examples, such as “When called on [3, 1.2] with element = 3, the object changed to [1.2]”. Advantages and shortcomings of the approach are listed. We also found out that the generated sentences are substantially shorter than the methods they describe. According to our small-scale study, the majority of objects in the generated documentation have their string representations overridden, which further confirms the potential usefulness of our approach. Finally, we propose an alternative, variable-based approach that describes the values of individual member variables, rather than the state of an object as a whole.
(This article belongs to the Special Issue Special Issues on Languages Processing)
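The representation-based idea can be sketched with a wrapper that records string representations before and after a call and turns them into an example sentence. This is a hedged approximation for illustration, not DynamiDoc's actual implementation (which instruments Java programs via tracing):

```python
import functools

# Wrap a method: record the target object's repr before and after the call
# plus the argument reprs, and append a documentation sentence to `log`.
def document_calls(method, log):
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        before = repr(self)
        result = method(self, *args, **kwargs)
        after = repr(self)
        arg_text = ", ".join(repr(a) for a in args)
        if before != after:
            log.append(f"When called on {before} with {arg_text}, "
                       f"the object changed to {after}.")
        else:
            log.append(f"When called on {before} with {arg_text}, "
                       f"it returned {result!r}.")
        return result
    return wrapper

log = []
remove = document_calls(list.remove, log)
items = [3, 1.2]
remove(items, 3)
print(log[0])  # When called on [3, 1.2] with 3, the object changed to [1.2].
```

Note how the example sentence mirrors the one quoted in the abstract, and how it is only informative because `list` overrides its string representation.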

Open Access Article: On the Implementation of a Cloud-Based Computing Test Bench Environment for Prolog Systems
Information 2017, 8(4), 129; doi:10.3390/info8040129
Received: 13 September 2017 / Revised: 10 October 2017 / Accepted: 13 October 2017 / Published: 19 October 2017
Abstract
Software testing and benchmarking are key components of the software development process. Nowadays, a good practice in large software projects is the continuous integration (CI) software development technique. The key idea of CI is to let developers integrate their work as they produce it, instead of performing the integration at the end of each software module. In this paper, we extend a previous work on a benchmark suite for the YAP Prolog system, and we propose a fully automated test bench environment for Prolog systems, named Yet Another Prolog Test Bench Environment (YAPTBE), aimed to assist developers in the development and CI of Prolog systems. YAPTBE is based on a cloud computing architecture and relies on the Jenkins framework as well as a new Jenkins plugin to manage the underlying infrastructure. We present the key design and implementation aspects of YAPTBE and show its most important features, such as its graphical user interface (GUI) and the automated process that builds and runs Prolog systems and benchmarks. Full article
(This article belongs to the Special Issue Special Issues on Languages Processing)
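What such a test bench automates for each registered Prolog system can be sketched as: build the system, run every benchmark, and collect timings for later comparison. The commands below are placeholders, not real YAP build commands or Jenkins invocations; in YAPTBE this orchestration is performed by Jenkins and its plugin:

```python
import subprocess
import time

# Illustrative sketch of a build-and-benchmark cycle: build once, then run
# each named benchmark command and record its wall-clock time.
def run_bench(build_cmd, bench_cmds):
    subprocess.run(build_cmd, shell=True, check=True)
    timings = {}
    for name, cmd in bench_cmds.items():
        start = time.perf_counter()
        subprocess.run(cmd, shell=True, check=True)
        timings[name] = time.perf_counter() - start
    return timings

# Placeholder commands; a real setup would build a Prolog system and run
# its benchmark suite here.
timings = run_bench("true", {"noop": "true"})
print(sorted(timings))  # ['noop']
```

Running this cycle on every commit, across several Prolog systems, is precisely the CI scenario the paper's cloud architecture targets.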
