Special Issue "Selected Papers from the 24th International Conference on Information and Software Technologies (ICIST 2018)"

A special issue of Computers (ISSN 2073-431X).

Deadline for manuscript submissions: closed (31 January 2019)

Special Issue Editors

Guest Editor
Prof. Robertas Damaševičius

Kaunas University of Technology, Kaunas, Lithuania
Website | E-Mail
Interests: sustainable software engineering; IT applications; smart learning; gamification patterns
Guest Editor
Dr. Marcin Woźniak

Faculty of Applied Mathematics, Silesian University of Technology, Gliwice, Poland
Website | E-Mail
Phone: 0048 032 237 13 41
Interests: computational intelligence; heuristic methods; neural networks

Special Issue Information

Dear Colleagues,

The ICIST Conference is hosted by the largest technical university in the Baltic States, Kaunas University of Technology (Lithuania). ICIST 2018 aims to bring together researchers, engineers, developers, and practitioners from academia and industry working in all major and interdisciplinary areas of Information Systems, Business Intelligence, Software Engineering, and Information Technology Applications. The conference will feature original research and application papers on the theory, design, and implementation of modern information systems, software systems, and IT applications. In 2018, the conference will be held in Vilnius on 4–6 October.

Authors of selected papers presented at the conference are invited to submit extended versions to this Special Issue of the journal Computers. Submitted papers should be extended to the length of regular research or review articles and contain at least 50% new results. All submitted papers will undergo our standard peer-review procedure. Accepted papers will be published in open access format in Computers and collected together on the Special Issue website. There are no page charges for this journal.

Please prepare and format your paper according to the Instructions for Authors. Use the LaTeX or Microsoft Word template file of the journal (both are available from the Instructions for Authors page). Manuscripts should be submitted online via our susy.mdpi.com editorial system.

Prof. Robertas Damaševičius
Dr. Marcin Woźniak
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computers is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 550 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (5 papers)


Research

Open Access Article: The Use of an Artificial Neural Network to Process Hydrographic Big Data during Surface Modeling
Received: 30 January 2019 / Revised: 5 March 2019 / Accepted: 11 March 2019 / Published: 14 March 2019
Abstract
At the present time, spatial data are often acquired using varied remote sensing sensors and systems, which produce big data sets. One significant product from these data is a digital model of geographical surfaces, including the surface of the sea floor. To improve data processing, presentation, and management, it is often indispensable to reduce the number of data points. This paper presents research regarding the application of artificial neural networks to bathymetric data reduction. This research considers results from radial networks and self-organizing Kohonen networks. During reconstructions of the seabed model, the results show that neural networks with fewer hidden neurons than the number of data points can replicate the original data set, while the Kohonen network can be used for clustering during big geodata reduction. Practical implementations of neural networks capable of creating surface models and reducing bathymetric data are presented.
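
As a rough illustration of the reduction idea described in this abstract, the sketch below trains a small Kohonen self-organizing map over synthetic (x, y, depth) soundings and keeps only the learned prototypes as the reduced data set. It is not the authors' implementation; the grid size, decay schedules, and synthetic data are assumptions made for the example.

```python
# Minimal Kohonen SOM sketch for reducing many soundings to a few prototypes.
# All hyperparameters are illustrative assumptions, not values from the paper.
import numpy as np

def som_reduce(points, grid=(10, 10), epochs=1, lr0=0.5, sigma0=3.0, seed=0):
    """Reduce an N x 3 array of (x, y, depth) points to grid[0]*grid[1] prototypes."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    # Initialise prototype weights with random samples drawn from the data.
    w = points[rng.choice(len(points), rows * cols)].astype(float)
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    n_steps = epochs * len(points)
    for t, idx in enumerate(rng.integers(0, len(points), n_steps)):
        x = points[idx]
        lr = lr0 * (1.0 - t / n_steps)                 # linearly decaying learning rate
        sigma = sigma0 * (1.0 - t / n_steps) + 0.5     # shrinking neighbourhood radius
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))    # best-matching unit
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1) # grid distance to the BMU
        h = np.exp(-d2 / (2.0 * sigma ** 2))           # Gaussian neighbourhood weights
        w += lr * h[:, None] * (x - w)
    return w  # each row is one reduced "representative" sounding

# Usage with synthetic soundings: 10,000 points reduced to 100 prototypes.
pts = np.column_stack([np.random.rand(10_000), np.random.rand(10_000),
                       -20 * np.random.rand(10_000)])
reduced = som_reduce(pts, grid=(10, 10))
print(reduced.shape)  # (100, 3)
```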

Open Access Article: Concepts of a Modular System Architecture for Distributed Robotic Systems
Received: 31 January 2019 / Revised: 7 March 2019 / Accepted: 11 March 2019 / Published: 14 March 2019
Abstract
Modern robots often use more than one processing unit to meet their computational requirements, and they are frequently designed in a modular manner so that they can be extended for future tasks. The use of multiple processing units leads to a distributed system within a single robot. Therefore, the system architecture is even more important than in single-computer robots. The presented concept of a modular and distributed system architecture was designed for robotic systems. The architecture is based on the Operator–Controller Module (OCM). This article describes the adaptation of the distributed OCM for mobile robots, considering the requirements on such robots, including, for example, real-time and safety constraints. The presented architecture splits the system hierarchically into a three-layer structure of controllers and operators. The controllers interact directly with all sensors and actuators within the system; for that reason, they must comply with hard real-time constraints. The reflective operator, however, processes the information of the controllers, which can be done by model-based principles using state machines. The cognitive operator is used to optimize the system. The article also shows the exemplary design of the DAEbot, a self-developed robot, and discusses the experience of applying these concepts to this robot.
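
The three-layer split sketched in the abstract can be illustrated with a deliberately trivial Python layering; the class and method names below are invented for illustration and are not taken from the paper or the DAEbot code.

```python
# Schematic sketch of a three-layer Operator-Controller Module (OCM) split.
# In a real system the controller layer would run under hard real-time
# guarantees on its own processing unit; here everything runs in one process.
from dataclasses import dataclass, field

@dataclass
class Controller:
    """Lowest layer: talks directly to sensors and actuators (hard real time)."""
    def step(self, sensor_value: float) -> float:
        return -0.5 * sensor_value              # trivial proportional command

@dataclass
class ReflectiveOperator:
    """Middle layer: supervises the controller with a small state machine."""
    state: str = "IDLE"
    def update(self, sensor_value: float) -> str:
        if self.state == "IDLE" and abs(sensor_value) > 0.1:
            self.state = "ACTIVE"
        elif self.state == "ACTIVE" and abs(sensor_value) <= 0.1:
            self.state = "IDLE"
        return self.state

@dataclass
class CognitiveOperator:
    """Top layer: non-real-time optimisation, planning, or learning."""
    history: list = field(default_factory=list)
    def optimise(self, sensor_value: float) -> None:
        self.history.append(sensor_value)       # e.g. feed a planner or learner

# One (soft) cycle through the hierarchy.
controller, reflective, cognitive = Controller(), ReflectiveOperator(), CognitiveOperator()
reading = 0.4
print(controller.step(reading), reflective.update(reading))
cognitive.optimise(reading)
```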

Open Access Article: Natural Language Processing in OTF Computing: Challenges and the Need for Interactive Approaches
Received: 22 January 2019 / Revised: 23 February 2019 / Accepted: 3 March 2019 / Published: 6 March 2019
Abstract
The vision of On-the-Fly (OTF) Computing is to compose and provide software services ad hoc, based on requirement descriptions in natural language. Since non-technical users write their software requirements themselves and in unrestricted natural language, deficits such as inaccuracy and incompleteness occur. These deficits are usually addressed by natural language processing methods, which face special challenges in OTF Computing because maximum automation is the goal. In this paper, we present current automatic approaches for resolving inaccuracies and incompleteness in natural language requirement descriptions and elaborate on open challenges. In particular, we discuss the necessity of domain-specific resources and show why, despite far-reaching automation, an intelligent and guided integration of end users into the compensation process is required. In this context, we present our idea of a chatbot that integrates users into the compensation process depending on the given circumstances.
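
A deliberately simple sketch of the kind of guided compensation the abstract argues for: flag vague terms in a natural-language requirement and generate a targeted follow-up question for the user. The vague-term list and question templates below are invented for this example and are not taken from the paper.

```python
# Toy rule-based "clarification" step: detect vague words in a requirement
# and produce the questions a chatbot could ask before service composition.
VAGUE_TERMS = {
    "fast": "How fast, in concrete terms (e.g. maximum response time)?",
    "user-friendly": "Which user group, and what does friendly mean here?",
    "some": "Can you name the exact items you mean?",
}

def clarification_questions(requirement: str) -> list[str]:
    """Return one follow-up question for every vague term found in the text."""
    words = requirement.lower().split()
    return [question for term, question in VAGUE_TERMS.items() if term in words]

print(clarification_questions("The app should be fast and support some formats"))
# -> two questions the chatbot could pose back to the end user
```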

Open Access Article: J48SS: A Novel Decision Tree Approach for the Handling of Sequential and Time Series Data
Received: 11 February 2019 / Revised: 25 February 2019 / Accepted: 27 February 2019 / Published: 5 March 2019
Abstract
Temporal information plays a very important role in many analysis tasks, and can be encoded in at least two different ways. It can be modeled by discrete sequences of events as, for example, in the business intelligence domain, with the aim of tracking the evolution of customer behaviors over time. Alternatively, it can be represented by time series, as in the stock market to characterize price histories. In some analysis tasks, temporal information is complemented by other kinds of data, which may be represented by static attributes, e.g., categorical or numerical ones. This paper presents J48SS, a novel decision tree inducer capable of natively mixing static (i.e., numerical and categorical), sequential, and time series data for classification purposes. The novel algorithm is based on the popular C4.5 decision tree learner, and it relies on the concepts of frequent pattern extraction and time series shapelet generation. The algorithm is evaluated on a text classification task in a real business setting, as well as on a selection of public UCR time series datasets. Results show that it is capable of providing competitive classification performance, while generating highly interpretable models and effectively reducing the data preparation effort.
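
The shapelet notion the abstract relies on can be illustrated independently of J48SS: the minimum distance between a candidate subsequence and any window of a time series becomes an ordinary numeric attribute that a decision tree can threshold on. The sketch below uses a plain Euclidean distance and toy data; it is not the J48SS implementation.

```python
# Shapelet-style feature: smallest distance of a short subsequence to any
# window of a time series. Shapelet and series here are toy examples.
import numpy as np

def shapelet_distance(series: np.ndarray, shapelet: np.ndarray) -> float:
    """Smallest Euclidean distance of `shapelet` to any window of `series`."""
    m = len(shapelet)
    dists = [np.linalg.norm(series[i:i + m] - shapelet)
             for i in range(len(series) - m + 1)]
    return float(min(dists))

# Toy usage: a rising-edge shapelet matched against two series.
shapelet = np.array([0.0, 0.5, 1.0])
flat = np.zeros(20)
rising = np.concatenate([np.zeros(10), np.array([0.0, 0.5, 1.0]), np.ones(7)])
print(shapelet_distance(flat, shapelet), shapelet_distance(rising, shapelet))
# A tree node could then split on, e.g., shapelet_distance(x) <= 0.25.
```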

Open Access Article: Sentiment Analysis of Lithuanian Texts Using Traditional and Deep Learning Approaches
Received: 27 November 2018 / Revised: 21 December 2018 / Accepted: 24 December 2018 / Published: 1 January 2019
Abstract
We describe the sentiment analysis experiments that were performed on the Lithuanian Internet comment dataset using traditional machine learning (Naïve Bayes Multinomial—NBM and Support Vector Machine—SVM) and deep learning (Long Short-Term Memory—LSTM and Convolutional Neural Network—CNN) approaches. The traditional machine learning techniques were used with features based on lexical, morphological, and character information. The deep learning approaches were applied on top of two types of word embeddings (Word2Vec continuous bag-of-words with negative sampling and FastText). Both traditional and deep learning approaches had to solve the positive/negative/neutral sentiment classification task on the balanced and full dataset versions. The best deep learning results (reaching an accuracy of 0.706) were achieved on the full dataset with CNN applied on top of the FastText embeddings, replaced emoticons, and eliminated diacritics. The traditional machine learning approaches demonstrated the best performance (an accuracy of 0.735) on the full dataset with the NBM method, replaced emoticons, restored diacritics, and lemma unigrams as features. Although the traditional machine learning approaches were superior to the deep learning methods, deep learning demonstrated good results when applied to small datasets.
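
For readers unfamiliar with the traditional baseline, the sketch below shows a Multinomial Naive Bayes classifier over unigram counts using scikit-learn. The toy English texts and labels are placeholders invented for the example; the paper used lemma unigrams of Lithuanian Internet comments.

```python
# Unigram bag-of-words -> Multinomial Naive Bayes, mirroring the NBM +
# lemma-unigram setup described in the abstract (toy data for illustration).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = ["great product, love it", "terrible service, never again",
               "arrived on time", "waste of money", "works as expected"]
train_labels = ["positive", "negative", "neutral", "negative", "neutral"]

model = make_pipeline(CountVectorizer(ngram_range=(1, 1)), MultinomialNB())
model.fit(train_texts, train_labels)
print(model.predict(["love the service", "money wasted"]))
```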

Computers EISSN 2073-431X, published by MDPI AG, Basel, Switzerland.