Selected Papers from the 24th International Conference on Information and Software Technologies (ICIST 2018)

A special issue of Computers (ISSN 2073-431X).

Deadline for manuscript submissions: closed (31 January 2019) | Viewed by 83287

Special Issue Editors


Guest Editor
Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
Interests: disease diagnostics using artificial intelligence methods

Guest Editor
Faculty of Applied Mathematics, Silesian University of Technology, Kaszubska 23, 44-100 Gliwice, Poland
Interests: computational intelligence; neural networks; image processing; expert systems

Special Issue Information

Dear Colleagues,

The ICIST Conference is hosted by the biggest technical university in the Baltic States—Kaunas University of Technology (Lithuania). ICIST 2018 aims to bring together researchers, engineers, developers, and practitioners from academia and industry working in all major and interdisciplinary areas of Information Systems, Business Intelligence, Software Engineering, and Information Technology Applications. The conference features original research and application papers on the theory, design, and implementation of modern information systems, software systems, and IT applications. In 2018, the conference will be held in Vilnius on 4–6 October.

Authors of selected papers presented at the conference are invited to submit extended versions to this Special Issue of the journal Computers. Submissions should be extended to the length of regular research or review articles, with at least 50% new results. All submitted papers will undergo our standard peer-review procedure. Accepted papers will be published in open access format in Computers and collected together on the Special Issue website. There are no page charges for this journal.

Please prepare and format your paper according to the Instructions for Authors. Use the LaTeX or Microsoft Word template file of the journal (both are available from the Instructions for Authors page). Manuscripts should be submitted online via our susy.mdpi.com editorial system.

Prof. Robertas Damaševičius
Dr. Marcin Woźniak
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computers is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)


Research

31 pages, 3225 KiB  
Article
Multilingual Ranking of Wikipedia Articles with Quality and Popularity Assessment in Different Topics
by Włodzimierz Lewoniewski, Krzysztof Węcel and Witold Abramowicz
Computers 2019, 8(3), 60; https://doi.org/10.3390/computers8030060 - 14 Aug 2019
Cited by 25 | Viewed by 32337
Abstract
On Wikipedia, articles about various topics can be created and edited independently in each language version. Therefore, the quality of information about the same topic depends on the language. Any interested user can improve an article and that improvement may depend on the popularity of the article. The goal of this study is to show what topics are best represented in different language versions of Wikipedia using results of quality assessment for over 39 million articles in 55 languages. In this paper, we also analyze how popular selected topics are among readers and authors in various languages. We used two approaches to assign articles to various topics. First, we selected 27 main multilingual categories and analyzed all their connections with sub-categories based on information extracted from over 10 million categories in 55 language versions. To classify the articles to one of the 27 main categories, we took into account over 400 million links from articles to over 10 million categories and over 26 million links between categories. In the second approach, we used data from DBpedia and Wikidata. We also showed how the results of the study can be used to build local and global rankings of the Wikipedia content. Full article
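The first approach in the abstract above, assigning an article to one of the main categories by following links from articles to categories and between categories, can be sketched as a breadth-first search over a category graph. All names and links below are invented toy data for illustration; the paper's actual pipeline operates on hundreds of millions of real links:

```python
from collections import deque

# Toy category graph: child category -> parent categories (invented names)
category_parents = {
    "Polish_physicists": ["Physicists", "People_of_Poland"],
    "Physicists": ["Science"],
    "People_of_Poland": ["People"],
}

# Top-level categories we classify articles into
main_categories = {"Science", "People"}

def main_categories_for(article_categories):
    """BFS upward from an article's categories; return the nearest
    main categories (fewest category-to-category hops)."""
    seen = set(article_categories)
    frontier = deque((c, 0) for c in article_categories)
    best_depth, hits = None, set()
    while frontier:
        cat, depth = frontier.popleft()
        if best_depth is not None and depth > best_depth:
            break                      # all nearest hits already found
        if cat in main_categories:
            best_depth, hits = depth, hits | {cat}
            continue
        for parent in category_parents.get(cat, []):
            if parent not in seen:
                seen.add(parent)
                frontier.append((parent, depth + 1))
    return hits

print(sorted(main_categories_for(["Polish_physicists"])))  # -> ['People', 'Science']
```

An article in both branches at equal distance is reported under both main categories; a real system would add a tie-breaking rule on top of this.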

13 pages, 768 KiB  
Article
Homogenous Granulation and Its Epsilon Variant
by Krzysztof Ropiak and Piotr Artiemjew
Computers 2019, 8(2), 36; https://doi.org/10.3390/computers8020036 - 10 May 2019
Cited by 3 | Viewed by 5586
Abstract
In the era of Big Data, there is still a place for techniques that reduce data size while maintaining its internal knowledge. This problem is the main subject of research of a family of granulation techniques proposed by Polkowski. In our recent works, we have developed new, effective, and simple techniques for decision approximation: homogenous granulation and epsilon homogenous granulation. The real problem in this family of methods was the choice of an effective approximation parameter for arbitrary datasets. It was resolved by the homogenous techniques: there is no need to estimate optimal approximation parameters, because they are set dynamically according to the data's internal indiscernibility level. This work extends the paper presented at the ICIST 2018 conference. We present results for homogenous and epsilon homogenous granulation and compare their effectiveness. Full article

12 pages, 4470 KiB  
Article
The Application of Ant Colony Algorithms to Improving the Operation of Traction Rectifier Transformers
by Barbara Kulesz, Andrzej Sikora and Adam Zielonka
Computers 2019, 8(2), 28; https://doi.org/10.3390/computers8020028 - 28 Mar 2019
Cited by 2 | Viewed by 5466
Abstract
In this paper, we discuss a technical issue occurring in electric traction. Tram traction may use DC voltage; this is obtained by rectifying AC voltage supplied by the power grid. In the simplest design—one which is commonly used—only uncontrolled diode rectifiers are used. The rectified voltage is not smooth; it always contains a pulsating (AC) component. The amount of pulsation varies and depends, among other factors, on the design of the transformer-rectifier set. In the 12-pulse system, we use a three-winding transformer consisting of one primary winding and two secondary windings: one delta-connected and the other star-connected. The unbalance of the secondary windings is an extra factor increasing the pulsation of the DC voltage. To equalize the secondary-side voltages, a tap changer may be used. The setting of the tap changer is the question resolved in this paper; it is optimized by application of the ACO (ant colony optimization) algorithm. We have analyzed different supply voltage variants, in particular distorted voltage containing 5th and 7th harmonics. The results of applying ant colony optimization are described in this paper. Full article
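The core idea in the abstract above, using ant colony optimization to pick the discrete tap-changer setting that minimizes voltage pulsation, can be sketched as follows. The `ripple` function is a made-up stand-in for the transformer-rectifier model evaluated in the paper, and all parameter values are illustrative:

```python
import random

random.seed(0)

# Hypothetical pulsation level for each discrete tap position; in the
# paper this value comes from a model of the transformer-rectifier set
# under (possibly distorted) supply voltage.
def ripple(tap):
    return (tap - 7) ** 2 + 1.0        # optimum, unknown to the ants, at tap 7

TAPS = list(range(16))                 # candidate tap-changer positions
pheromone = {t: 1.0 for t in TAPS}
EVAPORATION, ANTS, ITERATIONS = 0.1, 10, 50

best_tap, best_ripple = None, float("inf")
for _ in range(ITERATIONS):
    # each ant picks a tap with probability proportional to its pheromone
    picks = random.choices(TAPS, weights=[pheromone[t] for t in TAPS], k=ANTS)
    for t in TAPS:
        pheromone[t] *= 1.0 - EVAPORATION      # evaporation
    for t in picks:
        pheromone[t] += 1.0 / ripple(t)        # reinforce low-pulsation taps
        if ripple(t) < best_ripple:
            best_tap, best_ripple = t, ripple(t)

print(best_tap)  # -> 7
```

The positive feedback between pheromone and selection probability concentrates the search on low-ripple settings while evaporation keeps poor taps from retaining influence.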

16 pages, 3890 KiB  
Article
The Use of an Artificial Neural Network to Process Hydrographic Big Data during Surface Modeling
by Marta Wlodarczyk-Sielicka and Jacek Lubczonek
Computers 2019, 8(1), 26; https://doi.org/10.3390/computers8010026 - 14 Mar 2019
Cited by 9 | Viewed by 6014
Abstract
At the present time, spatial data are often acquired using varied remote sensing sensors and systems, which produce big data sets. One significant product from these data is a digital model of geographical surfaces, including the surface of the sea floor. To improve data processing, presentation, and management, it is often indispensable to reduce the number of data points. This paper presents research regarding the application of artificial neural networks to bathymetric data reductions. This research considers results from radial networks and self-organizing Kohonen networks. During reconstructions of the seabed model, the results show that neural networks with fewer hidden neurons than the number of data points can replicate the original data set, while the Kohonen network can be used for clustering during big geodata reduction. Practical implementations of neural networks capable of creating surface models and reducing bathymetric data are presented. Full article
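A minimal sketch of the Kohonen-network reduction idea from the abstract above: a small one-dimensional self-organizing map clusters toy (x, y, depth) soundings into a handful of representative nodes. The data, map size, and learning schedule are illustrative, not the paper's:

```python
import math
import random

random.seed(1)

# Toy "soundings": (x, y, depth) points; real bathymetric surveys
# produce far denser point clouds.
points = [(random.random(), random.random(), 10 + random.random())
          for _ in range(200)]

K = 8                                    # prototypes = size of reduced set
protos = [list(p) for p in random.sample(points, K)]

def dist2(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)          # decaying learning rate
    for p in points:
        # best-matching unit (closest prototype)
        bmu = min(range(K), key=lambda k: dist2(protos[k], p))
        # move the BMU and its grid neighbours toward the sample
        for k in range(K):
            h = math.exp(-abs(k - bmu))  # neighbourhood on the 1-D grid
            for d in range(3):
                protos[k][d] += lr * h * (p[d] - protos[k][d])

print(len(protos))  # 200 soundings reduced to 8 representative nodes
```

Each prototype ends up summarizing a neighbourhood of soundings, which is the reduction step a surface model can then be built from.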

16 pages, 3445 KiB  
Article
Concepts of a Modular System Architecture for Distributed Robotic Systems
by Uwe Jahn, Carsten Wolff and Peter Schulz
Computers 2019, 8(1), 25; https://doi.org/10.3390/computers8010025 - 14 Mar 2019
Cited by 12 | Viewed by 8613
Abstract
Modern robots often use more than one processing unit to meet their requirements. Robots are frequently designed in a modular manner so that they can be extended for future tasks. The use of multiple processing units leads to a distributed system within a single robot, so the system architecture is even more important than in single-computer robots. The presented concept of a modular and distributed system architecture was designed for robotic systems. The architecture is based on the Operator–Controller Module (OCM). This article describes the adaptation of the distributed OCM for mobile robots, considering the requirements on such robots, including, for example, real-time and safety constraints. The presented architecture splits the system hierarchically into a three-layer structure of controllers and operators. The controllers interact directly with all sensors and actuators within the system; for that reason, hard real-time constraints must be met. The reflective operator processes the information from the controllers, which can be done by model-based principles using state machines. The cognitive operator is used to optimize the system. The article also shows the exemplary design of the DAEbot, a self-developed robot, and discusses the experience of applying these concepts to this robot. Full article

14 pages, 3090 KiB  
Article
Natural Language Processing in OTF Computing: Challenges and the Need for Interactive Approaches
by Frederik S. Bäumer, Joschka Kersting and Michaela Geierhos
Computers 2019, 8(1), 22; https://doi.org/10.3390/computers8010022 - 6 Mar 2019
Cited by 3 | Viewed by 6684
Abstract
The vision of On-the-Fly (OTF) Computing is to compose and provide software services ad hoc, based on requirement descriptions in natural language. Since non-technical users write their software requirements themselves and in unrestricted natural language, deficits such as inaccuracy and incompleteness occur. These deficits are usually addressed by natural language processing methods, which face special challenges in OTF Computing because maximum automation is the goal. In this paper, we present current automatic approaches for resolving inaccuracy and incompleteness in natural language requirement descriptions and elaborate on open challenges. In particular, we discuss the necessity of domain-specific resources and show why, despite far-reaching automation, an intelligent and guided integration of end users into the compensation process is required. In this context, we present our idea of a chatbot that integrates users into the compensation process depending on the given circumstances. Full article

28 pages, 493 KiB  
Article
J48SS: A Novel Decision Tree Approach for the Handling of Sequential and Time Series Data
by Andrea Brunello, Enrico Marzano, Angelo Montanari and Guido Sciavicco
Computers 2019, 8(1), 21; https://doi.org/10.3390/computers8010021 - 5 Mar 2019
Cited by 19 | Viewed by 7476
Abstract
Temporal information plays a very important role in many analysis tasks, and can be encoded in at least two different ways. It can be modeled by discrete sequences of events as, for example, in the business intelligence domain, with the aim of tracking the evolution of customer behaviors over time. Alternatively, it can be represented by time series, as in the stock market to characterize price histories. In some analysis tasks, temporal information is complemented by other kinds of data, which may be represented by static attributes, e.g., categorical or numerical ones. This paper presents J48SS, a novel decision tree inducer capable of natively mixing static (i.e., numerical and categorical), sequential, and time series data for classification purposes. The novel algorithm is based on the popular C4.5 decision tree learner, and it relies on the concepts of frequent pattern extraction and time series shapelet generation. The algorithm is evaluated on a text classification task in a real business setting, as well as on a selection of public UCR time series datasets. Results show that it is capable of providing competitive classification performances, while generating highly interpretable models and effectively reducing the data preparation effort. Full article
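The time series side of J48SS relies on shapelets: short subsequences whose minimum distance to a series becomes a numeric attribute the tree can threshold on. A minimal sketch of that distance computation (not the paper's full shapelet-generation procedure, which evolves candidate shapelets during tree induction):

```python
import math

def shapelet_distance(series, shapelet):
    """Minimum Euclidean distance between a shapelet and any equal-length
    window of the series; a decision tree can split on this value."""
    m = len(shapelet)
    best = float("inf")
    for i in range(len(series) - m + 1):
        d = math.sqrt(sum((series[i + j] - shapelet[j]) ** 2
                          for j in range(m)))
        best = min(best, d)
    return best

# A series that contains the pattern [1, 3, 1] exactly scores 0:
print(shapelet_distance([0, 0, 1, 3, 1, 0], [1, 3, 1]))  # -> 0.0
```

A split such as "shapelet_distance(x, s) < t" then separates series that contain a shape close to s from those that do not, which is what makes the resulting models interpretable.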

16 pages, 4553 KiB  
Article
Sentiment Analysis of Lithuanian Texts Using Traditional and Deep Learning Approaches
by Jurgita Kapočiūtė-Dzikienė, Robertas Damaševičius and Marcin Woźniak
Computers 2019, 8(1), 4; https://doi.org/10.3390/computers8010004 - 1 Jan 2019
Cited by 56 | Viewed by 9603
Abstract
We describe the sentiment analysis experiments that were performed on the Lithuanian Internet comment dataset using traditional machine learning (Naïve Bayes Multinomial—NBM and Support Vector Machine—SVM) and deep learning (Long Short-Term Memory—LSTM and Convolutional Neural Network—CNN) approaches. The traditional machine learning techniques were used with features based on lexical, morphological, and character information. The deep learning approaches were applied on top of two types of word embeddings (Word2Vec continuous bag-of-words with negative sampling and FastText). Both traditional and deep learning approaches had to solve the positive/negative/neutral sentiment classification task on the balanced and full dataset versions. The best deep learning result (an accuracy of 0.706) was achieved on the full dataset with a CNN applied on top of the FastText embeddings, with emoticons replaced and diacritics eliminated. The traditional machine learning approaches demonstrated the best performance (an accuracy of 0.735) on the full dataset with the NBM method, with emoticons replaced, diacritics restored, and lemma unigrams as features. Although the traditional machine learning approaches were superior to the deep learning methods, deep learning demonstrated good results when applied to small datasets. Full article
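The strongest traditional baseline in the abstract above is Naïve Bayes Multinomial over lemma unigrams. A self-contained toy sketch of that classifier with Laplace smoothing follows; the four Lithuanian "comments" are invented and the real dataset is far larger and three-class:

```python
import math
from collections import Counter, defaultdict

# Toy training data: (lemma unigrams, label); invented examples
# ("geras" = good, "blogas" = bad, "labai" = very, "filmas" = film).
train = [
    (["geras", "filmas"], "positive"),
    (["labai", "geras"], "positive"),
    (["blogas", "filmas"], "negative"),
    (["labai", "blogas"], "negative"),
]

class_docs = Counter(label for _, label in train)
word_counts = defaultdict(Counter)
for tokens, label in train:
    word_counts[label].update(tokens)
vocab = {w for tokens, _ in train for w in tokens}

def predict(tokens):
    scores = {}
    for label in class_docs:
        # log prior + Laplace-smoothed log likelihoods (alpha = 1)
        score = math.log(class_docs[label] / len(train))
        total = sum(word_counts[label].values())
        for w in tokens:
            score += math.log((word_counts[label][w] + 1)
                              / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(predict(["geras"]))  # -> positive
```

With log probabilities the per-word likelihoods simply add up, which is why NBM scales easily to the unigram feature sets used in the paper.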
