
Table of Contents

Informatics, Volume 6, Issue 1 (March 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Displaying articles 1-12
Open Access Article Improvement in the Efficiency of a Distributed Multi-Label Text Classification Algorithm Using Infrastructure and Task-Related Data
Informatics 2019, 6(1), 12; https://doi.org/10.3390/informatics6010012
Received: 2 January 2019 / Revised: 27 February 2019 / Accepted: 9 March 2019 / Published: 18 March 2019
Viewed by 197 | PDF Full-text (2304 KB) | HTML Full-text | XML Full-text
Abstract
Distributed computing technologies allow a wide variety of tasks that use large amounts of data to be solved. Various paradigms and technologies are already in wide use, but many of them fall short when it comes to optimizing resource usage. This paper presents optimization methods that increase the efficiency of distributed implementations of a text-mining model. The methods utilize information about the text-mining task extracted from the data, together with information about the current state of the distributed environment obtained from the computational nodes, to improve the distribution of the task across the infrastructure. Two optimization solutions are developed and implemented, both based on predicting the expected task duration on the existing infrastructure. The solutions are experimentally evaluated in a scenario where a distributed tree-based multi-label classifier is built from two standard text data collections. Full article
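The scheduling idea summarized in this abstract (predict each task's duration from task data and node state, then place the task accordingly) can be sketched as follows. The linear cost model, node attributes, and greedy placement rule are illustrative assumptions, not the paper's actual model:

```python
# Sketch: assign tasks to the node with the earliest predicted finish time.
# The duration model (linear in data size, inflated by node load) is a
# simplified stand-in for the paper's prediction model.

def predict_duration(task_mb, node):
    # Assumed model: base cost per MB, scaled by the node's current load.
    return task_mb * node["sec_per_mb"] * (1.0 + node["load"])

def schedule(tasks_mb, nodes):
    """Greedily place each task on the node that would finish it soonest."""
    finish = {name: 0.0 for name in nodes}  # accumulated busy time per node
    placement = []
    for mb in tasks_mb:
        best = min(nodes, key=lambda n: finish[n] + predict_duration(mb, nodes[n]))
        finish[best] += predict_duration(mb, nodes[best])
        placement.append(best)
    return placement, finish

nodes = {
    "fast": {"sec_per_mb": 0.5, "load": 0.1},
    "slow": {"sec_per_mb": 2.0, "load": 0.6},
}
placement, finish = schedule([100, 100, 10], nodes)
```

Note how the small third task lands on the slow node once the fast node's queue grows: the predicted finish time, not raw node speed, drives the placement.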

Open Access Article IGR Token-Raw Material and Ingredient Certification of Recipe Based Foods Using Smart Contracts
Informatics 2019, 6(1), 11; https://doi.org/10.3390/informatics6010011
Received: 24 November 2018 / Revised: 4 February 2019 / Accepted: 28 February 2019 / Published: 11 March 2019
Viewed by 309 | PDF Full-text (1534 KB) | HTML Full-text | XML Full-text
Abstract
The use of smart contracts and blockchain tokens to implement a consumer-trustworthy ingredient certification scheme for commingled (i.e., recipe-based) food products is described. The proposed framework allows ingredients that carry any desired property (including social or environmental customer-perceived value) to be certified by any certification authority, at the moment of harvest or extraction, using the IGR Ethereum token. The mechanism transfers tokens containing the internet URL published on the authority's web site from the farmer along the supply chain to the final consumer at each transfer of custody of the ingredient, following the Critical Tracking Event/Key Data Elements (CTE/KDE) philosophy of the Institute of Food Technologists (IFT). This allows the end consumer to easily inspect and be assured of the origin of the ingredient by means of a mobile application. A code implementation of the framework was deployed, tested, and is running as a beta version on the Ethereum live blockchain as the IGR token. The main contribution of the framework is the possibility of ensuring the true origin of any instance or lot of an ingredient within a recipe for the customer, without harming the food processor's legitimate right to protect its recipes and suppliers. Full article
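The chain-of-custody mechanism described in the abstract can be illustrated with a minimal ledger sketch. This is plain Python rather than Solidity, and the class, field names, and hashing scheme are illustrative assumptions, not the IGR token's actual contract:

```python
# Minimal sketch of a chain-of-custody token: each custody transfer appends
# a record, so the end consumer can audit the ingredient's path back to the
# certifying authority. Not the actual IGR smart contract.
import hashlib

class IngredientToken:
    def __init__(self, lot_id, cert_url, owner):
        self.lot_id = lot_id
        self.cert_url = cert_url   # URL published on the certifier's web site
        self.history = [owner]     # custody chain, starting at the first holder

    def transfer(self, new_owner):
        # A Critical Tracking Event: record the change of custody.
        self.history.append(new_owner)

    def provenance_hash(self):
        # Tamper-evident digest of the full custody chain.
        data = "|".join([self.lot_id, self.cert_url] + self.history)
        return hashlib.sha256(data.encode()).hexdigest()

token = IngredientToken("lot-42", "https://certifier.example/lot-42", "farmer")
token.transfer("processor")
token.transfer("retailer")
```

A consumer-facing app would recompute the digest and compare it against the value anchored on-chain to verify that no custody record was altered.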

Open Access Article ETL Best Practices for Data Quality Checks in RIS Databases
Informatics 2019, 6(1), 10; https://doi.org/10.3390/informatics6010010
Received: 25 December 2018 / Revised: 24 February 2019 / Accepted: 27 February 2019 / Published: 5 March 2019
Viewed by 362 | PDF Full-text (2595 KB) | HTML Full-text | XML Full-text
Abstract
The topic of data integration from external data sources or independent IT systems has recently received increasing attention in IT departments as well as at the management level, in particular concerning data integration in federated database systems. An example of the latter are commercial research information systems (RIS), which regularly import, cleanse, transform, and prepare an institution's research information from a variety of databases for analysis. All of these steps must be carried out at an assured level of quality. As several internal and external data sources are loaded for integration into the RIS, ensuring information quality is becoming increasingly challenging for research institutions. Before research information is transferred to a RIS, it must be checked and cleaned up. An important factor for successful data integration is therefore always data quality. The removal of data errors (such as duplicates, inconsistent data, and outdated data) and the harmonization of the data structure are essential tasks of data integration using extract, transform, and load (ETL) processes: data is extracted from the source systems, transformed, and loaded into the RIS. At this point, conflicts between different data sources are detected and resolved, and data quality issues arising during integration are eliminated. Against this background, our paper presents the process of data transformation in the context of RIS, which gives an overview of the quality of research information in an institution's internal and external data sources during its integration into a RIS. In addition, the question of how to control and improve quality issues during the integration process in a RIS is addressed. Full article
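Two of the ETL quality checks named in the abstract, harmonization of inconsistent values and duplicate removal, can be sketched in a few lines. The record layout, key fields, and normalization rules here are illustrative assumptions, not the paper's actual procedures:

```python
# Sketch of two ETL data-quality checks before loading into a RIS:
# harmonize inconsistent field values, then drop duplicate records.

def normalize(record):
    # Transform step: harmonize casing/whitespace so equivalent records match.
    return {k: v.strip().lower() if isinstance(v, str) else v
            for k, v in record.items()}

def deduplicate(records, key_fields):
    """Keep only the first record for each normalized key."""
    seen, clean = set(), []
    for rec in map(normalize, records):
        key = tuple(rec[f] for f in key_fields)
        if key not in seen:
            seen.add(key)
            clean.append(rec)
    return clean

raw = [
    {"title": "Data Quality in RIS ", "year": 2019},
    {"title": "data quality in ris", "year": 2019},  # duplicate after harmonization
    {"title": "ETL Best Practices", "year": 2019},
]
loaded = deduplicate(raw, key_fields=("title", "year"))
```

The point of normalizing before comparing is that the two variant spellings of the first title collapse to one key, so only one copy is loaded.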

Open Access Article Using Malone's Theoretical Model on Gamification for Designing Educational Rubrics
Received: 28 January 2019 / Revised: 19 February 2019 / Accepted: 27 February 2019 / Published: 4 March 2019
Viewed by 483 | PDF Full-text (1636 KB) | HTML Full-text | XML Full-text
Abstract
How could a structured proposal for an evaluation rubric benefit from assessing and including the organizational variables used when one of the first definitions of gamification related to game theory was established by Thomas W. Malone in 1980? By studying the importance and current validity of Malone's corollaries in his article "What Makes Things Fun to Learn?", this work covers the different characteristics of the concepts once used to define the term "gamification." Based on the results of this analysis, we propose different evaluation concepts that are assessed and included in a qualitative proposal for an evaluation rubric, with the ultimate goal of providing a holistic approach to all the different aspects of evaluation for active methodologies in a secondary education environment. Full article

Open Access Article Evaluating Awareness and Perception of Botnet Activity within Consumer Internet-of-Things (IoT) Networks
Received: 30 November 2018 / Revised: 31 January 2019 / Accepted: 11 February 2019 / Published: 18 February 2019
Viewed by 498 | PDF Full-text (1459 KB) | HTML Full-text | XML Full-text
Abstract
The growth of the Internet of Things (IoT), and the demand for low-cost, easy-to-deploy devices, has led to the production of swathes of insecure Internet-connected devices. Many can be exploited and leveraged to perform large-scale attacks on the Internet, such as those carried out by the Mirai botnet. This paper presents a cross-sectional study of how users value and perceive security and privacy in smart devices found within the IoT. It analyzes user requirements for IoT devices and the importance placed upon security and privacy. An experimental setup was used to assess users' ability to detect threats, in the context of technical knowledge and experience. The study demonstrated that, without any clear signs that an IoT device was infected, it was very difficult for consumers to detect and be situationally aware of threats exploiting home networks. It also demonstrated that, without adequate presentation of data to users, there is no clear correlation between the level of technical knowledge and the ability to detect infected devices. Full article
(This article belongs to the Special Issue Human Factors in Security and Privacy in IoT (HFSP-IoT))

Open Access Article What Is This Sensor and Does This App Need Access to It?
Received: 30 November 2018 / Revised: 9 January 2019 / Accepted: 18 January 2019 / Published: 24 January 2019
Viewed by 638 | PDF Full-text (1244 KB) | HTML Full-text | XML Full-text
Abstract
Mobile sensors have already proven helpful in different aspects of people's everyday lives, such as fitness, gaming, and navigation. However, illegitimate access to these sensors provides a malicious program with an exploit path. While users benefit from richer and more personalized apps, the growing number of sensors introduces new security and privacy risks to end users and makes the task of sensor management more complex. In this paper, first, we discuss the issues around the security and privacy of mobile sensors. We investigate the available sensors on mainstream mobile devices and study the permission policies that Android, iOS, and mobile web browsers offer for them. Second, we report the results of two workshops that we organized on mobile sensor security. In these workshops, the participants were introduced to mobile sensors by working with sensor-enabled apps. We evaluated the risk levels perceived by the participants for these sensors after they understood the sensors' functionalities. The results showed that getting to know sensors by working with sensor-enabled apps does not immediately improve users' inference of the actual security risks of these sensors. However, other factors, such as prior general knowledge about these sensors and their risks, had a strong impact on the users' perception. We also taught the participants about the ways in which they could audit their apps and permissions. Our findings showed that when mobile users were provided with reasonable choices and intuitive teaching, they could easily direct themselves to improve their security and privacy. Finally, we provide recommendations for educators, app developers, and mobile users to contribute toward awareness and education on this topic. Full article
(This article belongs to the Special Issue Human Factors in Security and Privacy in IoT (HFSP-IoT))

Open Access Article Hybrid Design Tools—Image Quality Assessment of a Digitally Augmented Blackboard Integrated System
Received: 1 November 2018 / Revised: 14 January 2019 / Accepted: 19 January 2019 / Published: 21 January 2019
Viewed by 620 | PDF Full-text (9324 KB) | HTML Full-text | XML Full-text
Abstract
In the last two decades, Interactive White Boards (IWBs) have been widely available as a pedagogic tool. We consider the usability of these boards debatable in multiple regards for teaching disciplines where complex drawings are needed. In a previous study, we proposed an alternative to IWBs: a blackboard augmented with a minimum of necessary digital elements. The current study continues our previous research on hybrid design tools, analyzing the limitations of the developed hybrid system with regard to the perceived quality of images that are repeatedly captured, annotated, and reprojected onto the board. We validated the hybrid system by evaluating the quality of the projected and reprojected images on a blackboard, using both objective measurements and subjective human perception in extensive and realistic case studies. Based on the results achieved in the current research, we conclude that the proposed hybrid system provides good-quality support for teaching disciplines that require complex drawings and board interaction. Full article
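The abstract mentions objective image-quality measurements without naming them; one common objective measure for comparing a reference image against its degraded (here, reprojected and recaptured) version is PSNR. Whether this paper uses PSNR specifically is an assumption; the sketch below only illustrates the kind of measurement involved:

```python
# Sketch of PSNR (peak signal-to-noise ratio), a common objective measure
# for comparing an original image against a degraded capture of it.
import math

def psnr(original, degraded, peak=255.0):
    """PSNR in dB between two equal-length 8-bit pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(original, degraded)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

ref = [120, 130, 140, 150]   # reference pixel values
cap = [121, 129, 142, 149]   # recaptured pixel values
quality = psnr(ref, cap)
```

Higher PSNR means less degradation; values above roughly 40 dB are generally considered hard to distinguish visually.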

Open Access Article Statistical Deadband: A Novel Approach for Event-Based Data Reporting
Received: 5 December 2018 / Revised: 11 January 2019 / Accepted: 15 January 2019 / Published: 18 January 2019
Viewed by 662 | PDF Full-text (1384 KB) | HTML Full-text | XML Full-text
Abstract
Deadband algorithms are implemented inside industrial gateways to reduce the volume of data sent across different networks. By tuning the deadband sampling resolution with a preset interval Δ, it is possible to balance the traffic rates of networks connected by industrial SCADA gateways. This work describes the design and implementation of two original deadband algorithms based on statistical concepts derived by John Bollinger for his financial technical analysis. Unlike non-statistical algorithms, the proposed statistical algorithms do not require the setup of a preset interval. All algorithms were evaluated and compared by computing their effectiveness and fidelity over a public collection of random pseudo-periodic signals. The overall performance measured in the simulations showed better results, in terms of effectiveness and fidelity, for the statistical algorithms, although their measured use of computing resources was less efficient than that of the non-statistical deadband algorithms. Full article
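The contrast drawn in the abstract, a preset-interval deadband versus a Bollinger-style statistical one, can be sketched as follows. The window size, band width k, and reporting rules are illustrative assumptions, not the paper's algorithms:

```python
# Sketch: classic deadband (report only when the value moves more than a
# preset interval Δ from the last reported value) versus a Bollinger-style
# statistical deadband (report when the value leaves a rolling
# mean ± k·stdev band, so no preset Δ is needed).
import statistics

def classic_deadband(samples, delta):
    sent = [samples[0]]
    for x in samples[1:]:
        if abs(x - sent[-1]) > delta:   # exceeds the preset interval Δ
            sent.append(x)
    return sent

def statistical_deadband(samples, window=4, k=2.0):
    sent, recent = [samples[0]], list(samples[:1])
    for x in samples[1:]:
        mean = statistics.mean(recent)
        band = k * (statistics.pstdev(recent) or 1e-9)  # avoid zero-width band
        if abs(x - mean) > band:        # outside mean ± k·stdev: report it
            sent.append(x)
        recent = (recent + [x])[-window:]
    return sent

signal = [10.0, 10.2, 10.1, 10.3, 15.0, 15.1]
```

The statistical variant adapts its threshold to the recent variability of the signal, which is why it can dispense with a hand-tuned Δ.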

Open Access Article Unstructured Text in EMR Improves Prediction of Death after Surgery in Children
Received: 28 October 2018 / Revised: 3 January 2019 / Accepted: 5 January 2019 / Published: 10 January 2019
Viewed by 805 | PDF Full-text (1681 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Text fields in electronic medical records (EMR) contain information on important factors that influence health outcomes; however, they are underutilized in clinical decision making due to their unstructured nature. We analyzed 6497 inpatient surgical cases with 719,308 free-text notes from the Le Bonheur Children's Hospital EMR. We used a text-mining approach on preoperative notes to obtain a text-based risk score for predicting death within 30 days of surgery. In addition, we evaluated the performance of a hybrid model that included the text-based risk score along with structured data on clinical risk factors. The C-statistic of a logistic regression model with five-fold cross-validation significantly improved from 0.76 to 0.92 when text-based risk scores were included in addition to structured data. We conclude that preoperative free-text notes in EMR contain significant information that can predict adverse surgery outcomes. Full article
(This article belongs to the Special Issue Data-Driven Healthcare Research)
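One simple way to derive a text-based risk score of the kind the abstract describes is per-word smoothed log-odds of the adverse outcome, summed over a note's distinct words. This is an illustrative stand-in; the paper's actual text-mining model is not specified in the abstract:

```python
# Sketch: a text-based risk score from labeled notes via smoothed per-word
# log-odds of the outcome. Toy data; not the paper's model or dataset.
import math
from collections import Counter

def train_word_scores(notes, labels, alpha=1.0):
    pos, neg = Counter(), Counter()
    for text, y in zip(notes, labels):
        (pos if y else neg).update(set(text.lower().split()))
    vocab = set(pos) | set(neg)
    n_pos, n_neg = sum(labels), len(labels) - sum(labels)
    # Laplace-smoothed log-odds that a word appears in a positive-outcome note.
    return {w: math.log((pos[w] + alpha) / (n_pos + 2 * alpha))
               - math.log((neg[w] + alpha) / (n_neg + 2 * alpha))
            for w in vocab}

def risk_score(note, scores):
    # Sum the per-word scores; unseen words contribute nothing.
    return sum(scores.get(w, 0.0) for w in set(note.lower().split()))

notes = ["stable vitals routine", "sepsis unstable vitals", "routine recovery"]
labels = [0, 1, 0]
scores = train_word_scores(notes, labels)
```

In a hybrid model such as the one evaluated here, this scalar score would be fed into a logistic regression alongside the structured clinical risk factors.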

Open Access Editorial Acknowledgement to Reviewers of Informatics in 2018
Published: 9 January 2019
Viewed by 574 | PDF Full-text (123 KB) | HTML Full-text | XML Full-text
Abstract
Rigorous peer review is the cornerstone of high-quality academic publishing [...] Full article
Open Access Article Bringing the Illusion of Reality Inside Museums—A Methodological Proposal for an Advanced Museology Using Holographic Showcases
Received: 28 October 2018 / Revised: 14 December 2018 / Accepted: 14 December 2018 / Published: 4 January 2019
Viewed by 784 | PDF Full-text (12836 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
The basic idea of a hologram is an apparition of something that does not exist but appears as if it were right in front of our eyes. These illusion techniques were invented long ago: the philosopher and alchemist Giovanni Battista della Porta devised an effect that was later developed and brought to fame by Prof. J. H. Pepper (1821–1900) and applied in theatrical performances. The innovation nowadays lies in the technology adopted to produce them. Taking advantage of the available digital technologies, the challenge we discuss is using holograms in the museum context, inside showcases, to realize a new form of scenography and dramaturgy around the exhibited objects. Case studies are presented, with a detailed analysis of the EU project CEMEC (Connecting Early Medieval European Collections), in which holographic showcases have been designed, built, and tested in EU museums. In this case, the coexistence in the same space of the real artifact and the virtual contents, together with the interior setup of the showcase, its dynamic lighting system, the script, and the sound, converge to create an expressive unity. The reconstruction of sensory and symbolic dimensions that lie 'beyond' any museum object can place the visitor in the middle of a lively and powerful experience with such technology, and represents an advancement in the museological sector. User experience results and a list of best practices are presented in the second part of the paper, drawn from the tests and research activities conducted over the three years of the project. Full article
(This article belongs to the Section Digital Humanities)

Open Access Article Improving the Classification Efficiency of an ANN Utilizing a New Training Methodology
Received: 31 October 2018 / Revised: 10 December 2018 / Accepted: 24 December 2018 / Published: 28 December 2018
Cited by 1 | Viewed by 654 | PDF Full-text (871 KB) | HTML Full-text | XML Full-text
Abstract
In this work, a new approach for training artificial neural networks is presented that utilises techniques for solving constrained optimisation problems. More specifically, this study converts the training of a neural network into a constrained optimisation problem. Furthermore, we propose a new neural network training algorithm based on the L-BFGS-B method. Our numerical experiments illustrate the classification efficiency of the proposed algorithm and methodology, leading to more efficient, stable, and robust predictive models. Full article
(This article belongs to the Special Issue Advances in Randomized Neural Networks)
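The idea of bound-constrained network training can be illustrated with a toy sketch: a single sigmoid neuron trained by projected gradient descent, clipping the weights into a box after each step. L-BFGS-B handles such bounds natively; this plain projected-gradient loop, the dataset, and all parameters are simplified stand-ins, not the paper's method:

```python
# Sketch: train one sigmoid neuron under box constraints |w_j| <= bound,
# using projected stochastic gradient descent on the log-loss.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, bound=5.0, lr=0.5, epochs=2000):
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)))
            # Gradient step on log-loss, then project back into the box.
            w = [min(bound, max(-bound, wj - lr * (p - yi) * xj))
                 for wj, xj in zip(w, xi)]
    return w

# Tiny AND-like dataset; the last feature is a constant bias input.
X = [[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]]
y = [0, 0, 0, 1]
w = train(X, y)
preds = [round(sigmoid(sum(wj * xj for wj, xj in zip(w, xi)))) for xi in X]
```

The projection step is what enforces the constraints: the bias weight saturates at the box boundary while the remaining weights settle at an interior equilibrium, yet the neuron still classifies all four points correctly.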

Informatics EISSN 2227-9709, published by MDPI AG, Basel, Switzerland.