
Table of Contents

Informatics, Volume 5, Issue 3 (September 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Displaying articles 1-9
Open Access Article Self-Adaptive Multi-Sensor Activity Recognition Systems Based on Gaussian Mixture Models
Informatics 2018, 5(3), 38; https://doi.org/10.3390/informatics5030038
Received: 8 June 2018 / Revised: 13 September 2018 / Accepted: 14 September 2018 / Published: 19 September 2018
Viewed by 444 | PDF Full-text (564 KB) | HTML Full-text | XML Full-text
Abstract
Personal wearables such as smartphones or smartwatches are increasingly utilized in everyday life. Frequently, activity recognition is performed on these devices to estimate the current user status and trigger automated actions according to the user’s needs. In this article, we focus on the creation of a self-adaptive activity recognition system, based on inertial measurement units (IMUs), that includes new sensors during runtime. Starting with a classifier based on Gaussian mixture models (GMMs), the density model is adapted to new sensor data fully autonomously by exploiting the marginalization property of normal distributions. To create a classifier from that model, label inference is performed, based either on the initial classifier or on the training data. For evaluation, we used more than 10 h of annotated activity data from the publicly available PAMAP2 benchmark dataset. Using these data, we showed the feasibility of our approach and performed 9720 experiments to obtain robust numbers. One approach performed reasonably well, improving the system on average with an increase in F-score of 0.0053, while the other shows clear drawbacks due to a high loss of information during label inference. Furthermore, a comparison with state-of-the-art techniques shows the need for further experiments in this area.
(This article belongs to the Special Issue Sensor-Based Activity Recognition and Interaction)
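The marginalization step mentioned in the abstract can be made concrete in a few lines: a multivariate normal restricted to a subset of its dimensions is again normal, with the corresponding sub-vector of the mean and sub-block of the covariance. The sketch below is illustrative only; the function and variable names are not taken from the paper, and applying this per mixture component of a GMM is our assumption.

```python
import numpy as np

def marginalize_gaussian(mean, cov, keep):
    """Marginalize a multivariate normal onto the dimensions in `keep`.

    For normal distributions, dropping dimensions amounts to selecting
    the matching entries of the mean vector and the matching sub-block
    of the covariance matrix; no re-fitting is required.
    """
    keep = np.asarray(keep)
    return mean[keep], cov[np.ix_(keep, keep)]

# Toy example: a density over three sensor channels reduced to channels
# 0 and 2, e.g. to relate a model to a changed sensor configuration.
mean = np.array([0.0, 1.0, 2.0])
cov = np.array([[1.0, 0.2, 0.1],
                [0.2, 2.0, 0.3],
                [0.1, 0.3, 1.5]])
m, c = marginalize_gaussian(mean, cov, [0, 2])
```

For a GMM, the same selection would be applied to every component's mean and covariance while the mixture weights stay unchanged.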

Open Access Communication The STELAR ICU: Leveraging Electronic Health Record Data to Foster Research and Optimize Patient Care
Informatics 2018, 5(3), 37; https://doi.org/10.3390/informatics5030037
Received: 31 July 2018 / Revised: 31 August 2018 / Accepted: 6 September 2018 / Published: 7 September 2018
Viewed by 620 | PDF Full-text (190 KB) | HTML Full-text | XML Full-text
Abstract
Electronic health records (EHR) combined with robust data collection systems can be used to simultaneously drive research and performance improvement initiatives. Our Smart, Transformative, EHR-based Approaches to Revolutionizing the Intensive Care Unit (STELAR ICU) initiative consists of a framework of five best practices that make optimal use of objective data to guide clinicians caring for the sickest patients in our quaternary center. Our strategy has relied on an accessible data infrastructure; standardizing without protocolizing care; using technology to increase patient contact and time spent at the bedside; continuously re-evaluating performance in real time; and acknowledging uncertainty by using electronic data to provide probabilistic weight to clinical decision-making. These strategies blur the lines between research and quality improvement, with the aim of achieving truly stellar patient outcomes.
(This article belongs to the Special Issue Data-Driven Healthcare Research)
Open Access Article An Empirical Study on Importance of Modeling Parameters and Trading Volume-Based Features in Daily Stock Trading Using Neural Networks
Informatics 2018, 5(3), 36; https://doi.org/10.3390/informatics5030036
Received: 20 June 2018 / Revised: 2 August 2018 / Accepted: 14 August 2018 / Published: 17 August 2018
Viewed by 676 | PDF Full-text (4476 KB) | HTML Full-text | XML Full-text
Abstract
There have been many machine learning-based studies to forecast stock price trends. These studies attempted to extract input features mostly from price information, with little focus on trading volume information. In addition, the modeling parameters that specify a learning problem have not been intensively investigated. We herein develop an improved method that addresses these limitations. Specifically, we generated input variables by considering both price and volume information with equal weight. We also defined three modeling parameters: the input window size, the target window size, and the profit threshold. These specify the input and target variables, between which the underlying functions are learned by multilayer perceptrons and support vector machines. We tested our approach over six stocks and 15 years and compared it with the expected performance over all considered parameter specifications. Our approach dramatically improved the prediction accuracy over the expected performance. In addition, our approach proved consistently more profitable than both the expected performance and the buy-and-hold strategy. On the other hand, the performance degraded when the input variables generated from trading volume were excluded from learning. All these results validate the importance of volume information and of the modeling parameters in stock trading prediction.
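The roles of the three modeling parameters can be illustrated with a small sketch of how input and target variables might be generated from price and volume series. The names, the use of simple returns, and the binary labeling rule are our assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def make_examples(price, volume, in_win, tgt_win, threshold):
    """Build (input, label) pairs from price and volume series.

    Each input concatenates the last `in_win` price returns and the last
    `in_win` volume changes (both given equal weight); the label is 1 if
    the price gain over the next `tgt_win` steps exceeds `threshold`.
    """
    p_ret = np.diff(price) / price[:-1]
    v_ret = np.diff(volume) / volume[:-1]
    X, y = [], []
    for t in range(in_win, len(price) - tgt_win):
        X.append(np.concatenate([p_ret[t - in_win:t], v_ret[t - in_win:t]]))
        gain = (price[t + tgt_win] - price[t]) / price[t]
        y.append(1 if gain > threshold else 0)
    return np.array(X), np.array(y)

# Toy series: 2-step input window, 1-step target window, 5% profit threshold.
price = np.array([1.0, 1.1, 1.2, 1.3, 1.0, 1.5])
volume = np.array([100.0, 110.0, 90.0, 120.0, 80.0, 130.0])
X, y = make_examples(price, volume, in_win=2, tgt_win=1, threshold=0.05)
```

The resulting pairs would then be fed to a classifier such as a multilayer perceptron or a support vector machine.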

Open Access Article Exploiting Past Users’ Interests and Predictions in an Active Learning Method for Dealing with Cold Start in Recommender Systems
Informatics 2018, 5(3), 35; https://doi.org/10.3390/informatics5030035
Received: 29 March 2018 / Revised: 2 August 2018 / Accepted: 6 August 2018 / Published: 15 August 2018
Viewed by 746 | PDF Full-text (915 KB) | HTML Full-text | XML Full-text
Abstract
This paper focuses on the new-user cold-start issue in the context of recommender systems. New users who do not receive pertinent recommendations may abandon the system. To cope with this issue, we use active learning techniques. These methods engage new users to interact with the system by presenting them with a questionnaire that aims to elicit their preferences for the related items. In this paper, we propose an active learning technique that exploits past users’ interests and past users’ predictions in order to identify the best questions to ask. Our technique achieves better performance in terms of prediction error (RMSE), allowing the system to learn the users’ preferences with fewer questions. Experiments were carried out on a small public dataset to demonstrate the applicability of the approach to cold-start issues.
(This article belongs to the Special Issue Advances in Recommender Systems)
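One simple way to combine past users' interests with past users' predictions when building such a questionnaire is to favor items that many past users rated (interest) and on which their ratings disagree (informative for prediction). The scoring rule and names below are a hypothetical sketch, not the technique proposed in the paper.

```python
import numpy as np

def select_questions(ratings, n_questions):
    """Choose items to ask a new user about.

    `ratings` is a users x items matrix with np.nan for missing entries.
    Each item is scored by popularity (how many past users rated it)
    times rating variance (how much past users disagree); the
    top-scoring items form the questionnaire.
    """
    popularity = (~np.isnan(ratings)).sum(axis=0)
    variance = np.nanvar(ratings, axis=0)
    score = popularity * variance
    return np.argsort(score)[::-1][:n_questions]

# Three past users, three items; item 1 is both widely rated and contested,
# so it is the most informative first question.
past = np.array([[5.0, 3.0, np.nan],
                 [5.0, 1.0, 4.0],
                 [5.0, 2.0, 4.0]])
questions = select_questions(past, n_questions=1)
```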

Open Access Article Temporal and Atemporal Provider Network Analysis in a Breast Cancer Cohort from an Academic Medical Center (USA)
Informatics 2018, 5(3), 34; https://doi.org/10.3390/informatics5030034
Received: 30 May 2018 / Revised: 30 July 2018 / Accepted: 31 July 2018 / Published: 6 August 2018
Viewed by 835 | PDF Full-text (1095 KB) | HTML Full-text | XML Full-text
Abstract
Social network analysis (SNA) is a quantitative approach to studying relationships between individuals. Current SNA methods use static models of organizations, which simplify away network dynamics. To better represent the dynamic nature of clinical care, we developed a temporal social network analysis model that captures the temporality of care. We applied our model to appointment data from a single institution for early-stage breast cancer patients. Our cohort of 4082 patients was treated by 2190 providers. Providers had 54,695 unique relationships when calculated using our temporal method, compared to 249,075 when calculated using the atemporal method. We found that traditional atemporal approaches to network modeling overestimate the number of provider-provider relationships and underestimate common network measures such as care density within a network. Social network analysis, when modeled accurately, is a powerful tool for organizational research within the healthcare domain.
(This article belongs to the Special Issue Data-Driven Healthcare Research)
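The gap between the temporal and atemporal counts can be reproduced on toy data: an atemporal count links any two providers who ever share a patient, while a temporal variant links them only when their visits are close in time. The time-window rule and all names below are illustrative assumptions, since the abstract does not specify the paper's exact temporal model.

```python
from itertools import combinations

def provider_pairs(appointments, window=None):
    """Count unique provider-provider relationships from appointments.

    `appointments` maps a patient ID to a list of (provider, day) visits.
    With window=None (atemporal), any two providers who ever saw the same
    patient are linked; with a window, they are linked only if their
    visits to that patient fall within `window` days of each other.
    """
    pairs = set()
    for visits in appointments.values():
        for (p1, d1), (p2, d2) in combinations(visits, 2):
            if p1 != p2 and (window is None or abs(d1 - d2) <= window):
                pairs.add(frozenset((p1, p2)))
    return len(pairs)

# Toy data: provider C sees pt1 long after A and B do, so the temporal
# count links C to A only through pt2, and never to B.
appointments = {
    "pt1": [("A", 0), ("B", 1), ("C", 100)],
    "pt2": [("A", 0), ("C", 2)],
}
```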

Open Access Article Modeling Analytical Streams for Social Business Intelligence
Informatics 2018, 5(3), 33; https://doi.org/10.3390/informatics5030033
Received: 25 June 2018 / Revised: 23 July 2018 / Accepted: 23 July 2018 / Published: 1 August 2018
Viewed by 831 | PDF Full-text (4137 KB) | HTML Full-text | XML Full-text
Abstract
Social Business Intelligence (SBI) enables companies to capture strategic information from public social networks. Contrary to traditional Business Intelligence (BI), SBI has to face the high dynamicity of both the social network’s contents and the company’s analytical requests, as well as enormous amounts of noisy data. Effective exploitation of these continuous sources of data requires efficient processing of the streamed data to be semantically shaped into insightful facts. In this paper, we propose a multidimensional formalism to represent and evaluate social indicators directly from fact streams derived in turn from social network data. This formalism relies on two main aspects: the semantic representation of facts via Linked Open Data and the support of OLAP-like multidimensional analysis models. Contrary to traditional BI formalisms, we start the process by modeling the required social indicators according to the strategic goals of the company. From these specifications, all the required fact streams are modeled and deployed to trace the indicators. The main advantages of this approach are the easy definition of on-demand social indicators and the treatment of changing dimensions and metrics through streamed facts. We demonstrate its usefulness by introducing a real use case in the automotive sector.
(This article belongs to the Special Issue Data Modeling for Big Data Analytics)

Open Access Article Mobile Phones Help Develop Listening Skills
Informatics 2018, 5(3), 32; https://doi.org/10.3390/informatics5030032
Received: 29 April 2018 / Revised: 28 June 2018 / Accepted: 25 July 2018 / Published: 27 July 2018
Cited by 1 | Viewed by 953 | PDF Full-text (208 KB) | HTML Full-text | XML Full-text
Abstract
Listening is one of the most difficult of the four communication competences; however, it has received much less time in English learning than the other three (reading, writing, and speaking). Listening is also often claimed to be a passive skill in the classroom, as learners seem to sit quietly and listen to dialogues. As language teachers, we are constantly striving to create the conditions under which our students can learn and succeed. At the same time, we meet challenges that may be detrimental to the learning process. This certainly applies to our students’ use of mobile phones. Practically every student has at least one mobile device, as it has become a very convenient tool for getting information. Unfortunately, students still prefer to use smart devices for entertainment, whether to listen to music, watch films, or play computer games; it seems they really do not know how to use them in the process of education. This paper presents a review of how to overcome difficulties in listening and develop listening skills with the help of mobile phones outside the classroom. We have found that studying English using mobile phones can consolidate our students’ understanding of what is being presented, or further contextualize the language to improve their ability to use it in communicative practice. Studying English should not be confined to the classroom under the guidance of the teacher, so studying with the help of mobile technologies and handheld devices is a good opportunity to improve the quality and effectiveness of English learning.
Open Access Feature Paper Article A Review and Characterization of Progressive Visual Analytics
Informatics 2018, 5(3), 31; https://doi.org/10.3390/informatics5030031
Received: 16 May 2018 / Revised: 27 June 2018 / Accepted: 30 June 2018 / Published: 3 July 2018
Viewed by 1173 | PDF Full-text (1540 KB) | HTML Full-text | XML Full-text
Abstract
Progressive Visual Analytics (PVA) has gained increasing attention over the past years. It brings the user into the loop during otherwise long-running and non-transparent computations by producing intermediate partial results. These partial results can be shown to the user for early and continuous interaction with the emerging end result even while it is still being computed. Yet as clear-cut as this fundamental idea seems, the existing body of literature puts forth various interpretations and instantiations that have created a research domain of competing terms, various definitions, as well as long lists of practical requirements and design guidelines spread across different scientific communities. This makes it more and more difficult to get a succinct understanding of PVA’s principal concepts, let alone an overview of this increasingly diverging field. The review and discussion of PVA presented in this paper address these issues and provide (1) a literature collection on this topic, (2) a conceptual characterization of PVA, as well as (3) a consolidated set of practical recommendations for implementing and using PVA-based visual analytics solutions.

Open Access Article Designing the Learning Experiences in Serious Games: The Overt and the Subtle—The Virtual Clinic Learning Environment
Informatics 2018, 5(3), 30; https://doi.org/10.3390/informatics5030030
Received: 24 May 2018 / Revised: 20 June 2018 / Accepted: 26 June 2018 / Published: 29 June 2018
Viewed by 1098 | PDF Full-text (2418 KB) | HTML Full-text | XML Full-text
Abstract
Serious games are becoming more common in educational settings and must pass muster with both students and instructors for their learning experience and knowledge building. The Virtual Clinic Learning Environment was recently developed and implemented at East Carolina University using a design framework based on Bloom’s variables, and the process of refining those design questions identifies how serious games provide both an overt and a subtle learning experience. The overt learning experience is grounded in the design questions defined, while the subtle experience is derived by examining the idea of sense of place as it relates to the virtual environment. By considering these two streams of learning, designers can avoid pitfalls and build on these design elements of a virtual learning environment.
(This article belongs to the Special Issue Virtual and Augmented Reality for Edutainment)
