Table of Contents

Information, Volume 10, Issue 7 (July 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official version of record; papers are published in both HTML and PDF form. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Open Access Article
When Relational-Based Applications Go to NoSQL Databases: A Survey
Information 2019, 10(7), 241; https://doi.org/10.3390/info10070241
Received: 22 May 2019 / Revised: 1 July 2019 / Accepted: 12 July 2019 / Published: 16 July 2019
PDF Full-text (2502 KB) | HTML Full-text | XML Full-text
Abstract
Several data-centric applications today produce and manipulate a large volume of data, the so-called Big Data. Traditional databases, in particular, relational databases, are not suitable for Big Data management. As a consequence, some approaches that allow the definition and manipulation of large relational data sets stored in NoSQL databases through an SQL interface have been proposed, focusing on scalability and availability. This paper presents a comparative analysis of these approaches based on an architectural classification that organizes them according to their system architectures. Our motivation is that wrapping is a relevant strategy for relational-based applications that intend to move relational data to NoSQL databases (usually maintained in the cloud). We also claim that this research area has some open issues, given that most approaches deal with only a subset of SQL operations or give support to specific target NoSQL databases. Our intention with this survey is, therefore, to contribute to the state-of-art in this research area and also provide a basis for choosing or even designing a relational-to-NoSQL data wrapping solution. Full article
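The wrapping strategy the survey discusses keeps an SQL-facing interface while storing relational data in a NoSQL backend. A minimal illustrative sketch of the underlying data mapping (table, column names, and key scheme are invented, not taken from any surveyed system): each relational row becomes a document keyed by table name and primary key, as a key-value or document store might hold it.

```python
# Illustrative sketch: how a relational-to-NoSQL wrapper might map rows
# of a relational table into documents in a key-value store.
# Table name, columns, and data are hypothetical.

def rows_to_documents(table, columns, rows, pk):
    """Map each relational row to a document keyed by '<table>:<pk value>'."""
    docs = {}
    pk_idx = columns.index(pk)
    for row in rows:
        key = f"{table}:{row[pk_idx]}"
        docs[key] = dict(zip(columns, row))  # column -> value document
    return docs

columns = ["id", "name", "city"]
rows = [(1, "Ada", "London"), (2, "Alan", "Wilmslow")]
docs = rows_to_documents("users", columns, rows, pk="id")
print(docs["users:1"])  # {'id': 1, 'name': 'Ada', 'city': 'London'}
```

A real wrapper must additionally translate SQL operations (joins, aggregates) against such documents, which is exactly where the surveyed approaches differ in SQL coverage.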
Open Access Editorial
Special Issue “MoDAT: Designing the Market of Data”
Information 2019, 10(7), 240; https://doi.org/10.3390/info10070240
Received: 10 July 2019 / Accepted: 10 July 2019 / Published: 13 July 2019
Viewed by 184 | PDF Full-text (136 KB) | HTML Full-text | XML Full-text
Abstract
The fifth International Workshop on the Market of Data (MoDAT2017) was held on November 18th, 2017 in New Orleans, USA in conjunction with IEEE ICDM 2017 [...] Full article
(This article belongs to the Special Issue MoDAT: Designing the Market of Data)
Open Access Article
Multi-Modal Emotion Aware System Based on Fusion of Speech and Brain Information
Information 2019, 10(7), 239; https://doi.org/10.3390/info10070239
Received: 28 May 2019 / Revised: 26 June 2019 / Accepted: 27 June 2019 / Published: 11 July 2019
Viewed by 194 | PDF Full-text (942 KB) | Supplementary Files
Abstract
In multi-modal emotion aware frameworks, it is essential to estimate the emotional features and then fuse them to different degrees, following either a feature-level or a decision-level strategy. While features from several modalities may enhance classification performance, they may also exhibit high dimensionality and make the learning process complex for the most commonly used machine learning algorithms. To overcome these issues of feature extraction and multi-modal fusion, hybrid fuzzy-evolutionary computation methodologies are employed, demonstrating a strong capability for feature learning and dimensionality reduction. This paper proposes a novel multi-modal emotion aware system that fuses speech with EEG modalities. First, a mixed feature set of speaker-dependent and speaker-independent characteristics is estimated from the speech signal. Then, EEG is utilized as an inner channel complementing speech for more authoritative recognition, by extracting multiple features in the time, frequency, and time–frequency domains. For classifying unimodal data of either speech or EEG, a hybrid fuzzy c-means–genetic algorithm–neural network model is proposed, whose fitness function finds the optimal number of fuzzy clusters that minimizes the classification error. To fuse speech with EEG information, a separate classifier is used for each modality, and the output is computed by integrating their posterior probabilities. Results show the superiority of the proposed model, with average accuracy rates of 98.06%, 97.28%, and 98.53% for EEG, speech, and multi-modal recognition, respectively. The proposed model is also applied to two public speech and EEG databases, SAVEE and MAHNOB, achieving accuracies of 98.21% and 98.26%, respectively. Full article
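The decision-level fusion step described above can be sketched in a few lines: each modality's classifier emits posterior probabilities over the emotion classes, and the fused decision integrates them. The class labels, weights, and probability values below are illustrative, not the paper's actual setup (a simple weighted average stands in for whatever integration rule the authors use).

```python
# Minimal sketch of decision-level fusion: each modality's classifier
# outputs per-class posterior probabilities, and the fused score is a
# weighted average of the two. Labels and numbers are invented.

def fuse_posteriors(p_speech, p_eeg, w_speech=0.5, w_eeg=0.5):
    """Combine per-class posteriors from two unimodal classifiers."""
    return {c: w_speech * p_speech[c] + w_eeg * p_eeg[c] for c in p_speech}

p_speech = {"happy": 0.6, "sad": 0.1, "angry": 0.3}
p_eeg    = {"happy": 0.5, "sad": 0.3, "angry": 0.2}

fused = fuse_posteriors(p_speech, p_eeg)
decision = max(fused, key=fused.get)  # pick the class with highest fused score
print(decision)  # happy
```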
Open Access Article
On the Use of Mobile Devices as Controllers for First-Person Navigation in Public Installations
Information 2019, 10(7), 238; https://doi.org/10.3390/info10070238
Received: 10 June 2019 / Revised: 8 July 2019 / Accepted: 8 July 2019 / Published: 11 July 2019
Viewed by 205 | PDF Full-text (3426 KB) | HTML Full-text | XML Full-text
Abstract
User navigation in public installations displaying 3D content is mostly supported by mid-air interaction using motion sensors such as the Microsoft Kinect. On the other hand, smartphones have been used as external controllers for large-screen installations and game environments, and they may also be effective in supporting 3D navigation. This paper examines whether smartphone-based control is a reliable alternative to mid-air interaction for four-degrees-of-freedom (4-DOF) first-person navigation, and aims to discover suitable interaction techniques for a smartphone controller. For this purpose, we set up two studies: a comparative study between smartphone-based and Kinect-based navigation, and a gesture elicitation study to collect user preferences and intentions regarding 3D navigation methods using a smartphone. The results of the first study were encouraging, as users performed at least as well with smartphone input as with the Kinect and most of them preferred it as a means of control, while the second study produced a number of noteworthy results regarding proposed user gestures and users' stance towards using a mobile phone for 3D navigation. Full article
(This article belongs to the Special Issue Wearable Augmented and Mixed Reality Applications)
Open Access Article
Interactions and Sentiment in Personal Finance Forums: An Exploratory Analysis
Information 2019, 10(7), 237; https://doi.org/10.3390/info10070237
Received: 18 June 2019 / Revised: 8 July 2019 / Accepted: 9 July 2019 / Published: 10 July 2019
Viewed by 174 | PDF Full-text (1197 KB) | HTML Full-text | XML Full-text
Abstract
The kinds of interactions taking place in an online personal finance forum and the sentiments expressed in its posts may influence the diffusion and usefulness of those forums. We explore a set of major threads on a personal finance forum to assess the degree of participation of posters and the prevailing sentiments. Participation appears to be dominated by a small number of posters, with the most frequent poster contributing more than a third of all posts. Just a small fraction of all possible direct interactions actually takes place. Dominance is also confirmed by the large presence of self-replies (i.e., a poster submitting several posts in succession) and rejoinders (i.e., a poster counter-replying to another poster). While trust is the prevailing sentiment, anger and fear are present as well, albeit at a lower level, revealing that posts exhibit both aggressive and defensive tones. Full article
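The two interaction patterns named in the abstract are easy to operationalize on a thread's sequence of posters: a self-reply is the same poster appearing twice in a row, and a rejoinder is an A-B-A pattern (A is replied to by B and counter-replies). A hedged sketch with made-up thread data:

```python
# Illustrative sketch: count self-replies (consecutive posts by the same
# poster) and rejoinders (A-B-A counter-reply patterns) in a thread,
# represented as the ordered list of poster names. Data are invented.

def count_patterns(posters):
    self_replies = sum(1 for a, b in zip(posters, posters[1:]) if a == b)
    rejoinders = sum(
        1 for a, b, c in zip(posters, posters[1:], posters[2:])
        if a == c and a != b
    )
    return self_replies, rejoinders

thread = ["alice", "alice", "bob", "alice", "carol", "bob", "carol"]
print(count_patterns(thread))  # (1, 2)
```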
Open Access Article
Author Cooperation Network in Biology and Chemistry Literature during 2014–2018: Construction and Structural Characteristics
Information 2019, 10(7), 236; https://doi.org/10.3390/info10070236
Received: 30 April 2019 / Revised: 7 June 2019 / Accepted: 24 June 2019 / Published: 9 July 2019
Viewed by 250 | PDF Full-text (3678 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
How to explore the interactions between an individual researcher and others, determine the degree of association among researchers, and evaluate each researcher's contribution to the whole according to the mechanisms of interaction is of great significance for grasping the overall trends of a field. Scholars mostly use bibliometrics to address these problems, analyzing citation and cooperation among academic works along the dimension of "quantity". However, there is still no mature method for exploring the evolution of knowledge and the relationships between authors; this paper tries to fill this gap. We narrow the scope of research to the literature in biology and chemistry, collect all papers in the PubMed system (a comprehensive, authoritative database of biomedical papers) published during 2014–2018, and take the year as the unit of analysis so as to improve the accuracy of the analysis. We then construct the author cooperation networks. Finally, through these methods and steps, we identify the core authors of each year, analyze recent cooperative relationships among authors, and predict changes in those relationships based on the networks' analytical data, evaluating the role that authors play in the overall field. We therefore expect that cooperative authorship networks supported by complex network theory can better explain authors' cooperative relationships. Full article
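The construction step the abstract describes is a standard co-authorship graph: authors are nodes, and co-appearing on a paper adds (or strengthens) an edge. A minimal sketch with invented author lists, using only the standard library; "core author" here is simply the node with the most distinct collaborators, one of several centrality choices:

```python
# Sketch of building an author cooperation network from paper author
# lists: nodes are authors, an edge records joint papers. Names invented.
from collections import defaultdict
from itertools import combinations

papers = [
    ["Li", "Wang", "Zhang"],
    ["Li", "Wang"],
    ["Chen", "Zhang"],
    ["Li", "Chen"],
]

edges = defaultdict(int)          # (author, author) -> number of joint papers
for authors in papers:
    for a, b in combinations(sorted(set(authors)), 2):
        edges[(a, b)] += 1

degree = defaultdict(int)         # number of distinct collaborators per author
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

core = max(degree, key=degree.get)  # a simple "core author" proxy
print(core, degree[core])
```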
Open Access Article
Beyond Open Data Hackathons: Exploring Digital Innovation Success
Information 2019, 10(7), 235; https://doi.org/10.3390/info10070235
Received: 1 June 2019 / Revised: 5 July 2019 / Accepted: 6 July 2019 / Published: 9 July 2019
Viewed by 213 | PDF Full-text (214 KB) | HTML Full-text | XML Full-text
Abstract
Previous researchers have examined the motivations of developers to participate in hackathon events and the challenges of open data hackathons, but few studies have focused on the preparation and evaluation of these contests. Thus, the purpose of this paper is to examine the factors that lead to the effective implementation and success of open data hackathons and innovation contests. Six case studies of open data hackathons and innovation contests held between 2014 and 2018 in Thessaloniki were studied in order to identify the factors leading to the success of hackathon contests, using criteria from the existing literature. The results show that the most significant factors were a clear problem definition, mentors' participation in the contest, the level of support mentors give participants to launch their applications on the market, jury members' knowledge and experience, the entry requirements of the competition, and the participation of companies, data providers, and academics. Furthermore, organizers should take team members' competences and skills, as well as post-launch support activities for applications, into consideration. This paper can be of interest to organizers of hackathon events, informing them of the factors they should take into consideration for the successful implementation of such events. Full article
(This article belongs to the Special Issue Linked Open Data)
Open Access Article
A Proximity-Based Semantic Enrichment Approach of Volunteered Geographic Information: A Study Case of Waste of Water
Information 2019, 10(7), 234; https://doi.org/10.3390/info10070234
Received: 22 May 2019 / Revised: 19 June 2019 / Accepted: 26 June 2019 / Published: 8 July 2019
Viewed by 250 | PDF Full-text (1412 KB) | HTML Full-text | XML Full-text
Abstract
Volunteered geographic information (VGI) refers to geospatial data that is collected and/or shared voluntarily over the Internet. Its use, however, faces many limitations, such as variable data quality and difficulty of use and retrieval. One way to improve its usability is semantic enrichment, a process that assigns semantic resources to metadata and data. This study proposes a VGI semantic enrichment method using linked data and a thesaurus. The method has two stages, one automatic and one manual. The automatic stage links VGI contributions to places that are of interest to users. In the manual stage, a thesaurus in the hydric domain was built based on terms found in the VGI. Finally, a process is proposed that returns semantically similar VGI contributions in response to user queries. To verify the viability of the proposed method, contributions from the VGI system Gota D’Água, related to water waste prevention, were used. Full article
(This article belongs to the Special Issue Linked Open Data)
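The automatic stage described above, linking a VGI contribution to a place of interest by proximity, reduces to a nearest-neighbour lookup under great-circle distance. A hedged sketch with invented coordinates and place names (the paper's actual linking criteria may be richer than pure distance):

```python
# Sketch of proximity-based linking: attach a VGI contribution to the
# nearest place of interest by great-circle (haversine) distance.
# Coordinates and place names are invented for illustration.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))

places = {
    "Praca da Se": (-23.5507, -46.6334),
    "Parque Ibirapuera": (-23.5874, -46.6576),
}

contribution = (-23.5520, -46.6350)   # reported water-waste location
nearest = min(places, key=lambda p: haversine_km(*contribution, *places[p]))
print(nearest)  # Praca da Se
```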
Open Access Article
A Novel Approach for Web Service Recommendation Based on Advanced Trust Relationships
Information 2019, 10(7), 233; https://doi.org/10.3390/info10070233
Received: 14 May 2019 / Revised: 18 June 2019 / Accepted: 24 June 2019 / Published: 6 July 2019
Viewed by 279 | PDF Full-text (2100 KB) | HTML Full-text | XML Full-text
Abstract
Service recommendation is an important means of service selection. Traditional trust-based Web service recommendation methods ignore the influence of typical data sources, such as service information and interaction logs, on the similarity calculation of user preferences, and they give insufficient consideration to dynamic trust relationships. To address these problems, a novel approach for Web service recommendation based on advanced trust relationships is presented. Taking the influence of indirect trust paths into account, an improved calculation of the indirect trust degree is proposed. By quantifying the popularity of a service, a method for calculating user preference similarity is investigated. Furthermore, a dynamic trust adjustment mechanism is designed by differentiating the effect of each service recommendation. Integrating these efforts, a service recommendation mechanism is introduced, in which a new service recommendation algorithm is described. Experimental results show that, compared with existing methods, the proposed approach not only has higher recommendation accuracy, but also resists attacks from malicious users more effectively. Full article
(This article belongs to the Special Issue Computational Social Science)
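The notion of indirect trust along a path can be sketched simply: trust attenuates as it propagates, so one common scheme multiplies direct trust values along each connecting path and keeps the strongest path. This is a generic illustration, not the paper's actual formula, and all trust values are invented:

```python
# Illustrative sketch (not the paper's exact formula): indirect trust
# between two users, computed by multiplying direct trust values along
# each connecting path and taking the strongest path. Values invented.

direct_trust = {
    ("u1", "u2"): 0.9, ("u2", "u4"): 0.8,
    ("u1", "u3"): 0.6, ("u3", "u4"): 0.9,
}

def path_trust(path):
    """Attenuate trust along a path by multiplying edge trust values."""
    t = 1.0
    for a, b in zip(path, path[1:]):
        t *= direct_trust[(a, b)]
    return t

paths = [["u1", "u2", "u4"], ["u1", "u3", "u4"]]
indirect = max(path_trust(p) for p in paths)
print(round(indirect, 2))  # 0.72
```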
Open Access Article
Idempotent Factorizations of Square-Free Integers
Information 2019, 10(7), 232; https://doi.org/10.3390/info10070232
Received: 20 June 2019 / Accepted: 3 July 2019 / Published: 6 July 2019
Viewed by 248 | PDF Full-text (311 KB) | HTML Full-text | XML Full-text
Abstract
We explore the class of positive integers n that admit idempotent factorizations n = p̄q̄ such that λ(n) | (p̄ − 1)(q̄ − 1), where λ is the Carmichael lambda function. Idempotent factorizations with p̄ and q̄ prime have received the most attention due to their cryptographic advantages, but there are infinitely many n with idempotent factorizations containing composite p̄ and/or q̄. Idempotent factorizations are exactly those p̄ and q̄ that generate correctly functioning keys in the Rivest–Shamir–Adleman (RSA) 2-prime protocol with n as the modulus. While the resulting p̄ and q̄ have no cryptographic utility and therefore should never be employed in that capacity, idempotent factorizations warrant study in their own right as they live at the intersection of multiple hard problems in computer science and number theory. We present some analytical results here. We also demonstrate the existence of maximally idempotent integers, those n for which all bipartite factorizations are idempotent. We show how to construct them, and present preliminary results on their distribution. Full article
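The defining condition is directly checkable: for square-free n, λ(n) is the lcm of (p − 1) over the primes p dividing n, and a factorization n = p̄q̄ is idempotent iff λ(n) divides (p̄ − 1)(q̄ − 1). A small sketch (trial-division factoring, so only suitable for small n):

```python
# Check the idempotency condition lambda(n) | (p-1)*(q-1) for a
# factorization n = p*q of a square-free n, where lambda is the
# Carmichael function (lcm of prime-1 over the primes dividing n).
from math import gcd
from functools import reduce

def prime_factors(n):
    fs, d = [], 2
    while d * d <= n:
        while n % d == 0:
            fs.append(d)
            n //= d
        d += 1
    if n > 1:
        fs.append(n)
    return fs

def carmichael_squarefree(n):
    # lcm of (p - 1) over the distinct primes p of a square-free n
    return reduce(lambda a, b: a * b // gcd(a, b),
                  (p - 1 for p in prime_factors(n)))

def is_idempotent(n, p, q):
    assert p * q == n
    return (p - 1) * (q - 1) % carmichael_squarefree(n) == 0

# 30 = 2 * 3 * 5 is square-free; lambda(30) = lcm(1, 2, 4) = 4.
print(is_idempotent(30, 6, 5))    # True:  5 * 4 = 20, divisible by 4
print(is_idempotent(30, 10, 3))   # False: 9 * 2 = 18, not divisible by 4
```

Note that both factorizations above contain a composite factor; when p and q are both prime, λ(n) = lcm(p − 1, q − 1) always divides (p − 1)(q − 1), so the prime case is trivially idempotent.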
Open Access Article
Assisting Forensic Identification through Unsupervised Information Extraction of Free Text Autopsy Reports: The Disappearances Cases during the Brazilian Military Dictatorship
Information 2019, 10(7), 231; https://doi.org/10.3390/info10070231
Received: 31 May 2019 / Revised: 27 June 2019 / Accepted: 3 July 2019 / Published: 5 July 2019
Viewed by 301 | PDF Full-text (7236 KB) | HTML Full-text | XML Full-text
Abstract
Anthropological, archaeological, and forensic studies situate enforced disappearance as a strategy associated with the Brazilian military dictatorship (1964–1985), which left hundreds of persons whose identity or cause of death was never established. Their forensic reports are the only existing clue for identifying these people and detecting possible crimes associated with them. The exchange of information among institutions about the identities of disappeared people was not a common practice. Thus, analysis of the reports requires unsupervised techniques, mainly because their contextual annotation is extremely time-consuming, difficult to obtain, and highly dependent on the annotator. The use of these techniques allows researchers to assist identification and analysis in four areas: common causes of death, relevant body locations, personal belongings terminology, and correlations between actors such as doctors and police officers involved in the disappearances. This paper analyzes almost 3000 textual reports of missing persons in São Paulo city during the Brazilian dictatorship through unsupervised information extraction algorithms for Portuguese, identifying named entities and relevant terminology associated with these four criteria. The analysis allowed us to observe terminological patterns relevant for identification (e.g., the presence of rings or similar personal belongings) and to automate the study of correlations between actors. The proposed system acts as a first classificatory and indexing middleware for the reports and represents a feasible system that can assist researchers searching for patterns among autopsy reports. Full article
(This article belongs to the Special Issue Natural Language Processing and Text Mining)
Open Access Editorial
Editorial for the Special Issue on “Modern Recommender Systems: Approaches, Challenges and Applications”
Information 2019, 10(7), 230; https://doi.org/10.3390/info10070230
Received: 2 July 2019 / Accepted: 2 July 2019 / Published: 4 July 2019
Viewed by 280 | PDF Full-text (143 KB) | HTML Full-text | XML Full-text
Abstract
Recommender systems are nowadays an indispensable part of most personalized systems implementing information access and content delivery, supporting a great variety of user activities [...] Full article
(This article belongs to the Special Issue Modern Recommender Systems: Approaches, Challenges and Applications)
Open Access Article
Visualization Method for Arbitrary Cutting of Finite Element Data Based on Radial-Basis Functions
Information 2019, 10(7), 229; https://doi.org/10.3390/info10070229
Received: 11 May 2019 / Revised: 30 June 2019 / Accepted: 1 July 2019 / Published: 3 July 2019
Viewed by 265 | PDF Full-text (2417 KB) | HTML Full-text | XML Full-text
Abstract
Finite element data form an important basis for engineers to undertake analysis and research. In most cases, it is difficult to generate the internal sections of finite element data and professional operations are required. To display the internal data of entities, a method for generating the arbitrary sections of finite element data based on radial basis function (RBF) interpolation is proposed in this paper. The RBF interpolation function is used to realize arbitrary surface cutting of the entity, and the section can be generated by the triangulation of discrete tangent points. Experimental studies have proved that the method is very convenient for allowing users to obtain visualization results for an arbitrary section through simple and intuitive interactions. Full article
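The core of the method above is RBF interpolation: fit weights w so that f(x) = Σᵢ wᵢ·φ(|x − xᵢ|) passes exactly through the sample points, then evaluate f anywhere on the cutting surface. A hedged 1D sketch using a multiquadric basis and a tiny hand-rolled linear solver (the paper works on 3D finite element data; sample values here are invented):

```python
# 1D sketch of RBF interpolation: solve A w = y where A[i][j] =
# phi(|x_i - x_j|), then evaluate f(x) = sum_i w_i * phi(|x - x_i|).
# Multiquadric basis; Gaussian elimination with partial pivoting.
from math import sqrt

def phi(r, c=1.0):
    return sqrt(r * r + c * c)     # multiquadric radial basis function

def solve(A, b):
    """Solve the dense linear system A x = b by Gaussian elimination."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.0, 4.0, 9.0]          # invented samples of y = x^2

A = [[phi(abs(a - b)) for b in xs] for a in xs]
w = solve(A, ys)

def f(x):
    return sum(wi * phi(abs(x - xi)) for wi, xi in zip(w, xs))

print(round(f(1.0), 6))   # reproduces the sample value: 1.0
```

In 3D, the same idea applies with r = ‖x − xᵢ‖, and the fitted surface is then sampled and triangulated to produce the section, as the abstract describes.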
Open Access Article
Multilingual Open Information Extraction: Challenges and Opportunities
Information 2019, 10(7), 228; https://doi.org/10.3390/info10070228
Received: 1 May 2019 / Revised: 17 June 2019 / Accepted: 29 June 2019 / Published: 2 July 2019
Viewed by 326 | PDF Full-text (415 KB) | HTML Full-text | XML Full-text
Abstract
The number of documents published on the Web in languages other than English grows every year. As a consequence, the need to extract useful information from different languages increases, highlighting the importance of research into Open Information Extraction (OIE) techniques. Different OIE methods have dealt with features from a single language; however, few approaches tackle multilingual aspects. In those approaches, multilingualism is restricted to processing text in different languages rather than exploring cross-linguistic resources, which results in low precision due to the use of general rules. Multilingual methods have been applied to numerous problems in Natural Language Processing, achieving satisfactory results and demonstrating that knowledge acquired for one language can be transferred to other languages to improve the quality of the extracted facts. We argue that a multilingual approach can enhance OIE methods, as it is well suited to evaluating and comparing OIE systems and can therefore be applied to the collected facts. In this work, we discuss how transferring knowledge between languages can improve acquisition in multilingual approaches. We provide a roadmap of the Multilingual Open IE area covering state-of-the-art studies. Additionally, we evaluate the transfer of knowledge to improve the quality of the facts extracted in each language. Moreover, we discuss the importance of a parallel corpus for evaluating and comparing multilingual systems. Full article
(This article belongs to the Special Issue Natural Language Processing and Text Mining)
Open Access Article
Visualizing the Knowledge Structure and Research Evolution of Infrared Detection Technology Studies
Information 2019, 10(7), 227; https://doi.org/10.3390/info10070227
Received: 15 May 2019 / Revised: 25 June 2019 / Accepted: 27 June 2019 / Published: 1 July 2019
Viewed by 339 | PDF Full-text (6723 KB) | HTML Full-text | XML Full-text
Abstract
This paper aims to explore the current status, research trends and hotspots of the field of infrared detection technology through bibliometric analysis and visualization techniques, based on Science Citation Index Expanded (SCIE) and Social Sciences Citation Index (SSCI) articles published between 1990 and 2018, using the VOSviewer and CiteSpace software tools. Based on our analysis, we first present the spatiotemporal distribution of the literature related to infrared detection technology, including annual publications, country/region of origin, main research organizations, and source publications. Then, we report the main subject categories involved in infrared detection technology. Furthermore, we adopt literature co-citation, author co-citation, keyword co-occurrence and timeline visualization analyses to visually explore the research fronts and trends, and present the evolution of infrared detection technology research. The results show that China, the USA and Italy are the three most active countries in infrared detection technology research and that the Centre National de la Recherche Scientifique has the largest number of publications among related organizations. The most prominent research hotspots in the past five years are vibration thermal imaging, pulse thermal imaging, photonic crystals, skin temperature, remote sensing technology, and the detection of delamination defects in concrete. Future research on infrared detection technology is trending from qualitative toward quantitative studies, engineering applications, and the combination of infrared detection with other detection techniques. The proposed approach, based on scientific knowledge graph analysis, can be used to establish reference information and a research basis for the application and development of methods in the domain of infrared detection technology studies. Full article
(This article belongs to the Section Information Systems)
Open Access Review
Big Data Analytics and Firm Performance: A Systematic Review
Information 2019, 10(7), 226; https://doi.org/10.3390/info10070226
Received: 25 May 2019 / Revised: 22 June 2019 / Accepted: 27 June 2019 / Published: 1 July 2019
Viewed by 338 | PDF Full-text (3212 KB) | HTML Full-text | XML Full-text
Abstract
The literature on big data analytics and firm performance is still fragmented and lacking in attempts to integrate the current studies’ results. This study aims to provide a systematic review of contributions related to big data analytics and firm performance. The authors assess papers listed in the Web of Science index. This study identifies the factors that may influence the adoption of big data analytics in various parts of an organization and categorizes the diverse types of performance that big data analytics can address. Directions for future research are developed from the results. This systematic review proposes to create avenues for both conceptual and empirical research streams by emphasizing the importance of big data analytics in improving firm performance. In addition, this review offers both scholars and practitioners an increased understanding of the link between big data analytics and firm performance. Full article
(This article belongs to the Section Information Applications)
Open Access Article
Eligibility of BPMN Models for Business Process Redesign
Information 2019, 10(7), 225; https://doi.org/10.3390/info10070225
Received: 28 May 2019 / Revised: 26 June 2019 / Accepted: 27 June 2019 / Published: 1 July 2019
Viewed by 331 | PDF Full-text (737 KB) | HTML Full-text | XML Full-text
Abstract
Business process redesign (BPR) is an organizational initiative for achieving competitive, multi-faceted advantages in business processes in terms of cycle time, quality, cost, customer satisfaction and other critical performance metrics. Although BPR tools and methodologies are increasingly being adopted, process innovation efforts have often proven ineffective in delivering the expected outcome. This paper investigates the eligibility of BPMN process models for the application of redesign methods inspired by the data-flow community. In previous work, the transformation of a business process model to a directed acyclic graph (DAG) yielded notable optimization results for determining the average performance of process executions consisting of ad-hoc processes. Still, that approach encountered drawbacks due to a lack of input specification, complexity assessment and normalization of the BPMN model, and in its application to more generic business process cases. This paper presents an assessment mechanism that measures the eligibility of a BPMN model and its capability to be effectively transformed into a DAG and further subjected to data-centric workflow optimization methods. The proposed mechanism evaluates the model type, complexity metrics, normalization and optimization capability of candidate process models, while allowing users to set their desired complexity thresholds. An indicative example demonstrates the assessment phases and illustrates the usability of the proposed mechanism in advancing and facilitating the optimization phase. Finally, the authors review BPMN models from both an SOA-based business process design (BPD) repository and the relevant literature and assess their eligibility. Full article
(This article belongs to the Section Information Applications)
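The paper's full assessment mechanism is not reproduced in this listing, but its core eligibility criterion — whether a process graph can be mapped onto a DAG at all — amounts to a cycle check. A minimal sketch, assuming the model's control flow is given as a list of edges (the function name and input encoding are illustrative, not the authors' API):

```python
from collections import deque

def is_transformable_to_dag(edges):
    """Check whether a process graph is acyclic, via Kahn's algorithm.

    `edges` is a list of (source, target) activity pairs. A model whose
    control flow contains a cycle cannot be transformed into a DAG,
    which is a basic eligibility criterion for the optimization step.
    """
    nodes = {n for edge in edges for n in edge}
    indegree = {n: 0 for n in nodes}
    successors = {n: [] for n in nodes}
    for src, dst in edges:
        successors[src].append(dst)
        indegree[dst] += 1
    queue = deque(n for n in nodes if indegree[n] == 0)
    visited = 0
    while queue:
        n = queue.popleft()
        visited += 1
        for m in successors[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                queue.append(m)
    # Every node received a topological position -> no cycle.
    return visited == len(nodes)
```

A real eligibility score would combine this with the complexity metrics and normalization checks the abstract mentions; this sketch covers only the structural precondition.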
Open AccessArticle
A Two-Stage Household Electricity Demand Estimation Approach Based on Edge Deep Sparse Coding
Information 2019, 10(7), 224; https://doi.org/10.3390/info10070224
Received: 16 May 2019 / Revised: 10 June 2019 / Accepted: 24 June 2019 / Published: 1 July 2019
Viewed by 278 | PDF Full-text (2980 KB) | HTML Full-text | XML Full-text
Abstract
The widespread popularity of smart meters enables the collection of an immense amount of fine-grained data, thereby realizing a two-way information flow between the grid and the customer, along with personalized interaction services such as precise demand response. These services rely on the accurate estimation of electricity demand, and the key challenges lie in the high volatility and uncertainty of load profiles and the tremendous communication pressure on the data link or computing center. This study proposes a novel two-stage approach for estimating household electricity demand based on edge deep sparse coding. In the first, sparse coding stage, the status of electrical devices is introduced into the deep non-negative k-means-singular value decomposition (K-SVD) sparse algorithm to estimate the behavior of customers. The patterns extracted in the first stage are then used to train a long short-term memory (LSTM) network and forecast household electricity demand over the subsequent 30 min. The method was implemented in Python and tested on the AMPds dataset. In terms of mean absolute percentage error (MAPE), the proposed method outperformed the multi-layer perceptron (MLP) by 51.26%, the autoregressive integrated moving average (ARIMA) model by 36.62%, and LSTM with shallow K-SVD by 16.4%. In terms of mean absolute error and root mean squared error, the improvements were 53.95% and 36.73% over MLP, 28.47% and 23.36% over ARIMA, and 11.38% and 18.16% over LSTM with shallow K-SVD. The experiments demonstrate that the proposed method provides a considerable and stable improvement in household electricity demand estimation.  Full article
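The comparison above rests on three standard error metrics. A minimal sketch of how they are computed (the toy load values are illustrative, not taken from the AMPds dataset):

```python
import math

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

def mae(actual, forecast):
    """Mean absolute error, in the units of the series."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    """Root mean squared error; penalizes large misses more than MAE."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

# Toy half-hourly household loads (kW) and a forecast -- illustrative only.
actual = [1.2, 0.8, 1.5, 2.0]
forecast = [1.0, 0.9, 1.5, 1.8]
```

Note that MAPE is undefined when an actual value is zero, which is why load-forecasting studies often report MAE and RMSE alongside it.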
Open AccessArticle
Multiple Goal Linear Programming-Based Decision Preference Inconsistency Recognition and Adjustment Strategies
Information 2019, 10(7), 223; https://doi.org/10.3390/info10070223
Received: 25 April 2019 / Revised: 27 May 2019 / Accepted: 27 June 2019 / Published: 1 July 2019
Viewed by 277 | PDF Full-text (271 KB) | HTML Full-text | XML Full-text
Abstract
The purpose of this paper is to enrich the inconsistency check and adjustment methods for decision preference information in the context of capacity-based multiple criteria decision making. We first show that almost all the preference information of a decision maker can be represented as a collection of linear constraints. By introducing positive and negative deviations, we construct a multiple goal linear programming (MGLP)-based inconsistency recognition model to find the redundant and contradicting constraints. Then, based on the redundancy and contradiction degrees, we propose three types of adjustment strategies and accordingly adopt some explicit and implicit indices with respect to the capacity to test the effect of each adjustment strategy. The empirical analyses verify that all the strategies are competent for the adjustment task, with the second strategy usually costing relatively less effort. It is shown that the MGLP-based inconsistency recognition and adjustment method requires less background knowledge and is applicable to some complicated kinds of decision preference information. Full article
(This article belongs to the Section Information Theory and Methodology)
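The deviation-variable construction behind goal programming can be sketched on toy numeric goals (the paper's actual model operates on capacity constraints, which are not reproduced here; SciPy's `linprog` is an assumption of this sketch, not the authors' tooling):

```python
from scipy.optimize import linprog  # assumption: SciPy is available

# Three numeric goals over x1, x2; the third contradicts the first:
#   g1: x1 + x2 = 10,   g2: x1 - x2 = 2,   g3: x1 + x2 = 12
# Each goal i gets a positive (dp_i) and negative (dn_i) deviation:
#   a_i . x + dn_i - dp_i = b_i
# Variable order: [x1, x2, dp1, dn1, dp2, dn2, dp3, dn3]
c = [0, 0, 1, 1, 1, 1, 1, 1]          # minimize the total deviation
A_eq = [
    [1,  1, -1, 1,  0, 0,  0, 0],     # goal 1
    [1, -1,  0, 0, -1, 1,  0, 0],     # goal 2
    [1,  1,  0, 0,  0, 0, -1, 1],     # goal 3
]
b_eq = [10, 2, 12]
res = linprog(c, A_eq=A_eq, b_eq=b_eq)  # variables default to >= 0

# A strictly positive optimal objective flags an inconsistency, and the
# nonzero deviation variables point at the contradicting goals (g1 vs. g3:
# no x can satisfy both, so at least 2 units of deviation remain).
```

Here the optimum is 2.0 rather than 0, which is exactly the kind of signal the recognition model uses to locate constraints that need adjustment.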
Open AccessArticle
Hadoop Performance Analysis Model with Deep Data Locality
Information 2019, 10(7), 222; https://doi.org/10.3390/info10070222
Received: 11 June 2019 / Revised: 23 June 2019 / Accepted: 26 June 2019 / Published: 27 June 2019
Viewed by 364 | PDF Full-text (5061 KB) | HTML Full-text | XML Full-text
Abstract
Background: Hadoop has become the base framework for big data systems via the simple concept that moving computation is cheaper than moving data. Hadoop increases data locality in the Hadoop Distributed File System (HDFS) to improve system performance, since network traffic among the nodes of a big data system is reduced when more tasks run on the machines that store their data. Previous research increased data locality in one of the MapReduce stages to improve Hadoop performance. However, there is currently no mathematical performance model for data locality in Hadoop. Methods: This study built a Hadoop performance analysis model with data locality that covers the entire MapReduce process. The paper explains the data locality concept in the map and shuffle stages and shows how to apply the performance analysis model to improve the Hadoop system by establishing deep data locality. Results: The study validated deep data locality as a means of increasing Hadoop performance via three tests: a simulation-based test, a cloud test and a physical test. According to these tests, the authors improved the Hadoop system by over 34% using deep data locality. Conclusions: Deep data locality improved Hadoop performance by reducing data movement in HDFS. Full article
(This article belongs to the Special Issue Big Data Research, Development, and Applications––Big Data 2018)
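The paper's performance model itself is not reproduced in this listing, but the quantity it builds on — how many map tasks read their input from a local replica — can be sketched as a simple ratio (the data structures and names here are illustrative, not Hadoop APIs):

```python
def locality_ratio(block_locations, task_assignments):
    """Fraction of map tasks whose input block has a replica on the
    node that runs the task (node-level data locality in HDFS).

    block_locations:  block id -> set of nodes holding a replica
    task_assignments: list of (block id, node running the map task)
    """
    local = sum(1 for block, node in task_assignments
                if node in block_locations[block])
    return local / len(task_assignments)

# Toy cluster: 4 blocks replicated across 3 nodes, 4 scheduled map tasks.
blocks = {"b1": {"n1", "n2"}, "b2": {"n2", "n3"},
          "b3": {"n1", "n3"}, "b4": {"n2"}}
tasks = [("b1", "n1"), ("b2", "n2"), ("b3", "n2"), ("b4", "n3")]
```

In this toy schedule only the tasks for b1 and b2 are data-local, so the ratio is 0.5; raising it is what reduces the cross-node traffic the abstract describes.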
Open AccessArticle
Analysis of Usability for the Dice CAPTCHA
Information 2019, 10(7), 221; https://doi.org/10.3390/info10070221
Received: 15 June 2019 / Revised: 23 June 2019 / Accepted: 24 June 2019 / Published: 26 June 2019
Viewed by 334 | PDF Full-text (874 KB) | HTML Full-text | XML Full-text
Abstract
This paper explores the usability of the Dice CAPTCHA via analysis of the time spent to solve the CAPTCHA and the number of tries needed to solve it. The experiment was conducted on a set of 197 Internet users, differentiated by age, daily Internet usage in hours, Internet experience in years, and the type of device on which the CAPTCHA was solved. Each user was asked to solve the Dice CAPTCHA on a tablet or laptop, and the time to successfully solve it in a given number of attempts was recorded. The collected data were analyzed via association rule mining and an artificial neural network. The analysis revealed that the time to solve the CAPTCHA in a given number of attempts depended on different combinations of values of the users’ features, and identified the most meaningful features influencing the solution time. In addition, this dependence was explored by predicting the CAPTCHA solution time from the users’ features via an artificial neural network. The obtained results are very helpful for analyzing the combinations of features that influence the CAPTCHA solution and, consequently, for finding the CAPTCHA that best complies with the postulate of an “ideal” test. Full article
(This article belongs to the Special Issue Artificial Intelligence—Methodology, Systems, and Applications)
Information EISSN 2078-2489 Published by MDPI AG, Basel, Switzerland