Table of Contents

Information, Volume 9, Issue 11 (November 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Open Access Article: A Hybrid Swarm Intelligent Neural Network Model for Customer Churn Prediction and Identifying the Influencing Factors
Information 2018, 9(11), 288; https://doi.org/10.3390/info9110288
Received: 23 September 2018 / Revised: 3 November 2018 / Accepted: 5 November 2018 / Published: 17 November 2018
Viewed by 95 | PDF Full-text (848 KB)
Abstract
Customer churn is one of the most challenging problems for telecommunication companies, because customers are the real asset of these companies. Companies are therefore increasing their investment in practical solutions that aim to predict customer churn before it happens. Identifying which customers are about to churn significantly helps companies keep their customers and optimize their marketing campaigns. In this work, an intelligent hybrid model based on Particle Swarm Optimization (PSO) and a feedforward neural network is proposed for churn prediction. PSO is used to tune the weights of the input features and to optimize the structure of the neural network simultaneously, increasing the predictive power. In addition, the proposed model handles the imbalanced class distribution of the data using an advanced oversampling technique. Evaluation results show that the proposed model significantly improves the coverage rate of churn customers in comparison with other state-of-the-art classifiers. Moreover, the model has high interpretability: the assigned feature weights indicate the importance of the corresponding features in the classification process.
(This article belongs to the Section Artificial Intelligence)
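As a rough illustration of the approach the abstract describes, the sketch below encodes each PSO particle as a flat vector holding both per-feature weights and the weights of a small feedforward network, and scores particles by classification accuracy. It is a minimal sketch under assumed details (network size, PSO constants, the fitness function); none of the names come from the paper.

```python
import numpy as np

def forward(x, feat_w, W1, b1, W2, b2):
    # Feature weighting is applied before the feedforward pass,
    # so PSO can suppress uninformative inputs.
    h = np.tanh((x * feat_w) @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2)))  # churn probability

def unpack(p, n_feat, n_hid):
    i = 0
    feat_w = p[i:i + n_feat]; i += n_feat
    W1 = p[i:i + n_feat * n_hid].reshape(n_feat, n_hid); i += n_feat * n_hid
    b1 = p[i:i + n_hid]; i += n_hid
    W2 = p[i:i + n_hid]; i += n_hid
    return feat_w, W1, b1, W2, p[i]

def pso_train(X, y, n_hid=8, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    dim = X.shape[1] * (1 + n_hid) + 2 * n_hid + 1
    pos = np.random.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros_like(pos)
    def fitness(p):
        pred = forward(X, *unpack(p, X.shape[1], n_hid)) > 0.5
        return (pred == y).mean()
    pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmax()]
    for _ in range(iters):
        r1, r2 = np.random.rand(2)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        fit = np.array([fitness(p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()]
    return gbest  # encodes both the feature weights and the network weights
```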
Open Access Article: Quantifying Bicycle Network Connectivity in Lisbon Using Open Data
Information 2018, 9(11), 287; https://doi.org/10.3390/info9110287
Received: 31 October 2018 / Revised: 14 November 2018 / Accepted: 15 November 2018 / Published: 17 November 2018
Viewed by 172 | PDF Full-text (10360 KB) | HTML Full-text | XML Full-text
Abstract
Stimulating non-motorized transport has been a key point on sustainable mobility agendas for cities around the world. Lisbon is no exception, as it invests in the implementation of new bike infrastructure. Quantifying the connectivity of such a bicycle network can help evaluate its current state and highlight specific challenges that should be addressed. The aim of this study is therefore to develop an exploratory score that quantifies the bicycle network connectivity in Lisbon based on open data. For each part of the city, a score was computed based on how many common destinations (e.g., schools, universities, supermarkets, hospitals) were located within an acceptable biking distance when using only bicycle lanes and roads with low traffic stress for cyclists. Taking a weighted average of these scores resulted in an overall score for the city of Lisbon of only 8.6 out of 100 points. This shows, at a glance, that the city still has a long way to go before achieving its objectives regarding bicycle use.
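One way to picture the scoring procedure is the following sketch: for each zone, count the destination categories reachable within an acceptable biking distance over the low-stress network only, then aggregate with a weighted average. The distance threshold, graph layout and weighting are illustrative assumptions, not the paper's calibration.

```python
import networkx as nx

MAX_BIKE_DIST = 2500  # metres; an assumed "acceptable" biking distance

def zone_score(low_stress_net, origin, destinations):
    """Score a zone by the share of destination categories (schools,
    supermarkets, hospitals, ...) reachable over low-stress links only."""
    lengths = nx.single_source_dijkstra_path_length(
        low_stress_net, origin, cutoff=MAX_BIKE_DIST, weight="length")
    reachable = [cat for cat, nodes in destinations.items()
                 if any(n in lengths for n in nodes)]
    return 100 * len(reachable) / len(destinations)

def city_score(zone_scores, zone_weights):
    # Weighted average over all zones, e.g., weighted by population.
    total = sum(zone_weights.values())
    return sum(zone_scores[z] * w for z, w in zone_weights.items()) / total
```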
Open Access Article: Decentralized Transaction Mechanism Based on Smart Contract in Distributed Data Storage
Information 2018, 9(11), 286; https://doi.org/10.3390/info9110286
Received: 1 October 2018 / Revised: 1 November 2018 / Accepted: 14 November 2018 / Published: 17 November 2018
Viewed by 103 | PDF Full-text (988 KB)
Abstract
Distributed data storage has received increasing attention due to its advantages in reliability, availability and scalability, and it brings both opportunities and challenges for storage resource transactions. Traditional transaction systems for storage resources generally run in a centralized mode, which results in high cost, vendor lock-in and a single point of failure. To overcome these shortcomings, and considering a storage policy with erasure coding, in this paper we propose a decentralized transaction method for cloud storage based on a smart contract, which takes into account the resource cost of distributed data storage. First, to guarantee availability and decrease storage cost, a reverse Vickrey-Clarke-Groves (VCG)-based auction mechanism is proposed for storage resource selection and transaction. We then deploy and implement the proposed mechanism by designing a corresponding smart contract; in particular, we address the problem of how to implement a VCG-like mechanism in a blockchain environment. We simulate the proposed storage transaction method on a private Ethereum chain. The simulation results show that the proposed transaction model realizes competitive trading of storage resources and ensures the safe and economic operation of resource trading.
(This article belongs to the Special Issue BlockChain and Smart Contracts)
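A minimal sketch of a reverse-VCG selection for homogeneous storage shards: if the buyer needs k providers and each provider bids one price, the VCG payment to every winner collapses to the (k+1)-th lowest bid (the externality a winner imposes). The paper's smart-contract logic and erasure-coding constraints are richer than this toy.

```python
def reverse_vcg(bids, k):
    """Select the k cheapest storage providers and compute VCG payments.

    bids: dict provider -> asking price for storing one shard.
    Each winner is paid the (k+1)-th lowest bid, i.e., the first losing bid.
    """
    ranked = sorted(bids, key=bids.get)
    if len(ranked) <= k:
        raise ValueError("need more than k bidders for a VCG payment")
    winners = ranked[:k]
    clearing = bids[ranked[k]]  # first losing bid
    return winners, {w: clearing for w in winners}

# Example: a 3-of-5 erasure-coded object needs 5 shard hosts.
winners, payments = reverse_vcg(
    {"A": 4, "B": 7, "C": 5, "D": 9, "E": 6, "F": 8}, k=5)
print(winners, payments)  # A, C, E, B, F each paid 9
```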
Open Access Article: Visualising Business Data: A Survey
Information 2018, 9(11), 285; https://doi.org/10.3390/info9110285
Received: 25 September 2018 / Revised: 30 October 2018 / Accepted: 9 November 2018 / Published: 17 November 2018
Viewed by 105 | PDF Full-text (11092 KB)
Abstract
A rapidly increasing number of businesses rely on visualisation solutions for their data management challenges. This demand stems from an industry-wide shift towards data-driven approaches to decision making and problem-solving, which has produced an overwhelming mass of heterogeneous data. The analysis of these data becomes a critical and challenging part of the business process. Employing visual analysis increases data comprehension, enabling a wider range of users to interpret the underlying behaviour, as opposed to relying on skilled but expensive data analysts. Widening the reach to an audience with a broader range of backgrounds creates new opportunities for decision making, problem-solving, trend identification, and creative thinking. In this survey, we identify trends in the business visualisation and visual analytics literature where visualisation is used to address data challenges, and we identify areas in which industries use visual design to develop their understanding of the business environment. Our novel classification of the literature covers the topics of business intelligence, business ecosystems, and customer-centric approaches. This survey provides a valuable overview of, and insight into, the business visualisation literature, with a novel classification that highlights both mature and less developed research directions.
(This article belongs to the Section Information Theory and Methodology)
Open Access Article: Furthest-Pair-Based Decision Trees: Experimental Results on Big Data Classification
Information 2018, 9(11), 284; https://doi.org/10.3390/info9110284
Received: 25 October 2018 / Revised: 10 November 2018 / Accepted: 13 November 2018 / Published: 17 November 2018
Viewed by 130 | PDF Full-text (3444 KB) | HTML Full-text | XML Full-text
Abstract
Big Data classification has recently received a great deal of attention due to the main properties of Big Data: volume, variety, and velocity. The furthest-pair-based binary search tree (FPBST) shows great potential for Big Data classification. This work attempts to improve the performance of the FPBST in terms of computation time, space consumed and accuracy. The major enhancement converts the resultant BST into a decision tree, removing the need for the slow K-nearest neighbors (KNN) search and yielding a smaller tree, which reduces memory usage, speeds up both the training and testing phases, and increases the classification accuracy. The proposed decision trees are based on calculating the probabilities of each class at each node using various methods; these probabilities are then used in the testing phase to classify an unseen example. The experimental results on several (small, intermediate and big) machine learning datasets show the efficiency of the proposed methods in terms of space, speed and accuracy compared to the FPBST, and suggest great potential for further enhancements of the proposed methods for use in practice.
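A hedged sketch of the testing phase as described: an example descends the tree by its distance to the two furthest-pair pivots stored at each node and is classified by the class probabilities cached where it stops, with no KNN search. Node layout and names are illustrative.

```python
import numpy as np

class Node:
    def __init__(self, pivot_left, pivot_right, class_probs,
                 left=None, right=None):
        self.pivot_left = pivot_left    # one end of the furthest pair
        self.pivot_right = pivot_right  # the other end
        self.class_probs = class_probs  # P(class) among training examples
                                        # that reached this node
        self.left, self.right = left, right

def classify(node, x):
    """Descend by proximity to the furthest-pair pivots; no KNN needed."""
    while node.left is not None and node.right is not None:
        d_left = np.linalg.norm(x - node.pivot_left)
        d_right = np.linalg.norm(x - node.pivot_right)
        node = node.left if d_left <= d_right else node.right
    return max(node.class_probs, key=node.class_probs.get)
```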
Open Access Article: Triadic Structures in Interpersonal Communication
Information 2018, 9(11), 283; https://doi.org/10.3390/info9110283
Received: 25 September 2018 / Revised: 2 November 2018 / Accepted: 6 November 2018 / Published: 16 November 2018
Viewed by 113 | PDF Full-text (6084 KB) | HTML Full-text | XML Full-text
Abstract
Communication, which is information exchange between systems, is one of the basic information processes. To better understand communication and develop more efficient communication tools, it is important to have adequate and concise, static and dynamic, structured models of communication. The principal goal of this paper is the explication of communication structures, the formation of adequate mathematical models of them, and the description of their dynamic interaction. Exploring communication in the context of structures and structural dynamics, we utilize the most fundamental structure in mathematics, nature and cognition, called a named set or a fundamental triad. This structure has been useful in a variety of areas, including networks and networking, physics, information theory, mathematics, logic, database theory and practice, artificial intelligence, mathematical linguistics, epistemology and methodology of science, to mention but a few. In this paper, we apply the theory of named sets (fundamental triads) to the description and analysis of interpersonal communication. As a result, we explicate and describe various structural regularities of communication, many of which are triadic in nature, allowing more advanced and efficient organization of interpersonal communication.
Open Access Article: Neighborhood Attribute Reduction: A Multicriterion Strategy Based on Sample Selection
Information 2018, 9(11), 282; https://doi.org/10.3390/info9110282
Received: 16 September 2018 / Revised: 2 November 2018 / Accepted: 10 November 2018 / Published: 16 November 2018
Viewed by 99 | PDF Full-text (646 KB) | HTML Full-text | XML Full-text
Abstract
In the rough-set field, the objective of attribute reduction is to regulate the variations of measures by reducing redundant data attributes. However, most previous concepts of attribute reduction were designed around one and only one measure, so the obtained reduct may fail to meet the constraints given by other measures. In addition, the widely used heuristic algorithm for computing a reduct requires scanning all samples in the data, so the time consumption may be unacceptably high for large data. To alleviate these problems, a framework of attribute reduction based on multiple criteria with sample selection is proposed in this paper. Firstly, cluster centroids are derived from the data, and samples that are far away from the cluster centroids are selected; this step completes the process of sample selection for reducing data size. Secondly, a multiple-criteria-based attribute reduction is designed, and the heuristic algorithm is run over the selected samples to compute the reduct in terms of multiple criteria. Finally, experimental results over 12 UCI datasets show that the reducts obtained by our framework not only satisfy the constraints given by multiple criteria, but also provide better classification performance and lower time consumption.
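A minimal sketch of the sample-selection step under the stated idea, using k-means centroids and keeping, per cluster, the fraction of samples farthest from their centroid. The clustering method and keep ratio are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_samples(X, n_clusters=5, keep_ratio=0.3):
    """Keep the samples farthest from their cluster centroid, shrinking
    the data that the heuristic reduct algorithm must scan."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
    dists = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
    selected = []
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        n_keep = max(1, int(keep_ratio * len(idx)))
        selected.extend(idx[np.argsort(dists[idx])[-n_keep:]])
    return np.sort(np.array(selected))  # row indices of the reduced sample
```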
Open Access Article: Alignment: A Hybrid, Interactive and Collaborative Ontology and Entity Matching Service
Information 2018, 9(11), 281; https://doi.org/10.3390/info9110281
Received: 9 October 2018 / Revised: 3 November 2018 / Accepted: 12 November 2018 / Published: 15 November 2018
Viewed by 131 | PDF Full-text (1022 KB) | HTML Full-text | XML Full-text
Abstract
Ontology matching is an essential problem in the world of the Semantic Web and other distributed, open-world applications. Heterogeneity occurs as a result of diversity in tools, knowledge, habits, language, interests and, usually, the level of detail. Automated applications have been developed implementing diverse alignment techniques and similarity measures, with outstanding performance. However, there are use cases where automated linking fails and the human factor must be involved in order to create, or decline, a link. In this paper we present Alignment, a collaborative, system-aided, interactive ontology matching platform. Alignment offers a user-friendly environment for matching two ontologies with the aid of configurable similarity algorithms.
(This article belongs to the Special Issue Knowledge Engineering and Semantic Web)
Open Access Feature Paper Article: Predicting Cyber-Events by Leveraging Hacker Sentiment
Information 2018, 9(11), 280; https://doi.org/10.3390/info9110280
Received: 22 August 2018 / Revised: 12 November 2018 / Accepted: 13 November 2018 / Published: 15 November 2018
Viewed by 124 | PDF Full-text (1962 KB) | HTML Full-text | XML Full-text
Abstract
Recent high-profile cyber-attacks exemplify why organizations need better cyber-defenses. Cyber-threats are hard to predict accurately because attackers usually try to mask their traces. However, they often discuss exploits and techniques on hacking forums, and the community behavior of the hackers may provide insights into the groups’ collective malicious activity. We propose a novel approach to predicting cyber-events using sentiment analysis. We test our approach using cyber-attack data from two major business organizations. We consider three types of events: malicious software installation, malicious-destination visits, and malicious emails that surmounted the target organizations’ defenses. We construct predictive signals by applying sentiment analysis to hacker forum posts to better understand hacker behavior. We analyze over 400 K posts written between January 2016 and January 2018 on over 100 hacking forums on both the surface and dark web. We find that some forums have significantly more predictive power than others. Sentiment-based models that leverage specific forums can complement state-of-the-art time-series models in forecasting cyber-attacks weeks ahead of the events.
(This article belongs to the Special Issue Darkweb Cyber Threat Intelligence Mining)
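An illustrative sketch of turning forum posts into per-forum weekly sentiment features that a downstream time-series model could consume. The VADER scorer from NLTK is a stand-in; the paper's actual sentiment model and feature set are not specified by this listing.

```python
import pandas as pd
from nltk.sentiment import SentimentIntensityAnalyzer  # needs nltk.download("vader_lexicon")

def weekly_sentiment_signal(posts: pd.DataFrame) -> pd.DataFrame:
    """posts: columns ['forum', 'timestamp', 'text'].
    Returns one negative-sentiment feature per forum and week."""
    sia = SentimentIntensityAnalyzer()
    posts = posts.assign(
        neg=posts["text"].map(lambda t: sia.polarity_scores(t)["neg"]),
        week=pd.to_datetime(posts["timestamp"]).dt.to_period("W"))
    # Forums differ in predictive power, so keep them as separate columns.
    return posts.pivot_table(index="week", columns="forum",
                             values="neg", aggfunc="mean")
```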
Open Access Article: Smart Process Optimization and Adaptive Execution with Semantic Services in Cloud Manufacturing
Information 2018, 9(11), 279; https://doi.org/10.3390/info9110279
Received: 8 October 2018 / Revised: 3 November 2018 / Accepted: 9 November 2018 / Published: 13 November 2018
Viewed by 167 | PDF Full-text (2490 KB) | HTML Full-text | XML Full-text
Abstract
A new requirement for manufacturing companies in Industry 4.0 is to be flexible with respect to changes in demand, requiring them to adjust their production capacities rapidly and efficiently. Together with the trend towards Service-Oriented Architectures (SOA), this requirement induces a need for agile collaboration among supply chain partners, as well as between different divisions or branches of the same company. To address this collaboration challenge, we propose a novel pragmatic approach for process analysis, implementation and execution. This is achieved through sets of semantic annotations of business process models encoded as BPMN 2.0 extensions. The building blocks for such manufacturing processes are the individual available services, which are also semantically annotated according to the Everything-as-a-Service (XaaS) principles and stored in a common marketplace. The optimization of such manufacturing processes combines pattern-based semantic composition of services with their non-functional aspects. This is achieved by means of Quality-of-Service (QoS)-based Constraint Optimization Problem (COP) solving, resulting in an automatic implementation of service-based manufacturing processes. The produced solution is mapped back to the BPMN 2.0 standard formalism by means of the introduced extension elements, fully detailing the enactable optimal process service plan. This approach allows enacting a process instance using just-in-time service leasing, allocation of resources and dynamic replanning in the case of failures, providing a good compromise between external visibility, control and flexibility. In this way, it offers an optimal approach to the implementation of business process models in a fully service-oriented fashion, with user-defined QoS metrics, just-in-time execution and basic dynamic repair capabilities. This paper presents the described approach and the technical architecture, and depicts an initial industrial application in the manufacturing domain of aluminum forging for bicycle hull body forming, where the advantages stemming from the main capabilities of this approach are sketched.
(This article belongs to the Special Issue Knowledge Engineering and Semantic Web)
Open Access Article: Prototyping a Traffic Light Recognition Device with Expert Knowledge
Information 2018, 9(11), 278; https://doi.org/10.3390/info9110278
Received: 27 September 2018 / Revised: 18 October 2018 / Accepted: 9 November 2018 / Published: 13 November 2018
Viewed by 109 | PDF Full-text (3002 KB) | HTML Full-text | XML Full-text
Abstract
Traffic light detection and recognition (TLR) research has grown every year. In addition, Machine Learning (ML) has been used extensively, not only in traffic light research but in every field where it is useful and possible to generalize data and automate human behavior. ML algorithms require a large amount of data to work properly and, thus, a lot of computational power is required to analyze the data. We argue that expert knowledge should be used to decrease the burden of collecting a huge amount of data for ML tasks. In this paper, we show how such knowledge was used to reduce the amount of data and improve the accuracy rate for traffic light detection and recognition; results show an improvement in the accuracy rate of around 15%. The paper also proposes a TLR device prototype using both the camera and the processing unit of a smartphone, which can be used for driver assistance. To validate this prototype layout, a dataset was built and used to test an ML model based on an adaptive background suppression filter (AdaBSF) and Support Vector Machines (SVMs). Results show a precision of 100% and a recall of 65%.
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2018))
Open Access Article: Direction of Arrival Estimation Using Augmentation of Coprime Arrays
Information 2018, 9(11), 277; https://doi.org/10.3390/info9110277
Received: 17 September 2018 / Revised: 1 November 2018 / Accepted: 6 November 2018 / Published: 9 November 2018
Viewed by 161 | PDF Full-text (855 KB) | HTML Full-text | XML Full-text
Abstract
Recently, direction of arrival (DOA) estimation premised on sparse-array interpolation approaches, such as co-prime arrays (CPA) and nested arrays, has attained extensive attention because of its effectiveness and capability of providing higher degrees of freedom (DOFs). The co-prime array interpolation approach can detect O(MN) paths with O(M + N) sensors in the array. However, the presence of missing elements (holes) in the difference coarray has limited the number of DOFs. To apply the co-prime coarray to the subspace-based DOA estimation algorithm known as multiple signal classification (MUSIC), a reshaping operation followed by a spatial smoothing technique has been presented in the literature. In this paper, an active coarray interpolation (ACI) approach is proposed to efficiently recover the covariance matrix of the augmented coarray from the original covariance matrix of the source signals with no vectorizing or spatial smoothing operations; thus, the computational complexity is reduced significantly. Moreover, numerical simulations show that the proposed ACI approach offers better performance than its counterparts.
(This article belongs to the Section Information and Communications Technology)
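A small sketch of where the coarray "holes" come from: build the physical co-prime array with O(M + N) sensors, form the difference coarray, and list the missing lags that interpolation must fill. This is pure array geometry; the ACI covariance recovery itself is beyond the snippet.

```python
import numpy as np

def coprime_positions(M, N):
    """Sensor positions of an (extended) co-prime array: one subarray at
    multiples of M, one at multiples of N, sharing the origin."""
    return np.unique(np.r_[np.arange(N) * M, np.arange(2 * M) * N])

def coarray_holes(M, N):
    pos = coprime_positions(M, N)
    diffs = np.unique((pos[:, None] - pos[None, :]).ravel())
    full = np.arange(diffs.min(), diffs.max() + 1)
    return np.setdiff1d(full, diffs)  # missing lags that limit the DOFs

print(coarray_holes(3, 5))  # the lags an interpolation scheme must recover
```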
Open Access Article: Linkage Effects Mining in Stock Market Based on Multi-Resolution Time Series Network
Information 2018, 9(11), 276; https://doi.org/10.3390/info9110276
Received: 27 September 2018 / Revised: 29 October 2018 / Accepted: 5 November 2018 / Published: 8 November 2018
Viewed by 164 | PDF Full-text (4826 KB) | HTML Full-text | XML Full-text
Abstract
Previous research on financial time-series data mainly focused on the analysis of market evolution and trends, ignoring its characteristics at different resolutions and stages. This paper discusses the evolution characteristics of the financial market at different resolutions, and presents a complex network analysis method based on the wavelet transform. The analysis method confirms the linkage effects among plates (industry sectors) in China’s stock market and finds that a plate drift phenomenon occurred before and after the stock market crash. In addition, we find two different evolutionary trends, namely W-type and M-type trends. The discovery of the plate linkage and drift phenomena is important and informative for enterprise investors building portfolio investment strategies, and plays an important role for policy makers in analyzing the evolution characteristics of the stock market.
(This article belongs to the Section Information Processes)
Open Access Article: g-Good-Neighbor Diagnosability of Arrangement Graphs under the PMC Model and MM* Model
Information 2018, 9(11), 275; https://doi.org/10.3390/info9110275
Received: 17 September 2018 / Revised: 27 October 2018 / Accepted: 5 November 2018 / Published: 7 November 2018
Viewed by 131 | PDF Full-text (377 KB) | HTML Full-text | XML Full-text
Abstract
The diagnosability of a multiprocessor system is an important research topic. Such a system and its interconnection network have an underlying topology, which is usually represented by a graph G = (V, E). In 2012, a measurement for the fault tolerance of graphs was proposed by Peng et al. This measurement, called the g-good-neighbor diagnosability, requires every fault-free node to have at least g fault-free neighbors. Under the PMC model, two adjacent nodes in G can perform tests on each other to diagnose the system. Under the MM model, a node sends the same task to two of its neighbors and then compares their responses. The MM* model is a special case of the MM model in which each node must test every pair of its adjacent nodes in the system. As a well-known topology, the (n, k)-arrangement graph A(n, k) has many good properties. In this paper, we give the g-good-neighbor diagnosability of A(n, k) under the PMC model and the MM* model.
(This article belongs to the Special Issue Graphs for Smart Communications Systems)
Open Access Article: Conversion of the English-Xhosa Dictionary for Nurses to a Linguistic Linked Data Framework
Information 2018, 9(11), 274; https://doi.org/10.3390/info9110274
Received: 15 September 2018 / Revised: 1 November 2018 / Accepted: 2 November 2018 / Published: 6 November 2018
Viewed by 207 | PDF Full-text (2555 KB) | HTML Full-text | XML Full-text
Abstract
The English-Xhosa Dictionary for Nurses (EXDN) is a bilingual, unidirectional printed dictionary in the public domain, with English and isiXhosa as the language pair. By extending the digitisation efforts of EXDN from a human-readable digital object to a machine-readable state, using Resource Description Framework (RDF) as the data model, semantically interoperable structured data can be created, thus enabling EXDN’s data to be reused, aggregated and integrated with other language resources, where it can serve as a potential aid in the development of future language resources for isiXhosa, an under-resourced language in South Africa. The methodological guidelines for the construction of a Linguistic Linked Data framework (LLDF) for a lexicographic resource, as applied to EXDN, are described, where an LLDF can be defined as a framework: (1) which describes data in RDF, (2) using a model designed for the representation of linguistic information, (3) which adheres to Linked Data principles, and (4) which supports versioning, allowing for change. The result is a bidirectional lexicographic resource, previously bounded and static, now unbounded and evolving, with the ability to extend to multilingualism.
(This article belongs to the Special Issue Towards the Multilingual Web of Data)
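A toy sketch of the RDF conversion step using rdflib with the OntoLex-Lemon vocabulary commonly used for lexicographic linked data. The namespace, URIs and the isiXhosa form shown are illustrative; the paper's exact model is not reproduced here.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

ONTOLEX = Namespace("http://www.w3.org/ns/lemon/ontolex#")
EX = Namespace("http://example.org/exdn/")  # placeholder namespace

g = Graph()
g.bind("ontolex", ONTOLEX)

entry, form, sense = EX["nurse"], EX["nurse_form"], EX["nurse_sense1"]
g.add((entry, RDF.type, ONTOLEX.LexicalEntry))
g.add((entry, ONTOLEX.canonicalForm, form))
g.add((form, ONTOLEX.writtenRep, Literal("nurse", lang="en")))
g.add((entry, ONTOLEX.sense, sense))
g.add((sense, ONTOLEX.isLexicalizedSenseOf, EX["concept_nurse"]))
# The isiXhosa entry attaches to the same sense, making the resource
# queryable in both directions (illustrative form).
g.add((EX["umongikazi"], ONTOLEX.sense, sense))
print(g.serialize(format="turtle"))
```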
Open Access Review: The Impact of Code Smells on Software Bugs: A Systematic Literature Review
Information 2018, 9(11), 273; https://doi.org/10.3390/info9110273
Received: 1 October 2018 / Revised: 30 October 2018 / Accepted: 2 November 2018 / Published: 6 November 2018
Viewed by 177 | PDF Full-text (507 KB) | HTML Full-text | XML Full-text
Abstract
Context: Code smells are associated with poor design and programming style, which often degrade code quality and hamper code comprehensibility and maintainability. Goal: identify published studies that provide evidence of the influence of code smells on the occurrence of software bugs. Method: We conducted a Systematic Literature Review (SLR) to reach the stated goal. Results: The SLR selected studies from July 2007 to September 2017 that analyzed the source code of open source software projects and several code smells. Based on evidence from the 16 studies covered in this SLR, we conclude that 24 code smells are more influential in the occurrence of bugs than the remaining smells analyzed. In contrast, three studies reported that at least 6 code smells are less influential in such occurrences. Evidence from the selected studies also points to tools, techniques, and procedures that should be applied to analyze the influence of the smells. Conclusions: To the best of our knowledge, this is the first SLR to target this goal. This study provides an up-to-date and structured understanding of the influence of code smells on the occurrence of software bugs, based on findings systematically collected from a list of relevant references from the latest decade.
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2018))
Open Access Article: Efficient Public Key Encryption with Disjunctive Keywords Search Using the New Keywords Conversion Method
Information 2018, 9(11), 272; https://doi.org/10.3390/info9110272
Received: 8 October 2018 / Revised: 28 October 2018 / Accepted: 29 October 2018 / Published: 1 November 2018
Viewed by 169 | PDF Full-text (1578 KB) | HTML Full-text | XML Full-text
Abstract
Public key encryption with disjunctive keyword search (PEDK) is a public key encryption scheme that allows disjunctive keyword search over encrypted data without decryption. This kind of scheme is crucial to cloud storage and has received a lot of attention in recent years. However, the efficiency of previous schemes is limited by the choice of a less efficient conversion method, used to map query and index keywords into a vector space model. To address this issue, we design a novel conversion approach with better performance, and give two adaptively secure PEDK schemes based on this method. The first is built on an efficient inner product encryption scheme with less searching time, and the second is constructed over composite-order bilinear groups with higher efficiency for index and trapdoor construction. Theoretical analysis and experimental results verify that our schemes are more efficient in time and space complexity, as well as more suitable for the mobile cloud setting, compared with state-of-the-art schemes.
(This article belongs to the Section Information Theory and Methodology)
Open Access Article: Remotely Monitoring Cancer-Related Fatigue Using the Smart-Phone: Results of an Observational Study
Information 2018, 9(11), 271; https://doi.org/10.3390/info9110271
Received: 30 September 2018 / Revised: 24 October 2018 / Accepted: 25 October 2018 / Published: 30 October 2018
Viewed by 239 | PDF Full-text (2543 KB) | HTML Full-text | XML Full-text
Abstract
Cancer-related fatigue is a chronic condition that may persist up to 10 years after successful cancer treatment and is one of the most prevalent problems in cancer survivors. It is a complex symptom that is not yet fully explained, and there are only a few remedies with proven evidence. Patients do not necessarily follow a treatment plan with regular follow-ups; as a consequence, physicians lack knowledge of how their patients are coping with their fatigue in daily life. To overcome this knowledge gap, we developed a smartphone-based monitoring system. The Android app we developed provides activity data from smartphone sensors and applies experience-based sampling to collect the patients’ subjective perceptions of their fatigue and of its interference with their daily life. To evaluate the monitoring system in an observational study, we recruited seven patients suffering from cancer-related fatigue and tracked them over two to three weeks. We collected around 2700 h of activity data and over 500 completed questionnaires. We analysed the average completion rate of the digital questionnaires and the wearing time of the smartphone. A within-subject analysis of the perceived fatigue, its interference and the measured physical activity yielded patient-specific fatigue and activity patterns depending on the time of day. The physical activity level correlated more strongly with the interference of fatigue than with the fatigue itself, and the variance of the acceleration correlated more strongly than absolute activity values. With this work, we provide a monitoring system for cancer-related fatigue and show in an observational study that the system is accepted by our study cohort and that it adds detail about perceived fatigue and physical activity to a weekly paper-based questionnaire.
(This article belongs to the Special Issue e-Health Pervasive Wireless Applications and Services (e-HPWAS'17))
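A small sketch of the activity features discussed: windowed mean level and variance of the acceleration magnitude, the latter being the quantity reported to correlate more strongly with fatigue interference. Sampling rate and window length are illustrative choices.

```python
import numpy as np

def activity_features(acc, fs=50, window_s=60):
    """acc: (n, 3) accelerometer samples at fs Hz.
    Returns per-window (mean level, variance) of the magnitude signal."""
    mag = np.linalg.norm(acc, axis=1)
    win = fs * window_s
    n_win = len(mag) // win
    windows = mag[:n_win * win].reshape(n_win, win)
    # The variance captures fluctuation of movement within a window,
    # not just its absolute level.
    return windows.mean(axis=1), windows.var(axis=1)
```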
Open Access Article: Modeling and Visualizing Smart City Mobility Business Ecosystems: Insights from a Case Study
Information 2018, 9(11), 270; https://doi.org/10.3390/info9110270
Received: 20 September 2018 / Revised: 16 October 2018 / Accepted: 22 October 2018 / Published: 29 October 2018
Viewed by 267 | PDF Full-text (3221 KB) | HTML Full-text | XML Full-text
Abstract
Smart mobility is a central issue in the recent discourse about urban development policy towards smart cities. The design of innovative and sustainable mobility infrastructures, as well as public policies, requires cooperation and innovation between the various stakeholders—businesses as well as policy makers—of the business ecosystems that emerge around smart city initiatives. This poses a challenge for deploying instruments and approaches for the proactive management of such business ecosystems. In this article, we report on findings from a smart city initiative we used as a case study to inform the development, implementation, and prototypical deployment of a visual analytic system (VAS). As results of our design science research, we present an agile framework to collaboratively collect, aggregate and map data about the ecosystem. The VAS and the agile framework are intended to inform and stimulate knowledge flows between ecosystem stakeholders so they can reflect on viable business and policy strategies. Agile processes and roles to collaboratively manage and adapt business ecosystem models and visualizations are defined. We further introduce basic categories for identifying, assessing and selecting Internet data sources that provide the data for ecosystem models, and we detail the ecosystem data and view models developed in our case study. Our model represents a first explication of categories for visualizing business ecosystem models in a smart city mobility context.
Open Access Article: Conceptualising and Modelling E-Recruitment Process for Enterprises through a Problem Oriented Approach
Information 2018, 9(11), 269; https://doi.org/10.3390/info9110269
Received: 20 September 2018 / Revised: 18 October 2018 / Accepted: 25 October 2018 / Published: 29 October 2018
Viewed by 182 | PDF Full-text (2071 KB) | HTML Full-text | XML Full-text
Abstract
The Internet-led labour market has become so competitive that it is forcing many organisations from different sectors to embrace e-recruitment. However, realising the value of e-recruitment from a Requirements Engineering (RE) analysis perspective is challenging. This research was motivated by the results of a failed e-recruitment project in the military domain, which was used as a case study. After reviewing the various challenges faced in that project through a number of related research domains, this research focused on two major problems: (1) the difficulty of scoping, representing, and systematically transforming recruitment problem knowledge towards an e-recruitment solution specification; and (2) the difficulty of documenting e-recruitment best practices for reuse in an enterprise recruitment environment. In this paper, a Problem-Oriented Conceptual Model (POCM) with a complementary Ontology for Recruitment Problem Definition (Onto-RPD) is proposed to contextualise the various recruitment problem viewpoints from an enterprise perspective, and to elaborate those viewpoints towards a comprehensive recruitment problem definition. POCM and Onto-RPD were developed incrementally using action research conducted on three real case studies: (1) Secureland Army Enlistment; (2) British Army Regular Enlistment; and (3) the UK Undergraduate Universities and Colleges Admissions Service (UCAS). They were later evaluated in a focus group study against a set of criteria. The study shows that POCM and Onto-RPD provide a strong foundation for representing and understanding e-recruitment problems from different perspectives.
Open Access Article: Hybrid Metaheuristics to the Automatic Selection of Features and Members of Classifier Ensembles
Information 2018, 9(11), 268; https://doi.org/10.3390/info9110268
Received: 7 September 2018 / Revised: 17 October 2018 / Accepted: 20 October 2018 / Published: 26 October 2018
Viewed by 198 | PDF Full-text (330 KB) | HTML Full-text | XML Full-text
Abstract
Metaheuristic algorithms have been applied to a wide range of global optimization problems. Basically, these techniques can be applied to problems in which a good solution must be found given imperfect or incomplete knowledge about the optimal solution. Recently, the concept of combining metaheuristics in an efficient way has emerged, in a field called hybridization of metaheuristics or, simply, hybrid metaheuristics. As a result, hybrid metaheuristics can be successfully applied to different optimization problems. In this paper, two hybrid metaheuristics, MAMH (Multiagent Metaheuristic Hybridization) and MAGMA (Multiagent Metaheuristic Architecture), are adapted for the automatic design of ensemble systems, in both mono- and multi-objective versions. To validate the feasibility of these hybrid techniques, we conducted an empirical investigation, performing a comparative analysis between them and traditional metaheuristics as well as existing ensemble generation methods. Our findings demonstrate a competitive performance of both techniques, in which a hybrid technique provided the lowest error rate for most of the analyzed objective functions.
(This article belongs to the Special Issue Multi-objective Evolutionary Feature Selection)
Open Access Article: Motivation Perspectives on Opening up Municipality Data: Does Municipality Size Matter?
Information 2018, 9(11), 267; https://doi.org/10.3390/info9110267
Received: 2 October 2018 / Revised: 17 October 2018 / Accepted: 18 October 2018 / Published: 25 October 2018
Viewed by 222 | PDF Full-text (1077 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
National governments often expect municipalities to develop toward open cities and be equally motivated to open up municipal data, yet municipalities have different characteristics influencing their motivations. This paper aims to reveal how municipality size influences municipalities’ motivation perspectives on opening up municipality data. To this end, Q-methodology is used, which is a method that is suited to objectify people’s frames of mind on a particular topic. By applying this method to 37 municipalities in the Netherlands, we elicited the motivation perspectives of three main groups of municipalities: (1) advocating municipalities, (2) careful municipalities, and (3) conservative municipalities. We found that advocating municipalities are mainly large-sized municipalities (>65,000 inhabitants) and a few small-sized municipalities (<35,000 inhabitants). Careful municipalities concern municipalities of all sizes (small, medium, and large). The conservative municipality perspective is more common among smaller-sized municipalities. Our findings do not support the statement “the smaller the municipality, the less motivated it is to open up its data”. However, the type and amount of municipality resources do influence motivations to share data or not. We provide recommendations for how open data policy makers on the national level need to support the three groups of municipalities and municipalities of different sizes in different ways to stimulate the provision of municipal data to the public as much as possible. Moreover, if national governments can identify which municipalities adhere to which motivation perspective, they can then develop more targeted open data policies that meet the requirements of the municipalities that adhere to each perspective. This should result in more open data value creation.
Open Access Feature Paper Article: ImplicPBDD: A New Approach to Extract Proper Implications Set from High-Dimension Formal Contexts Using a Binary Decision Diagram
Information 2018, 9(11), 266; https://doi.org/10.3390/info9110266
Received: 20 September 2018 / Revised: 12 October 2018 / Accepted: 22 October 2018 / Published: 25 October 2018
Viewed by 194 | PDF Full-text (1026 KB) | HTML Full-text | XML Full-text
Abstract
Formal concept analysis (FCA) is widely applied in different areas. However, in some FCA applications the volume of information that needs to be processed can become unfeasible, so the demand for new approaches and algorithms that enable processing large amounts of information is increasing substantially. This article presents a new algorithm for extracting proper implications from high-dimensional contexts. The proposed algorithm, called ImplicPBDD, is based on the PropIm algorithm and uses a data structure called a binary decision diagram (BDD) to simplify the representation of the formal context and enhance the extraction of proper implications. To analyze the performance of the ImplicPBDD algorithm, we performed tests using synthetic contexts, varying the number of objects and attributes and the context density. The experiments show that ImplicPBDD performs better—up to 80% faster—than its original algorithm, regardless of the number of attributes, objects and densities.
Open Access Article: On the Single-Parity Locally Repairable Codes with Multiple Repairable Groups
Information 2018, 9(11), 265; https://doi.org/10.3390/info9110265
Received: 25 September 2018 / Revised: 17 October 2018 / Accepted: 20 October 2018 / Published: 24 October 2018
Viewed by 219 | PDF Full-text (497 KB) | HTML Full-text | XML Full-text
Abstract
Locally repairable codes (LRCs) are a new family of erasure codes used in distributed storage systems which have attracted a great deal of interest in recent years. For an [n, k, d] linear code, if a code symbol can be repaired by t disjoint groups of other code symbols, where each group contains at most r code symbols, it is said to have availability-(r, t). Single-parity LRCs are LRCs with the constraint that each repairable group contains exactly one parity symbol. For an [n, k, d] single-parity LRC with availability-(r, t) for the information symbols, the minimum distance satisfies d ≤ n − k − ⌈kt/r⌉ + t + 1. In this paper, we focus on the study of single-parity LRCs with availability-(r, t) for the information symbols. Based on the standard form of generator matrices, we present a novel characterization of single-parity LRCs with availability t ≥ 1. A simple and straightforward proof of the Singleton-type bound is then given based on the new characterization. Some necessary conditions for optimal single-parity LRCs with availability t ≥ 1 are obtained, which might provide guidelines for optimal code constructions.
(This article belongs to the Section Information Theory and Methodology)
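For concreteness, a worked instance of the quoted Singleton-type bound under illustrative parameters (not taken from the paper):

```latex
% Singleton-type bound for single-parity LRCs with availability-(r, t),
% evaluated at the illustrative values n = 14, k = 8, r = 4, t = 2:
\[
  d \;\le\; n - k - \left\lceil \frac{kt}{r} \right\rceil + t + 1
    \;=\; 14 - 8 - \left\lceil \frac{8 \cdot 2}{4} \right\rceil + 2 + 1
    \;=\; 5 .
\]
```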
Open Access Article: Challenges and Opportunities of Named Data Networking in Vehicle-To-Everything Communication: A Review
Information 2018, 9(11), 264; https://doi.org/10.3390/info9110264
Received: 31 August 2018 / Revised: 28 September 2018 / Accepted: 19 October 2018 / Published: 23 October 2018
Viewed by 228 | PDF Full-text (1040 KB) | HTML Full-text | XML Full-text
Abstract
Many car manufacturers have recently announced plans to release autonomous self-driving cars within the next few years. Information gathered by sensors (e.g., cameras, GPS, lidar, radar, ultrasonic) enables cars to drive autonomously on roads. However, in urban or high-speed traffic scenarios the information gathered by mounted sensors may not be sufficient to guarantee a smooth and safe traffic flow, so information received from infrastructure and from other cars or vehicles on the road is vital. Key aspects of Vehicle-To-Everything (V2X) communication are security, authenticity, and integrity, which are inherently provided by Information Centric Networking (ICN). In this paper, we identify advantages and drawbacks of ICN for V2X communication. We specifically review forwarding, caching, and simulation aspects for V2X communication with a focus on ICN. Furthermore, we investigate existing solutions for V2X and discuss their applicability. Based on these investigations, we suggest directions for further work in the context of ICN (in particular, Named Data Networking) to enable V2X communication over a secure and efficient transport platform.
(This article belongs to the Special Issue Information-Centric Networking)
Open Access Article: Context-Aware Data Dissemination for ICN-Based Vehicular Ad Hoc Networks
Information 2018, 9(11), 263; https://doi.org/10.3390/info9110263
Received: 31 August 2018 / Revised: 12 October 2018 / Accepted: 17 October 2018 / Published: 23 October 2018
Viewed by 232 | PDF Full-text (6700 KB) | HTML Full-text | XML Full-text
Abstract
Information-centric networking (ICN) technology matches many major requirements of vehicular ad hoc networks (VANETs): its connectionless networking paradigm is in accord with the dynamic environments of VANETs, and it is increasingly being applied to them. However, wireless transmission of packets in VANETs using ICN mechanisms can lead to broadcast storms and channel contention, severely affecting the performance of data dissemination. At the same time, frequent topology changes, due to high driving speeds and environmental obstacles, can lead to link interruptions when too few vehicles are involved in data forwarding. Hence, balancing the number of forwarding vehicular nodes against the number of packet copies that are forwarded is essential for improving the performance of data dissemination in ICN-based VANETs. In this paper, we propose a context-aware packet-forwarding mechanism for ICN-based VANETs in which the relative geographical position of vehicles, the density and relative distribution of vehicles, and the priority of content are considered during packet forwarding. Simulation results show that the proposed mechanism can improve the performance of data dissemination in ICN-based VANETs in terms of successful data delivery ratio, packet loss rate, bandwidth usage, data response time, and traversed hops.
(This article belongs to the Special Issue Information-Centric Networking)
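A hedged sketch of a context-aware rebroadcast decision of the kind described, folding distance to the previous hop, local density, and content priority into a forwarding probability. The particular weighting is an illustrative assumption, not the paper's scheme.

```python
import random

def should_forward(dist_to_sender, radio_range, n_neighbors,
                   content_priority):
    """Return True if this vehicle should rebroadcast the packet.

    Farther receivers add more coverage; dense neighborhoods need fewer
    rebroadcasts (mitigating broadcast storms); high-priority content
    (e.g., safety messages) is forwarded more aggressively."""
    progress = min(dist_to_sender / radio_range, 1.0)
    density_damping = 1.0 / max(n_neighbors, 1)
    p = min(1.0, progress * (0.5 + 0.5 * content_priority)
                 + density_damping)
    return random.random() < p

# Example: far receiver, sparse neighborhood, safety-critical content.
print(should_forward(240, 250, 3, content_priority=1.0))
```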
Open Access Article: Improving Collaborative Filtering-Based Image Recommendation through Use of Eye Gaze Tracking
Information 2018, 9(11), 262; https://doi.org/10.3390/info9110262
Received: 15 September 2018 / Revised: 10 October 2018 / Accepted: 19 October 2018 / Published: 23 October 2018
Viewed by 158 | PDF Full-text (1638 KB) | HTML Full-text | XML Full-text
Abstract
Due to the overwhelming variety of products and services currently available on electronic commerce sites, consumers find it difficult to locate products they would prefer. Product preference is commonly influenced by the visual appearance of the image associated with the product. In this context, Recommendation Systems for products that are associated with Images (IRS) become vitally important in aiding consumers to find those products considered pleasing or useful. In general, these IRS use the Collaborative Filtering technique, which is based on users’ past behaviour. One of the principal challenges of this technique is the need for the user to supply information about their preferences; therefore, methods for obtaining such information implicitly are desirable. In this work, the author investigates to what extent information about users’ visual attention can help produce a more precise IRS. This work therefore proposes a new approach that combines the preferences provided by the user through ratings with visual attention data. The experimental results show that our approach exceeds the state of the art.
(This article belongs to the Special Issue Modern Recommender Systems: Approaches, Challenges and Applications)
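One way to picture the combination, as an illustrative sketch: fuse explicit ratings with normalized gaze-fixation time into a single preference matrix, then run the usual item-based collaborative filtering on it. The fusion weight and attention-to-rating mapping are assumptions, not the paper's tuned values.

```python
import numpy as np

def fuse_preferences(ratings, gaze_ms, alpha=0.7):
    """ratings: (users, items) explicit ratings on a 1-5 scale (0 = unrated).
    gaze_ms: (users, items) accumulated fixation time on each item's image.
    Returns a blended preference matrix on the rating scale."""
    gaze_norm = gaze_ms / (gaze_ms.max(axis=1, keepdims=True) + 1e-9)
    implicit = 1 + 4 * gaze_norm           # map attention onto 1..5
    fused = alpha * ratings + (1 - alpha) * implicit
    return np.where(ratings > 0, fused, implicit)  # gaze-only fallback

def item_similarity(pref):
    # Cosine similarity between item columns: the usual item-based CF core.
    norms = np.linalg.norm(pref, axis=0, keepdims=True) + 1e-9
    return (pref / norms).T @ (pref / norms)
```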