Information, Volume 9, Issue 11 (November 2018) – 31 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open it.
19 pages, 3140 KiB  
Article
Knowledge and Perceptions of Open Science among Researchers—A Case Study for Colombia
by Clara Inés Pardo Martínez and Alexander Cotte Poveda
Information 2018, 9(11), 292; https://doi.org/10.3390/info9110292 - 21 Nov 2018
Cited by 13 | Viewed by 4989
Abstract
Open science can provide researchers diverse opportunities to collaborate, disseminate their research results, generate important impacts in the scientific community, and engage in effective and efficient science for the benefit of society. This study seeks to analyse and evaluate researchers’ knowledge of open science in Colombia using a survey to determine adequate instruments with which to improve research in the framework of open science. The aim of the study is to determine researchers’ current awareness of open science by considering demographic characteristics to analyse their attitudes, values, and information habits as well as the levels of institutionalism and social appropriation of open science. A representative sample of Colombian researchers was selected from the National Research System. An anonymous online survey consisting of 34 questions was sent to all professors and researchers at Colombian universities and research institutes. Sampling was random and stratified, which allowed for a representative sample of different categories of researchers, and principal component analysis (PCA) was used for the sample design. A total of 1042 responses were received, with a 95% confidence level and a margin of error of 3%. The majority of respondents knew about open science, especially in relation to open science tools (software, repositories, and networks) and open data. Researchers consider open science to be positively impacted by factors such as the rise of digital technologies, the search for new forms of collaboration, the greater availability of open data and information, and public demand for better and more effective science. In contrast, a lack of resources to develop research activities within the open science approach and the limited integration between traditional and open science are identified as the most important barriers to its use in research. These results are important for building adequate open science policy in Colombia. Full article
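As a quick, hedged check of the reported figures: for a simple random sample, the worst-case margin of error at 95% confidence follows the standard formula below (the survey's stratified, PCA-based sample design is not modelled here, so this is only an approximation).

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Worst-case margin of error for an estimated proportion from a
    simple random sample of size n; z = 1.96 for 95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

# 1042 responses -> roughly the 3% margin reported in the abstract
moe = margin_of_error(1042)
```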
14 pages, 3281 KiB  
Article
A Ranking Method for User Recommendation Based on Fuzzy Preference Relations in the Nature Reserve of Dangshan Pear Germplasm Resources
by Ali Mohsin, Qiong Shen, Xinyu Wang and Xiaoming Zhang
Information 2018, 9(11), 291; https://doi.org/10.3390/info9110291 - 19 Nov 2018
Cited by 2 | Viewed by 2485
Abstract
Precision orchard management is an important avenue of investigation in agricultural technology and is an urgently needed part of information development in the fruit industry. Precision management based on a precision agricultural technology system involves many factors and results in users being unable to make accurate judgments. To improve user decision-making accuracy and the level of precision management, we used user preferences to achieve the recommendation function. In this paper, a ranking method based on fuzzy preference relations for user recommendation is proposed. We selected the Nature Reserve of Dangshan Pear Germplasm Resources as the research location and invited experts and representatives of different roles (government, farmers, and tourists) to give the fuzzy preference relation coefficients. Then, an optimization model was proposed based on the fuzzy preference relation. We solved the proposed model by constructing a Lagrangian function, and obtained the ranking values of the user preference recommendation function. Finally, we ranked the order of the given roles and implemented the fuzzy preference recommendation. The experimental results show that the proposed method is effective and can be conveniently applied to other problems related to user preference relations. Full article
(This article belongs to the Section Information Applications)
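The abstract's optimization model and Lagrangian solution are not reproduced here; purely as an illustration, the widely used row-averaging scheme derives ranking weights from a single fuzzy preference relation matrix (entries in [0, 1] with P[i][j] + P[j][i] = 1):

```python
def rank_from_fuzzy_preferences(P):
    """Priority weights from a fuzzy preference relation P using the common
    row-averaging scheme w_i = (sum_j P[i][j] + n/2 - 1) / (n*(n-1)).
    The weights sum to 1; a larger weight means a higher-ranked role."""
    n = len(P)
    return [(sum(P[i]) + n / 2 - 1) / (n * (n - 1)) for i in range(n)]
```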
16 pages, 591 KiB  
Article
Annotating a Low-Resource Language with LLOD Technology: Sumerian Morphology and Syntax
by Christian Chiarcos, Ilya Khait, Émilie Pagé-Perron, Niko Schenk, Jayanth, Christian Fäth, Julius Steuer, William Mcgrath and Jinyan Wang
Information 2018, 9(11), 290; https://doi.org/10.3390/info9110290 - 19 Nov 2018
Cited by 13 | Viewed by 6149
Abstract
This paper describes work on the morphological and syntactic annotation of Sumerian cuneiform as a model for low resource languages in general. Cuneiform texts are invaluable sources for the study of history, languages, economy, and cultures of Ancient Mesopotamia and its surrounding regions. Assyriology, the discipline dedicated to their study, has vast research potential, but lacks the modern means for computational processing and analysis. Our project, Machine Translation and Automated Analysis of Cuneiform Languages, aims to fill this gap by bringing together corpus data, lexical data, linguistic annotations and object metadata. The project’s main goal is to build a pipeline for machine translation and annotation of Sumerian Ur III administrative texts. The rich and structured data is then to be made accessible in the form of (Linguistic) Linked Open Data (LLOD), which should open them to a larger research community. Our contribution is two-fold: in terms of language technology, our work represents the first attempt to develop an integrative infrastructure for the annotation of morphology and syntax on the basis of RDF technologies and LLOD resources. With respect to Assyriology, we work towards producing the first syntactically annotated corpus of Sumerian. Full article
(This article belongs to the Special Issue Towards the Multilingual Web of Data)
16 pages, 3021 KiB  
Article
Measuring Bikeshare Access/Egress Transferring Distance and Catchment Area around Metro Stations from Smartcard Data
by Xinwei Ma, Yuchuan Jin and Mingjia He
Information 2018, 9(11), 289; https://doi.org/10.3390/info9110289 - 19 Nov 2018
Cited by 8 | Viewed by 3146
Abstract
Metro–bikeshare integration is considered a green and efficient travel model. To better develop such integration, it is necessary to monitor and analyze metro–bikeshare transfer characteristics. This paper measures access and egress transferring distances and catchment areas based on smartcard data. A cubic regression model is fitted to explore the 85th-percentile access and egress network-based transferring distance around metro stations. Then, the independent-samples t-test and one-way analysis of variance (ANOVA) are used to explore access and egress transfer characteristics across demographic groups and spatial and temporal dimensions. Additionally, the catchment area is delineated by applying both the network-based distance method and the Euclidean distance method. The results reveal that males outcompete females in both access and egress distances, and urban dwellers ride a shorter distance than those in suburban areas. Access and egress distances are both shorter in morning peak hours than in evening peak hours, and access distance on weekdays is longer than on weekends. In addition, the network-based catchment area accounts for over 90% of the Euclidean catchment area in urban areas, while most of the ratios are less than 85% in suburban areas. The paper uses data from Nanjing, China as a case study. This study serves as a scientific basis for policy makers and bikeshare companies to improve metro–bikeshare integration. Full article
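The 85th-percentile transfer distance mentioned above can be sketched with a plain linear-interpolated percentile (the paper's cubic regression model is not reproduced):

```python
def percentile(values, q):
    """Linear-interpolated q-th percentile (0-100) of a list of numbers,
    e.g. the 85th-percentile bikeshare transfer distance around a station."""
    s = sorted(values)
    k = (len(s) - 1) * q / 100
    lo, hi = int(k), min(int(k) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)
```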
18 pages, 894 KiB  
Article
A Hybrid Swarm Intelligent Neural Network Model for Customer Churn Prediction and Identifying the Influencing Factors
by Hossam Faris
Information 2018, 9(11), 288; https://doi.org/10.3390/info9110288 - 17 Nov 2018
Cited by 22 | Viewed by 4487
Abstract
Customer churn is one of the most challenging problems for telecommunication companies, because customers are the real asset of these companies. Therefore, more companies are increasing their investments in developing practical solutions that aim at predicting customer churn before it happens. Identifying which customers are about to churn significantly helps companies provide solutions to keep their customers and optimize their marketing campaigns. In this work, an intelligent hybrid model based on Particle Swarm Optimization (PSO) and a feedforward neural network is proposed for churn prediction. PSO is used to tune the weights of the input features and optimize the structure of the neural network simultaneously to increase the prediction power. In addition, the proposed model handles the imbalanced class distribution of the data using an advanced oversampling technique. Evaluation results show that the proposed model can significantly improve the coverage rate of churn customers in comparison with other state-of-the-art classifiers. Moreover, the model has high interpretability, as the assigned feature weights indicate the importance of the corresponding features in the classification process. Full article
(This article belongs to the Section Artificial Intelligence)
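As a hedged illustration of the PSO component only (not the paper's full hybrid model, which couples PSO with a feedforward network and oversampling), a minimal particle swarm minimising a toy objective shows the weight-update loop:

```python
import random

def pso(objective, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimisation: each particle is pulled toward
    its personal best and the global best; returns (best position, best value)."""
    rng = random.Random(seed)
    X = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_f = [objective(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            f = objective(X[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = X[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = X[i][:], f
    return gbest, gbest_f

# Toy objective standing in for feature-weighted classification error
best_w, best_f = pso(lambda x: sum(v * v for v in x), dim=3)
```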
14 pages, 10360 KiB  
Article
Quantifying Bicycle Network Connectivity in Lisbon Using Open Data
by Lorena Abad and Lucas Van der Meer
Information 2018, 9(11), 287; https://doi.org/10.3390/info9110287 - 17 Nov 2018
Cited by 4 | Viewed by 5214
Abstract
Stimulating non-motorized transport has been a key point on sustainable mobility agendas for cities around the world. Lisbon is no exception, as it invests in the implementation of new bike infrastructure. Quantifying the connectivity of such a bicycle network can help evaluate its current state and highlight specific challenges that should be addressed. Therefore, the aim of this study is to develop an exploratory score that quantifies bicycle network connectivity in Lisbon based on open data. For each part of the city, a score was computed based on how many common destinations (e.g., schools, universities, supermarkets, hospitals) were located within an acceptable biking distance when using only bicycle lanes and roads with low traffic stress for cyclists. Taking a weighted average of these scores resulted in an overall score for the city of Lisbon of only 8.6 out of 100 points. This shows, at a glance, that the city still has a long way to go before achieving its objectives regarding bicycle use. Full article
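The weighted-average aggregation described above is a one-liner; a minimal sketch, with the zone weighting (e.g., by population share) assumed for illustration:

```python
def city_score(zone_scores, weights):
    """Weighted average of per-zone connectivity scores (0-100).
    The weighting scheme (e.g., zone population) is an assumption here."""
    return sum(s * w for s, w in zip(zone_scores, weights)) / sum(weights)
```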
12 pages, 1004 KiB  
Article
Decentralized Transaction Mechanism Based on Smart Contract in Distributed Data Storage
by Yonggen Gu, Dingding Hou, Xiaohong Wu, Jie Tao and Yanqiong Zhang
Information 2018, 9(11), 286; https://doi.org/10.3390/info9110286 - 17 Nov 2018
Cited by 17 | Viewed by 4155
Abstract
Distributed data storage has received increasing attention due to its advantages in reliability, availability and scalability, and it brings both opportunities and challenges for distributed data storage transactions. The traditional transaction system for storage resources, which generally runs in a centralized mode, results in high cost, vendor lock-in and single-point-of-failure risk. To overcome these shortcomings, and considering a storage policy with erasure coding, in this paper we propose a decentralized transaction method for cloud storage based on a smart contract, which takes into account the resource cost of distributed data storage. First, to guarantee availability and decrease storage cost, a reverse Vickrey–Clarke–Groves (VCG) auction mechanism is proposed for storage resource selection and transaction. Then we deploy and implement the proposed mechanism by designing a corresponding smart contract. In particular, we address the problem of how to implement a VCG-like mechanism in a blockchain environment. We simulate the proposed storage transaction method on a private Ethereum chain. The simulation results show that the proposed transaction model can realize competitive trading of storage resources and ensure the safe and economic operation of resource trading. Full article
(This article belongs to the Special Issue BlockChain and Smart Contracts)
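A minimal sketch of a reverse VCG selection for k identical storage slots (the paper's mechanism additionally accounts for erasure-coding placement, which is omitted here): the k cheapest providers win, and each is paid the (k+1)-th lowest ask, which makes truthful bidding optimal.

```python
def reverse_vcg(bids, k):
    """Reverse VCG auction for k identical unit-supply storage slots.
    bids: {provider: asking price}. The k cheapest providers win and each
    is paid the (k+1)-th lowest ask. Returns (winners, payment per winner)."""
    ranked = sorted(bids, key=bids.get)
    if len(ranked) <= k:
        raise ValueError("need more than k bids to price via VCG")
    return ranked[:k], bids[ranked[k]]
```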
54 pages, 11227 KiB  
Article
Visualising Business Data: A Survey
by Richard C. Roberts and Robert S. Laramee
Information 2018, 9(11), 285; https://doi.org/10.3390/info9110285 - 17 Nov 2018
Cited by 20 | Viewed by 10542
Abstract
A rapidly increasing number of businesses rely on visualisation solutions for their data management challenges. This demand stems from an industry-wide shift towards data-driven approaches to decision making and problem-solving. However, an overwhelming mass of heterogeneous data is collected as a result, and the analysis of these data becomes a critical and challenging part of the business process. Employing visual analysis increases data comprehension, enabling a wider range of users to interpret the underlying behaviour, as opposed to relying on skilled but expensive data analysts. Widening the reach to an audience with a broader range of backgrounds creates new opportunities for decision making, problem-solving, trend identification, and creative thinking. In this survey, we identify trends in the business visualisation and visual analytics literature where visualisation is used to address data challenges, and we identify areas in which industries use visual design to develop their understanding of the business environment. Our novel classification of the literature covers the topics of business intelligence, business ecosystems, and customer-centric approaches. This survey provides a valuable overview of and insight into the business visualisation literature, with a novel classification that highlights both mature and less developed research directions. Full article
(This article belongs to the Section Information Theory and Methodology)
22 pages, 3444 KiB  
Article
Furthest-Pair-Based Decision Trees: Experimental Results on Big Data Classification
by Ahmad B. A. Hassanat
Information 2018, 9(11), 284; https://doi.org/10.3390/info9110284 - 17 Nov 2018
Cited by 25 | Viewed by 3847
Abstract
Big Data classification has recently received a great deal of attention due to the main properties of Big Data: volume, variety, and velocity. The furthest-pair-based binary search tree (FPBST) shows great potential for Big Data classification. This work attempts to improve the performance of the FPBST in terms of computation time, space consumption and accuracy. The major enhancement consists of converting the resultant BST to a decision tree, in order to remove the need for the slow K-nearest neighbors (KNN) search and to obtain a smaller tree, which reduces memory usage, speeds up both the training and testing phases and increases the classification accuracy. The proposed decision trees are based on calculating the probabilities of each class at each node using various methods; these probabilities are then used in the testing phase to classify an unseen example. The experimental results on several (small, intermediate and big) machine learning datasets show the efficiency of the proposed methods in terms of space, speed and accuracy compared to the FPBST, and suggest great potential for further enhancement of the proposed methods for use in practice. Full article
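A hedged sketch of the core idea, under simplifying assumptions (brute-force furthest-pair search, Euclidean distance, class probabilities kept only at leaves; the paper's various probability-estimation methods are not reproduced):

```python
import math

def build(points, labels, min_size=2):
    """Furthest-pair tree sketch: split on the two most distant examples;
    store class probabilities at each leaf instead of running KNN there."""
    classes = set(labels)
    if len(points) <= min_size or len(classes) == 1:
        n = len(labels)
        return {'probs': {c: labels.count(c) / n for c in classes}}
    best = (0, 1)  # brute-force furthest pair, O(n^2)
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) > math.dist(points[best[0]], points[best[1]]):
                best = (i, j)
    a, b = points[best[0]], points[best[1]]
    left = [i for i in range(len(points)) if math.dist(points[i], a) <= math.dist(points[i], b)]
    left_set = set(left)
    right = [i for i in range(len(points)) if i not in left_set]
    if not left or not right:  # degenerate split -> make a leaf
        n = len(labels)
        return {'probs': {c: labels.count(c) / n for c in classes}}
    return {'a': a, 'b': b,
            'left': build([points[i] for i in left], [labels[i] for i in left], min_size),
            'right': build([points[i] for i in right], [labels[i] for i in right], min_size)}

def classify(tree, x):
    """Descend toward the nearer of the two split examples; majority class at the leaf."""
    while 'probs' not in tree:
        tree = tree['left'] if math.dist(x, tree['a']) <= math.dist(x, tree['b']) else tree['right']
    return max(tree['probs'], key=tree['probs'].get)
```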
18 pages, 6084 KiB  
Article
Triadic Structures in Interpersonal Communication
by Mark Burgin
Information 2018, 9(11), 283; https://doi.org/10.3390/info9110283 - 16 Nov 2018
Cited by 4 | Viewed by 3653
Abstract
Communication, which is information exchange between systems, is one of the basic information processes. To better understand communication and develop more efficient communication tools, it is important to have adequate and concise, static and dynamic, structured models of communication. The principal goal of this paper is the explication of communication structures, the formation of their adequate mathematical models and the description of their dynamic interaction. Exploring communication in the context of structures and structural dynamics, we utilize the most fundamental structure in mathematics, nature and cognition, called a named set or a fundamental triad, because this structure has been useful in a variety of areas including networks and networking, physics, information theory, mathematics, logic, database theory and practice, artificial intelligence, mathematical linguistics, epistemology and the methodology of science, to mention but a few. In this paper, we apply the theory of named sets (fundamental triads) to the description and analysis of interpersonal communication. As a result, we explicate and describe various structural regularities of communication, many of which are triadic in nature, allowing more advanced and efficient organization of interpersonal communication. Full article
20 pages, 646 KiB  
Article
Neighborhood Attribute Reduction: A Multicriterion Strategy Based on Sample Selection
by Yuan Gao, Xiangjian Chen, Xibei Yang and Pingxin Wang
Information 2018, 9(11), 282; https://doi.org/10.3390/info9110282 - 16 Nov 2018
Cited by 7 | Viewed by 2213
Abstract
In the rough-set field, the objective of attribute reduction is to regulate the variations of measures by reducing redundant data attributes. However, most previous concepts of attribute reduction were designed around one and only one measure, which means that the obtained reduct may fail to meet the constraints given by other measures. In addition, the widely used heuristic algorithm for computing a reduct requires scanning all samples in the data, and the time consumption may then be too high to be acceptable if the data are too large. To alleviate these problems, a framework of attribute reduction based on multiple criteria with sample selection is proposed in this paper. Firstly, cluster centroids are derived from the data, and samples that are far away from the cluster centroids are selected; this step completes the process of sample selection for reducing data size. Secondly, multiple-criteria-based attribute reduction is designed, and the heuristic algorithm is run over the selected samples to compute reducts in terms of multiple criteria. Finally, experimental results over 12 UCI datasets show that the reducts obtained by our framework not only satisfy the constraints given by multiple criteria, but also provide better classification performance and lower time consumption. Full article
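A minimal single-centroid sketch of the sample-selection step (the paper derives multiple cluster centroids; the Euclidean metric and the count k kept are assumptions here):

```python
import math

def centroid(points):
    """Component-wise mean of a list of equally sized tuples."""
    n = len(points)
    return tuple(sum(p[d] for p in points) / n for d in range(len(points[0])))

def select_far_samples(points, k):
    """Keep the k samples furthest from the centroid (boundary-like samples),
    shrinking the data that the heuristic reduct search must scan."""
    c = centroid(points)
    return sorted(points, key=lambda p: -math.dist(p, c))[:k]
```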
20 pages, 1022 KiB  
Article
Alignment: A Hybrid, Interactive and Collaborative Ontology and Entity Matching Service
by Sotirios Karampatakis, Charalampos Bratsas, Ondřej Zamazal, Panagiotis Marios Filippidis and Ioannis Antoniou
Information 2018, 9(11), 281; https://doi.org/10.3390/info9110281 - 15 Nov 2018
Cited by 6 | Viewed by 3789
Abstract
Ontology matching is an essential problem in the world of the Semantic Web and other distributed, open-world applications. Heterogeneity occurs as a result of diversity in tools, knowledge, habits, language, interests and, usually, the level of detail. Automated applications have been developed, implementing diverse alignment techniques and similarity measures, with outstanding performance. However, there are use cases where automated linking fails and the human factor must be involved in order to create, or not create, a link. In this paper we present Alignment, a collaborative, system-aided, interactive ontology matching platform. Alignment offers a user-friendly environment for matching two ontologies with the aid of configurable similarity algorithms. Full article
(This article belongs to the Special Issue Knowledge Engineering and Semantic Web)
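As an illustration of the kind of configurable similarity algorithm such a platform can offer (the actual algorithms used by Alignment are not specified in the abstract), a normalized edit-distance label matcher:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def match_labels(src, tgt, threshold=0.8):
    """Propose candidate links between two label lists when the normalized
    edit similarity clears the threshold; in an interactive platform these
    candidates would then be accepted or rejected by the user."""
    links = []
    for s in src:
        for t in tgt:
            sim = 1 - levenshtein(s.lower(), t.lower()) / max(len(s), len(t))
            if sim >= threshold:
                links.append((s, t, round(sim, 2)))
    return links
```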
18 pages, 1962 KiB  
Article
Predicting Cyber-Events by Leveraging Hacker Sentiment
by Ashok Deb, Kristina Lerman and Emilio Ferrara
Information 2018, 9(11), 280; https://doi.org/10.3390/info9110280 - 15 Nov 2018
Cited by 30 | Viewed by 6075
Abstract
Recent high-profile cyber-attacks exemplify why organizations need better cyber-defenses. Cyber-threats are hard to accurately predict because attackers usually try to mask their traces. However, they often discuss exploits and techniques on hacking forums. The community behavior of the hackers may provide insights into the groups’ collective malicious activity. We propose a novel approach to predict cyber-events using sentiment analysis. We test our approach using cyber-attack data from two major business organizations. We consider three types of events: malicious software installation, malicious-destination visits, and malicious emails that surmounted the target organizations’ defenses. We construct predictive signals by applying sentiment analysis to hacker forum posts to better understand hacker behavior. We analyze over 400 K posts written between January 2016 and January 2018 on over 100 hacking forums both on the surface and dark web. We find that some forums have significantly more predictive power than others. Sentiment-based models that leverage specific forums can complement state-of-the-art time-series models on forecasting cyber-attacks weeks ahead of the events. Full article
(This article belongs to the Special Issue Darkweb Cyber Threat Intelligence Mining)
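A hedged sketch of a sentiment-based warning signal (the paper builds time-series models over forum posts; the per-day aggregation, window and threshold here are illustrative assumptions, not the authors' method):

```python
from statistics import mean, pstdev

def daily_signal(posts, window=7, k=2.0):
    """posts: list of (day_index, sentiment) pairs. Flags a day when the mean
    sentiment over the trailing window drops more than k standard deviations
    below the overall daily mean (a spike in negative hacker sentiment)."""
    by_day = {}
    for day, s in posts:
        by_day.setdefault(day, []).append(s)
    days = sorted(by_day)
    daily = [mean(by_day[d]) for d in days]
    mu, sd = mean(daily), pstdev(daily)
    alerts = []
    for i in range(window - 1, len(daily)):
        w = mean(daily[i - window + 1:i + 1])
        if sd and w < mu - k * sd:
            alerts.append(days[i])
    return alerts
```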
28 pages, 2490 KiB  
Article
Smart Process Optimization and Adaptive Execution with Semantic Services in Cloud Manufacturing
by Luca Mazzola, Philipp Waibel, Patrick Kaphanke and Matthias Klusch
Information 2018, 9(11), 279; https://doi.org/10.3390/info9110279 - 13 Nov 2018
Cited by 15 | Viewed by 5032
Abstract
A new requirement for manufacturing companies in Industry 4.0 is to be flexible with respect to changes in demand, requiring them to adapt their production capacities rapidly and efficiently. Together with the trend towards Service-Oriented Architectures (SOA), this requirement induces a need for agile collaboration among supply chain partners, but also between different divisions or branches of the same company. In order to address this collaboration challenge, we propose a novel pragmatic approach for process analysis, implementation and execution. This is achieved through sets of semantic annotations of business process models encoded into BPMN 2.0 extensions. The building blocks for such manufacturing processes are the individually available services, which are also semantically annotated according to the Everything-as-a-Service (XaaS) principles and stored in a common marketplace. The optimization of such manufacturing processes combines pattern-based semantic composition of services with their non-functional aspects. This is achieved by means of Quality-of-Service (QoS)-based Constraint Optimization Problem (COP) solving, resulting in an automatic implementation of service-based manufacturing processes. The produced solution is mapped back to the BPMN 2.0 standard formalism by means of the introduced extension elements, fully detailing the enactable optimal process service plan. This approach allows enacting a process instance using just-in-time service leasing, allocation of resources and dynamic replanning in the case of failures, providing the best compromise between external visibility, control and flexibility. In this way, it offers an optimal approach to the implementation of business process models, in a fully service-oriented fashion, with user-defined QoS metrics, just-in-time execution and basic dynamic repair capabilities.
This paper presents the described approach and the technical architecture, and depicts an initial industrial application in the manufacturing domain of aluminum forging for bicycle hull body forming, where the advantages stemming from the main capabilities of this approach are sketched. Full article
(This article belongs to the Special Issue Knowledge Engineering and Semantic Web)
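A minimal stand-in for the QoS-based COP step (brute force over candidate services per task, with cost and latency as assumed QoS metrics; the paper's solver and the BPMN 2.0 mapping are not reproduced):

```python
from itertools import product

def optimise_process(tasks, services, max_latency):
    """Pick one candidate service per task, minimising total cost subject to
    a total-latency bound. services: {task: [(name, cost, latency), ...]}.
    Returns (total cost, chosen service names) or None if infeasible."""
    best = None
    for combo in product(*(services[t] for t in tasks)):
        cost = sum(s[1] for s in combo)
        latency = sum(s[2] for s in combo)
        if latency <= max_latency and (best is None or cost < best[0]):
            best = (cost, [s[0] for s in combo])
    return best
```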
15 pages, 3002 KiB  
Article
Prototyping a Traffic Light Recognition Device with Expert Knowledge
by Thiago Almeida, Hendrik Macedo, Leonardo Matos and Nathanael Vasconcelos
Information 2018, 9(11), 278; https://doi.org/10.3390/info9110278 - 13 Nov 2018
Cited by 2 | Viewed by 3656
Abstract
Traffic light detection and recognition (TLR) research has grown every year. In addition, Machine Learning (ML) has been widely used not only in traffic light research but in every field where it is useful and possible to generalize data and automate human behavior. ML algorithms require a large amount of data to work properly and, thus, a lot of computational power is required to analyze the data. We argue that expert knowledge should be used to decrease the burden of collecting a huge amount of data for ML tasks. In this paper, we show how this kind of knowledge was used to reduce the amount of data and improve the accuracy rate for traffic light detection and recognition. Results show an improvement in the accuracy rate of around 15%. The paper also proposes a TLR device prototype using both the camera and processing unit of a smartphone, which can be used for driver assistance. To validate this prototype layout, a dataset was built and used to test an ML model based on an adaptive background suppression filter (AdaBSF) and Support Vector Machines (SVMs). Results show a precision rate of 100% and recall of 65%. Full article
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2018))
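For reference, the reported figures correspond to the standard precision and recall definitions (e.g., 65 true positives, 0 false positives and 35 false negatives would yield exactly these values):

```python
def precision_recall(tp, fp, fn):
    """Precision = tp/(tp+fp); recall = tp/(tp+fn); 0.0 on empty denominators."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```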
12 pages, 855 KiB  
Article
Direction of Arrival Estimation Using Augmentation of Coprime Arrays
by Tehseen Ul Hassan, Fei Gao, Babur Jalal and Sheeraz Arif
Information 2018, 9(11), 277; https://doi.org/10.3390/info9110277 - 09 Nov 2018
Cited by 1 | Viewed by 2676
Abstract
Recently, direction of arrival (DOA) estimation based on sparse array interpolation approaches, such as co-prime arrays (CPA) and nested arrays, has attained extensive attention because of its effectiveness and capability of providing higher degrees of freedom (DOFs). The co-prime array interpolation approach can detect O(MN) paths with O(M + N) sensors in the array. However, the presence of missing elements (holes) in the difference coarray has limited the number of DOFs. To apply the co-prime coarray to a subspace-based DOA estimation algorithm, namely multiple signal classification (MUSIC), a reshaping operation followed by a spatial smoothing technique has been presented in the literature. In this paper, an active coarray interpolation (ACI) approach is proposed to efficiently recover the covariance matrix of the augmented coarray from the original covariance matrix of the source signals with no vectorizing or spatial smoothing operations; thus, the computational complexity is reduced significantly. Moreover, numerical simulations show that the proposed ACI approach offers better performance compared to its counterparts. Full article
(This article belongs to the Section Information and Communications Technology)

14 pages, 4826 KiB  
Article
Linkage Effects Mining in Stock Market Based on Multi-Resolution Time Series Network
by Lingyu Xu, Huan Xu, Jie Yu and Lei Wang
Information 2018, 9(11), 276; https://doi.org/10.3390/info9110276 - 08 Nov 2018
Cited by 3 | Viewed by 2738
Abstract
Previous research on financial time-series data has mainly focused on the analysis of market evolution and trends, ignoring its characteristics at different resolutions and stages. This paper discusses the evolution characteristics of the financial market at different resolutions, and presents a method of complex network analysis based on the wavelet transform. The analysis confirms the linkage effects of sector plates in China's stock market and shows that a plate drift phenomenon occurred before and after the stock market crash. In addition, we find two different evolutionary trends, namely W-type and M-type trends. The discovery of plate linkage and drift phenomena provides a useful reference for investors building portfolio investment strategies, and plays an important role for policy makers in analyzing the evolution characteristics of the stock market. Full article
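The multi-resolution idea can be illustrated with a toy sketch: decompose each price series with a one-level Haar wavelet, then link stocks whose coarse components are strongly correlated. The data, threshold, and network construction below are illustrative assumptions, not the authors' method.

```python
from math import sqrt

# Toy multi-resolution linkage sketch (invented data and threshold, not the
# authors' method): take the one-level Haar approximation of each series,
# then link stocks whose coarse components are highly correlated.

def haar_approx(series):
    return [(series[i] + series[i + 1]) / sqrt(2)
            for i in range(0, len(series) - 1, 2)]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

prices = {
    "bank_A": [1, 2, 3, 4, 5, 6, 7, 8],
    "bank_B": [2, 3, 4, 5, 6, 7, 8, 9],   # moves with bank_A
    "tech_C": [5, 1, 4, 2, 6, 1, 5, 2],   # unrelated
}
coarse = {name: haar_approx(p) for name, p in prices.items()}
edges = [(a, b) for a in coarse for b in coarse
         if a < b and pearson(coarse[a], coarse[b]) > 0.9]
print(edges)  # only the two bank stocks are linked
```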
(This article belongs to the Section Information Processes)

17 pages, 377 KiB  
Article
g-Good-Neighbor Diagnosability of Arrangement Graphs under the PMC Model and MM* Model
by Shiying Wang and Yunxia Ren
Information 2018, 9(11), 275; https://doi.org/10.3390/info9110275 - 07 Nov 2018
Cited by 6 | Viewed by 2261
Abstract
Diagnosability of a multiprocessor system is an important research topic. Such a system and its interconnection network have an underlying topology, which is usually represented by a graph G = ( V , E ) . In 2012, a measurement for the fault tolerance of a graph was proposed by Peng et al. This measurement, called the g-good-neighbor diagnosability, requires every fault-free node to have at least g fault-free neighbors. Under the PMC model, to diagnose the system, two adjacent nodes in G can perform tests on each other. Under the MM model, to diagnose the system, a node sends the same task to two of its neighbors and then compares their responses. The MM* model is a special case of the MM model in which each node must test every pair of its adjacent nodes. As a famous topology structure, the ( n , k ) -arrangement graph A n , k has many good properties. In this paper, we give the g-good-neighbor diagnosability of A n , k under the PMC model and the MM* model. Full article
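The g-good-neighbor condition itself is easy to state in code. The toy check below uses a 6-cycle rather than an arrangement graph, and is only an illustration of the definition:

```python
# Toy illustration of the g-good-neighbor condition on a 6-cycle (not an
# arrangement graph): a fault set F is a g-good-neighbor fault set if every
# fault-free node keeps at least g fault-free neighbors after removing F.

graph = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}  # cycle C6

def is_g_good_neighbor_fault_set(faults, g):
    healthy = set(graph) - faults
    return all(len(graph[v] & healthy) >= g for v in healthy)

print(is_g_good_neighbor_fault_set({0}, 1))  # True: every healthy node keeps a healthy neighbour
print(is_g_good_neighbor_fault_set({0}, 2))  # False: nodes 1 and 5 each lose a neighbour
```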
(This article belongs to the Special Issue Graphs for Smart Communications Systems)

30 pages, 2555 KiB  
Article
Conversion of the English-Xhosa Dictionary for Nurses to a Linguistic Linked Data Framework
by Frances Gillis-Webber
Information 2018, 9(11), 274; https://doi.org/10.3390/info9110274 - 06 Nov 2018
Cited by 3 | Viewed by 4104
Abstract
The English-Xhosa Dictionary for Nurses (EXDN) is a bilingual, unidirectional printed dictionary in the public domain, with English and isiXhosa as the language pair. By extending the digitisation efforts of EXDN from a human-readable digital object to a machine-readable state, using Resource Description Framework (RDF) as the data model, semantically interoperable structured data can be created, thus enabling EXDN’s data to be reused, aggregated and integrated with other language resources, where it can serve as a potential aid in the development of future language resources for isiXhosa, an under-resourced language in South Africa. The methodological guidelines for the construction of a Linguistic Linked Data framework (LLDF) for a lexicographic resource, as applied to EXDN, are described, where an LLDF can be defined as a framework: (1) which describes data in RDF, (2) using a model designed for the representation of linguistic information, (3) which adheres to Linked Data principles, and (4) which supports versioning, allowing for change. The result is a bidirectional lexicographic resource, previously bounded and static, now unbounded and evolving, with the ability to extend to multilingualism. Full article
(This article belongs to the Special Issue Towards the Multilingual Web of Data)

22 pages, 507 KiB  
Review
The Impact of Code Smells on Software Bugs: A Systematic Literature Review
by Aloisio S. Cairo, Glauco de F. Carneiro and Miguel P. Monteiro
Information 2018, 9(11), 273; https://doi.org/10.3390/info9110273 - 06 Nov 2018
Cited by 26 | Viewed by 7657
Abstract
Context: Code smells are associated with poor design and programming style, which often degrade code quality and hamper code comprehensibility and maintainability. Goal: identify published studies that provide evidence of the influence of code smells on the occurrence of software bugs. Method: We conducted a Systematic Literature Review (SLR) to reach the stated goal. Results: The SLR selected studies from July 2007 to September 2017, which analyzed the source code of open source software projects and several code smells. Based on evidence from the 16 studies covered in this SLR, we conclude that 24 code smells are more influential in the occurrence of bugs than the remaining smells analyzed. In contrast, three studies reported that at least 6 code smells are less influential in such occurrences. Evidence from the selected studies also points to tools, techniques, and procedures that should be applied to analyze the influence of the smells. Conclusions: To the best of our knowledge, this is the first SLR to target this goal. This study provides an up-to-date and structured understanding of the influence of code smells on the occurrence of software bugs, based on findings systematically collected from relevant references over the last decade. Full article
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2018))

21 pages, 1578 KiB  
Article
Efficient Public Key Encryption with Disjunctive Keywords Search Using the New Keywords Conversion Method
by Yu Zhang, Yin Li and Yifan Wang
Information 2018, 9(11), 272; https://doi.org/10.3390/info9110272 - 01 Nov 2018
Cited by 1 | Viewed by 2941
Abstract
Public key encryption with disjunctive keyword search (PEDK) is a public key encryption scheme that allows disjunctive keyword search over encrypted data without decryption. This kind of scheme is crucial to cloud storage and has received a lot of attention in recent years. However, the efficiency of previous schemes is limited by the choice of a less efficient conversion method, which is used to map query and index keywords into a vector space model. To address this issue, we design a novel conversion approach with better performance, and give two adaptively secure PEDK schemes based on this method. The first is built on an efficient inner product encryption scheme with less searching time, and the second is constructed over composite order bilinear groups with higher efficiency in index and trapdoor construction. Theoretical analysis and experimental results verify that our schemes are more efficient in time and space complexity, as well as more suitable for the mobile cloud setting, compared with state-of-the-art schemes. Full article
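The role of the keyword-to-vector conversion can be seen in a plaintext toy: represent index and query keyword sets as characteristic vectors over a fixed dictionary, and test a disjunction with an inner product. This sketch ignores encryption entirely, and the dictionary and keywords are made up; it only shows why inner products support disjunctive queries.

```python
# Plaintext toy of the keyword-to-vector conversion (the real PEDK schemes
# operate on encrypted vectors; dictionary and keywords here are made up).
# A disjunctive query "w1 OR w2" matches an index iff their characteristic
# vectors have a non-zero inner product.

DICTIONARY = ["alice", "bob", "cloud", "secret"]

def to_vector(keywords):
    return [1 if w in keywords else 0 for w in DICTIONARY]

def disjunctive_match(index_vec, query_vec):
    return sum(i * q for i, q in zip(index_vec, query_vec)) > 0

index = to_vector({"cloud", "secret"})
print(disjunctive_match(index, to_vector({"alice", "cloud"})))  # True: "cloud" hits
print(disjunctive_match(index, to_vector({"alice", "bob"})))    # False: no keyword hits
```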
(This article belongs to the Section Information Theory and Methodology)

17 pages, 2543 KiB  
Article
Remotely Monitoring Cancer-Related Fatigue Using the Smart-Phone: Results of an Observational Study
by Vanessa Christina Klaas, Gerhard Tröster, Heinrich Walt and Josef Jenewein
Information 2018, 9(11), 271; https://doi.org/10.3390/info9110271 - 30 Oct 2018
Cited by 6 | Viewed by 3841
Abstract
Cancer-related fatigue is a chronic condition that may persist up to 10 years after successful cancer treatment and is one of the most prevalent problems in cancer survivors. It is a complex symptom that is not yet completely explained, and there are only a few remedies with proven evidence. Patients do not necessarily follow a treatment plan with regular follow-ups. As a consequence, physicians lack knowledge of how their patients cope with their fatigue in daily life. To overcome this knowledge gap, we developed a smartphone-based monitoring system. An Android app provides activity data from smartphone sensors and applies experience-based sampling to collect the patients' subjective perceptions of their fatigue and of its interference with their daily life. To evaluate the monitoring system in an observational study, we recruited seven patients suffering from cancer-related fatigue and tracked them over two to three weeks. We collected around 2700 h of activity data and over 500 completed questionnaires. We analysed the average completion rate of the digital questionnaires and the wearing time of the smartphone. A within-subject analysis of the perceived fatigue, its interference, and measured physical activity yielded patient-specific fatigue and activity patterns depending on the time of day. The physical activity level correlated more strongly with the interference of fatigue than with the fatigue itself, and the variance of the acceleration correlated more strongly than absolute activity values. With this work, we provide a monitoring system for cancer-related fatigue. We show in an observational study that the monitoring system is accepted by our study cohort and that it provides additional details about perceived fatigue and physical activity beyond a weekly paper-based questionnaire. Full article
(This article belongs to the Special Issue e-Health Pervasive Wireless Applications and Services (e-HPWAS'17))

20 pages, 3221 KiB  
Article
Modeling and Visualizing Smart City Mobility Business Ecosystems: Insights from a Case Study
by Anne Faber, Sven-Volker Rehm, Adrian Hernandez-Mendez and Florian Matthes
Information 2018, 9(11), 270; https://doi.org/10.3390/info9110270 - 29 Oct 2018
Cited by 25 | Viewed by 5757
Abstract
Smart mobility is a central issue in the recent discourse about urban development policy towards smart cities. The design of innovative and sustainable mobility infrastructures, as well as public policies, requires cooperation and innovation among the various stakeholders—businesses as well as policy makers—of the business ecosystems that emerge around smart city initiatives. This poses a challenge for deploying instruments and approaches for the proactive management of such business ecosystems. In this article, we report on findings from a smart city initiative we have used as a case study to inform the development, implementation, and prototypical deployment of a visual analytic system (VAS). As results of our design science research, we present an agile framework to collaboratively collect, aggregate, and map data about the ecosystem. The VAS and the agile framework are intended to inform and stimulate knowledge flows between ecosystem stakeholders in order to reflect on viable business and policy strategies. Agile processes and roles to collaboratively manage and adapt business ecosystem models and visualizations are defined. We further introduce basic categories for identifying, assessing, and selecting Internet data sources that provide the data for ecosystem models, and we detail the ecosystem data and view models developed in our case study. Our model represents a first explication of categories for visualizing business ecosystem models in a smart city mobility context. Full article

24 pages, 2071 KiB  
Article
Conceptualising and Modelling E-Recruitment Process for Enterprises through a Problem Oriented Approach
by Saleh Alamro, Huseyin Dogan, Deniz Cetinkaya, Nan Jiang and Keith Phalp
Information 2018, 9(11), 269; https://doi.org/10.3390/info9110269 - 29 Oct 2018
Cited by 5 | Viewed by 6615
Abstract
The Internet-led labour market has become so competitive that it is forcing many organisations from different sectors to embrace e-recruitment. However, realising the value of e-recruitment from a Requirements Engineering (RE) analysis perspective is challenging. This research was motivated by the results of a failed e-recruitment project conducted in the military domain, which was used as a case study. After reviewing the various challenges faced in that project across a number of related research domains, this research focused on two major problems: (1) the difficulty of scoping, representing, and systematically transforming recruitment problem knowledge towards an e-recruitment solution specification; and (2) the difficulty of documenting e-recruitment best practices for reuse purposes in an enterprise recruitment environment. In this paper, a Problem-Oriented Conceptual Model (POCM) with a complementary Ontology for Recruitment Problem Definition (Onto-RPD) is proposed to contextualise the various recruitment problem viewpoints from an enterprise perspective, and to elaborate those problem viewpoints towards a comprehensive recruitment problem definition. POCM and Onto-RPD are developed incrementally using action research conducted on three real case studies: (1) Secureland Army Enlistment; (2) British Army Regular Enlistment; and (3) UK Undergraduate Universities and Colleges Admissions Service (UCAS). They are later evaluated in a focus group study against a set of criteria. The study shows that POCM and Onto-RPD provide a strong foundation for representing and understanding e-recruitment problems from different perspectives. Full article

25 pages, 330 KiB  
Article
Hybrid Metaheuristics to the Automatic Selection of Features and Members of Classifier Ensembles
by Antonino A. Feitosa Neto, Anne M. P. Canuto and João C. Xavier-Junior
Information 2018, 9(11), 268; https://doi.org/10.3390/info9110268 - 26 Oct 2018
Cited by 6 | Viewed by 2256
Abstract
Metaheuristic algorithms have been applied to a wide range of global optimization problems. Basically, these techniques can be applied to problems in which a good solution must be found given imperfect or incomplete knowledge about the optimal solution. However, the concept of combining metaheuristics in an efficient way has emerged recently, in a field called hybridization of metaheuristics or, simply, hybrid metaheuristics. As a result, hybrid metaheuristics can be successfully applied to different optimization problems. In this paper, two hybrid metaheuristics, MAMH (Multiagent Metaheuristic Hybridization) and MAGMA (Multiagent Metaheuristic Architecture), are adapted to the automatic design of ensemble systems, in both mono- and multi-objective versions. To validate the feasibility of these hybrid techniques, we conducted an empirical investigation, performing a comparative analysis between them, traditional metaheuristics, and existing ensemble generation methods. Our findings demonstrate a competitive performance of both techniques, in which a hybrid technique provided the lowest error rate for most of the analyzed objective functions. Full article
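As a rough illustration of metaheuristic ensemble design, one can search over subsets of classifiers for a majority-vote ensemble. The sketch below uses a plain hill climber rather than the MAMH/MAGMA hybrids, with made-up validation data:

```python
import random

# Toy sketch of metaheuristic ensemble selection: a plain hill climber over
# subsets of classifiers for a majority-vote ensemble. This is NOT the
# MAMH/MAGMA hybrids from the paper; labels and predictions are made up.

labels = [0, 1, 1, 0, 1, 0]
classifiers = [              # each row: one classifier's predictions
    [0, 1, 1, 0, 0, 0],
    [0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 1, 0, 1, 0],
]

def ensemble_accuracy(members):
    correct = 0
    for j, y in enumerate(labels):
        votes = sum(classifiers[i][j] for i in members)
        pred = 1 if 2 * votes > len(members) else 0
        correct += int(pred == y)
    return correct / len(labels)

def hill_climb(seed=0, steps=50):
    rng = random.Random(seed)
    current = {0}
    best = ensemble_accuracy(current)
    for _ in range(steps):
        flip = rng.randrange(len(classifiers))
        candidate = current ^ {flip}        # add or drop one classifier
        if candidate and ensemble_accuracy(candidate) >= best:
            current, best = candidate, ensemble_accuracy(candidate)
    return current, best

members, score = hill_climb()
print(sorted(members), score)
```

The climber only ever accepts candidates at least as accurate as the current ensemble, so its final score cannot fall below the starting single-classifier accuracy.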
(This article belongs to the Special Issue Multi-objective Evolutionary Feature Selection)

26 pages, 1077 KiB  
Article
Motivation Perspectives on Opening up Municipality Data: Does Municipality Size Matter?
by Anneke Zuiderwijk, Cécile Volten, Maarten Kroesen and Mark Gill
Information 2018, 9(11), 267; https://doi.org/10.3390/info9110267 - 25 Oct 2018
Cited by 10 | Viewed by 4141
Abstract
National governments often expect municipalities to develop toward open cities and be equally motivated to open up municipal data, yet municipalities have different characteristics influencing their motivations. This paper aims to reveal how municipality size influences municipalities’ motivation perspectives on opening up municipality data. To this end, Q-methodology is used, a method suited to objectifying people’s frames of mind on a particular topic. By applying this method to 37 municipalities in the Netherlands, we elicited the motivation perspectives of three main groups of municipalities: (1) advocating municipalities, (2) careful municipalities, and (3) conservative municipalities. We found that advocating municipalities are mainly large municipalities (>65,000 inhabitants) and a few small municipalities (<35,000 inhabitants). Careful municipalities come in all sizes (small, medium, and large). The conservative municipality perspective is more common among smaller municipalities. Our findings do not support the statement “the smaller the municipality, the less motivated it is to open up its data”. However, the type and amount of municipal resources do influence motivations to share data or not. We provide recommendations for how open data policy makers at the national level can support the three groups of municipalities, and municipalities of different sizes, in different ways to stimulate the provision of municipal data to the public as much as possible. Moreover, if national governments can identify which municipalities adhere to which motivation perspective, they can develop more targeted open data policies that meet the requirements of the municipalities adhering to each perspective. This should result in more open data value creation. Full article

18 pages, 1026 KiB  
Article
ImplicPBDD: A New Approach to Extract Proper Implications Set from High-Dimension Formal Contexts Using a Binary Decision Diagram
by Phillip G. Santos, Pedro Henrique B. Ruas, Julio C. V. Neves, Paula R. Silva, Sérgio M. Dias, Luis E. Zárate and Mark A. J. Song
Information 2018, 9(11), 266; https://doi.org/10.3390/info9110266 - 25 Oct 2018
Cited by 2 | Viewed by 2621
Abstract
Formal concept analysis (FCA) is widely applied in different areas. However, in some FCA applications the volume of information that needs to be processed can become unfeasible to handle. Thus, the demand for new approaches and algorithms that enable processing large amounts of information is increasing substantially. This article presents a new algorithm for extracting proper implications from high-dimensional contexts. The proposed algorithm, called ImplicPBDD, is based on the PropIm algorithm and uses a data structure called a binary decision diagram (BDD) to simplify the representation of the formal context and enhance the extraction of proper implications. In order to analyze the performance of ImplicPBDD, we performed tests using synthetic contexts, varying the number of objects and attributes and the context density. The experiments show that ImplicPBDD performs better—up to 80% faster—than the original algorithm, regardless of the number of attributes, number of objects, and density. Full article
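What a proper implication asserts can be shown with a tiny formal context. The BDD machinery that makes ImplicPBDD scale is omitted here, and the context below is invented for illustration:

```python
# Tiny formal context (invented for illustration; the BDD machinery that
# makes ImplicPBDD scale is omitted). An implication P -> C holds when every
# object whose attribute set contains P also contains C.

context = {
    "obj1": {"mammal", "warm-blooded"},
    "obj2": {"bird", "warm-blooded", "flies"},
    "obj3": {"mammal", "warm-blooded", "flies"},
}

def implication_holds(premise, conclusion):
    supporting = [attrs for attrs in context.values() if premise <= attrs]
    return bool(supporting) and all(conclusion <= attrs for attrs in supporting)

print(implication_holds({"mammal"}, {"warm-blooded"}))  # True
print(implication_holds({"flies"}, {"mammal"}))         # False: obj2 flies but is a bird
```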

13 pages, 497 KiB  
Article
On the Single-Parity Locally Repairable Codes with Multiple Repairable Groups
by Yanbo Lu, Xinji Liu and Shutao Xia
Information 2018, 9(11), 265; https://doi.org/10.3390/info9110265 - 24 Oct 2018
Viewed by 2200
Abstract
Locally repairable codes (LRCs) are a new family of erasure codes used in distributed storage systems which have attracted a great deal of interest in recent years. For an [ n , k , d ] linear code, if a code symbol can be repaired by t disjoint groups of other code symbols, where each group contains at most r code symbols, it is said to have availability-( r , t ). Single-parity LRCs are LRCs with the constraint that each repairable group contains exactly one parity symbol. For an [ n , k , d ] single-parity LRC with availability-( r , t ) for the information symbols, the minimum distance satisfies d ≤ n − k − ⌈kt/r⌉ + t + 1. In this paper, we focus on the study of single-parity LRCs with availability-( r , t ) for the information symbols. Based on the standard form of generator matrices, we present a novel characterization of single-parity LRCs with availability t ≥ 1. Then, a simple and straightforward proof of the Singleton-type bound is given based on the new characterization. Some necessary conditions for optimal single-parity LRCs with availability t ≥ 1 are obtained, which might provide guidelines for optimal code constructions. Full article
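The Singleton-type bound d ≤ n − k − ⌈kt/r⌉ + t + 1 for single-parity LRCs with availability-(r, t) is easy to evaluate numerically; the parameters below are illustrative only:

```python
from math import ceil

# Numeric sketch of the Singleton-type bound for single-parity LRCs with
# availability-(r, t) for the information symbols; parameters are
# illustrative, not taken from the paper.

def distance_bound(n, k, r, t):
    # d <= n - k - ceil(k * t / r) + t + 1
    return n - k - ceil(k * t / r) + t + 1

print(distance_bound(12, 6, 3, 2))  # -> 12 - 6 - 4 + 2 + 1 = 5
```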
(This article belongs to the Section Information Theory and Methodology)

19 pages, 1040 KiB  
Article
Challenges and Opportunities of Named Data Networking in Vehicle-To-Everything Communication: A Review
by Benjamin Rainer and Stefan Petscharnig
Information 2018, 9(11), 264; https://doi.org/10.3390/info9110264 - 23 Oct 2018
Cited by 14 | Viewed by 3434
Abstract
Many car manufacturers have recently proposed to release autonomous self-driving cars within the next few years. Information gathered by sensors (e.g., cameras, GPS, lidar, radar, ultrasonic) enables cars to drive autonomously on roads. However, in urban or high-speed traffic scenarios, the information gathered by mounted sensors may not be sufficient to guarantee a smooth and safe traffic flow. Thus, information received from infrastructure and other cars or vehicles on the road is vital. Key aspects of Vehicle-To-Everything (V2X) communication are security, authenticity, and integrity, which are inherently provided by Information Centric Networking (ICN). In this paper, we identify advantages and drawbacks of ICN for V2X communication. We specifically review forwarding, caching, as well as simulation aspects of V2X communication with a focus on ICN. Furthermore, we investigate existing solutions for V2X and discuss their applicability. Based on these investigations, we suggest directions for further work in the context of ICN (in particular, Named Data Networking) to enable V2X communication providing a secure and efficient transport platform. Full article
(This article belongs to the Special Issue Information-Centric Networking)

20 pages, 6700 KiB  
Article
Context-Aware Data Dissemination for ICN-Based Vehicular Ad Hoc Networks
by Yuhong Li, Xinyue Shi, Anders Lindgren, Zhuo Hu, Peng Zhang, Di Jin and Yingchao Zhou
Information 2018, 9(11), 263; https://doi.org/10.3390/info9110263 - 23 Oct 2018
Cited by 16 | Viewed by 3180
Abstract
Information-centric networking (ICN) technology matches many major requirements of vehicular ad hoc networks (VANETs): its connectionless networking paradigm suits the dynamic environments of VANETs, and it is increasingly being applied to them. However, wireless transmission of packets in VANETs using ICN mechanisms can lead to broadcast storms and channel contention, severely affecting the performance of data dissemination. At the same time, frequent topology changes due to high driving speeds and environmental obstacles can also lead to link interruptions when too few vehicles are involved in data forwarding. Hence, balancing the number of forwarding vehicular nodes and the number of packet copies that are forwarded is essential for improving the performance of data dissemination in ICN-based VANETs. In this paper, we propose a context-aware packet-forwarding mechanism for ICN-based VANETs. The relative geographical position of vehicles, the density and relative distribution of vehicles, and the priority of content are considered during packet forwarding. Simulation results show that the proposed mechanism can improve the performance of data dissemination in ICN-based VANETs in terms of successful data delivery ratio, packet loss rate, bandwidth usage, data response time, and traversed hops. Full article
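A common context-aware forwarding heuristic, given here as an assumption for illustration rather than the authors' exact scheme, delays rebroadcast inversely with distance from the previous sender, so the farthest receiver forwards first and suppresses the others:

```python
# Toy sketch of a distance-based forwarding timer, assumed for illustration
# (not the authors' exact mechanism): vehicles farther from the previous
# sender wait less before rebroadcasting, so the farthest forwarder wins
# and the shorter-progress copies are suppressed.

MAX_RANGE = 300.0   # assumed radio range in metres
MAX_WAIT = 0.01     # assumed maximum holdoff in seconds

def rebroadcast_delay(distance_to_sender):
    progress = min(distance_to_sender, MAX_RANGE) / MAX_RANGE
    return MAX_WAIT * (1.0 - progress)

delays = {d: rebroadcast_delay(d) for d in (50, 150, 290)}
winner = min(delays, key=delays.get)
print(winner)  # -> 290: the farthest vehicle forwards first
```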
(This article belongs to the Special Issue Information-Centric Networking)
