
Table of Contents

Information, Volume 9, Issue 7 (July 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
Open Access Article LOD for Data Warehouses: Managing the Ecosystem Co-Evolution
Information 2018, 9(7), 174; https://doi.org/10.3390/info9070174
Received: 9 June 2018 / Revised: 9 July 2018 / Accepted: 11 July 2018 / Published: 17 July 2018
PDF Full-text (1263 KB) | HTML Full-text | XML Full-text
Abstract
For more than 30 years, data warehouses (DWs) have attracted particular interest both in practice and in research. This success is explained by their ability to adapt to their evolving environment. One of the latest challenges for DWs is their ability to open their frontiers to external data sources in addition to internal sources. The development of linked open data (LOD) as external sources is an excellent opportunity to create added value and enrich the analytical capabilities of DWs. However, the incorporation of LOD in the DW must be accompanied by careful management. In this paper, we are interested in managing the evolution of DW systems integrating internal and external LOD datasets. The particularity of LOD is that they contribute to evolving the DW at several levels: (i) the source level, (ii) the DW schema level, and (iii) the DW design-cycle constructs. In this context, we have to ensure this co-evolution, as conventional evolution approaches are adapted neither to this new kind of source nor to the semantic constructs underlying LOD sources. One way of tackling this co-evolution issue is to ensure the traceability of DW constructs throughout the whole design cycle. Our approach is tested using the LUBM (Lehigh University Benchmark), different LOD datasets (DBpedia, YAGO, etc.), and the Oracle 12c database management system (DBMS) used for the DW deployment. Full article
(This article belongs to the Special Issue Semantics for Big Data Integration)

Open Access Article Adaptive Multiswarm Comprehensive Learning Particle Swarm Optimization
Information 2018, 9(7), 173; https://doi.org/10.3390/info9070173
Received: 21 June 2018 / Revised: 12 July 2018 / Accepted: 13 July 2018 / Published: 15 July 2018
PDF Full-text (2077 KB) | HTML Full-text | XML Full-text
Abstract
Multiswarm comprehensive learning particle swarm optimization (MSCLPSO) is a multiobjective metaheuristic recently proposed by the authors. MSCLPSO uses multiple swarms of particles and externally stores elitists that are nondominated solutions found so far. MSCLPSO can approximate the true Pareto front in one single run; however, it requires a large number of generations to converge, because each swarm only optimizes the associated objective and does not learn from any search experience outside the swarm. In this paper, we propose an adaptive particle velocity update strategy for MSCLPSO to improve the search efficiency. Based on whether the elitists are indifferent or complex on each dimension, each particle adaptively determines whether to just learn from some particle in the same swarm, or additionally from the difference of some pair of elitists for the velocity update on that dimension, trying to achieve a tradeoff between optimizing the associated objective and exploring diverse regions of the Pareto set. Experimental results on various two-objective and three-objective benchmark optimization problems with different dimensional complexity characteristics demonstrate that the adaptive particle velocity update strategy improves the search performance of MSCLPSO significantly and is able to help MSCLPSO locate the true Pareto front more quickly and obtain better distributed nondominated solutions over the entire Pareto front. Full article
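
As a rough illustration of the dimension-wise adaptive update described above, the sketch below shows how a particle might either learn only from an exemplar in its own swarm or additionally from the difference of a pair of archived elitists. The function name, coefficient values, and the adaptivity flag are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def velocity_update_dim(v, x, exemplar, e1, e2, use_elitist_diff,
                        w=0.7298, c=1.49445):
    """Per-dimension velocity update (hypothetical sketch of the adaptive
    MSCLPSO strategy).

    v, x             -- velocity and position on this dimension
    exemplar         -- comprehensive-learning exemplar value (same swarm)
    e1, e2           -- values of a random pair of archived elitists
    use_elitist_diff -- True when the elitists look "complex" on this
                        dimension, so the elitist difference is also learned
    """
    v_new = w * v + c * rng.random() * (exemplar - x)
    if use_elitist_diff:
        v_new += rng.random() * (e1 - e2)  # explore diverse Pareto regions
    return v_new
```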

Open Access Article Near-Extremal Type I Self-Dual Codes with Minimal Shadow over GF(2) and GF(4)
Information 2018, 9(7), 172; https://doi.org/10.3390/info9070172
Received: 23 June 2018 / Revised: 11 July 2018 / Accepted: 11 July 2018 / Published: 13 July 2018
PDF Full-text (281 KB) | HTML Full-text | XML Full-text
Abstract
Binary self-dual codes and additive self-dual codes over GF(4) contain common points. Both have Type I codes and Type II codes, as well as shadow codes. In this paper, we provide a comprehensive description of extremal and near-extremal Type I codes over GF(2) and GF(4) with minimal shadow. In particular, we prove that there is no near-extremal Type I [24m, 12m, 2m + 2] binary self-dual code with minimal shadow if m ≥ 323, and we prove that there is no near-extremal Type I (6m + 1, 2^(6m+1), 2m + 1) additive self-dual code over GF(4) with minimal shadow if m ≥ 22. Full article
(This article belongs to the Section Information Theory and Methodology)
Open Access Editorial Special Issue on Selected Papers from IVAPP 2018
Information 2018, 9(7), 171; https://doi.org/10.3390/info9070171
Received: 12 July 2018 / Accepted: 12 July 2018 / Published: 13 July 2018
PDF Full-text (143 KB) | HTML Full-text | XML Full-text
Abstract
Recent developments at the crossroads of data science, data mining, machine learning, and the graphics and imaging sciences have further established information visualization and visual analytics as central disciplines that deliver methods, techniques, and tools for making sense of and extracting actionable insights and results from large amounts of complex, multidimensional, hybrid, and time-dependent data. [...] Full article
(This article belongs to the Special Issue Selected Papers from IVAPP 2018)
Open Access Article A Deploying Method for Predicting the Size and Optimizing the Location of Electric Vehicle Charging Stations
Information 2018, 9(7), 170; https://doi.org/10.3390/info9070170
Received: 10 June 2018 / Revised: 11 July 2018 / Accepted: 11 July 2018 / Published: 13 July 2018
PDF Full-text (2479 KB) | HTML Full-text | XML Full-text
Abstract
With the depletion of oil resources and the aggravation of environmental pollution, electric vehicles have a promising future and are becoming more popular as the main force of new-energy consumption; they have attracted growing attention from various countries. The sizing and location problem for charging stations has been a hot topic of global research, and these issues are important for government planning for electric vehicles. In this paper, we first built a Bass diffusion model to predict the total number of electric vehicles and calculate the required size of charging stations in the coming years. Moreover, we also developed a queuing model to optimize the location of charging stations and solved this problem using the exhaustion method, with minimum cost as the objective function. After that, the model was tested using data from a city in China. The results show that the model in this paper is good at predicting the number of electric vehicles in the coming years and calculating the size of charging stations. At the same time, it can also optimize the distribution of charging stations and make it more balanced. Thus, this model can help the government plan the development of electric vehicles in the future. Full article
(This article belongs to the Section Information Applications)
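
For readers unfamiliar with the Bass diffusion model mentioned above, a minimal sketch of its closed-form cumulative-adoption curve is shown below; the parameter values are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def bass_cumulative(t, m, p, q):
    """Cumulative adopters N(t) of the Bass diffusion model, where
    m is the market potential (eventual number of electric vehicles),
    p the coefficient of innovation, and q the coefficient of imitation."""
    e = np.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

# Illustrative forecast of an EV fleet over ten years (made-up parameters).
years = np.arange(1, 11)
print(np.round(bass_cumulative(years, m=100_000, p=0.03, q=0.38)))
```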

Open Access Article Aerial-Image Denoising Based on Convolutional Neural Network with Multi-Scale Residual Learning Approach
Information 2018, 9(7), 169; https://doi.org/10.3390/info9070169
Received: 13 May 2018 / Revised: 24 June 2018 / Accepted: 3 July 2018 / Published: 9 July 2018
PDF Full-text (8560 KB) | HTML Full-text | XML Full-text
Abstract
Aerial images are subject to various types of noise, which restricts the recognition and analysis of images, target monitoring, and search services. At present, deep learning is successful in image recognition. However, traditional convolutional neural networks (CNNs) extract the main features of an image to predict directly and are limited by the requirements on the training sample size (i.e., they do not perform well enough on small datasets). In this paper, using a small sample size, we propose an aerial-image denoising recognition model based on CNNs with a multi-scale residual learning approach. The proposed model has the following three advantages: (1) Instead of directly learning latent clean images, the proposed model learns the noise from noisy images and then subtracts the learned residual from the noisy images to obtain reconstructed (denoised) images; (2) The developed image denoising recognition model works well with small training datasets; (3) We use multi-scale residual learning as the learning approach, and dropout is introduced into the model architecture to force the network to generalize well. Our experimental results on aerial-image denoising recognition reveal that the proposed approach is highly superior to the other state-of-the-art methods. Full article
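
The residual-learning idea in advantage (1) boils down to predicting the noise map and subtracting it. A minimal PyTorch-style sketch is given below, assuming a trained network `model` that maps a noisy image tensor to its estimated noise; the paper's multi-scale architecture is not reproduced here.

```python
import torch

def denoise(model, noisy):
    """Residual-learning inference (sketch): the CNN predicts the noise,
    and the denoised image is the noisy input minus that prediction."""
    with torch.no_grad():
        residual = model(noisy)   # learned noise estimate
    return noisy - residual       # reconstructed (denoised) image

# Training would fit the residual rather than the clean image, e.g.:
# loss = torch.nn.functional.mse_loss(model(noisy), noisy - clean)
```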

Open Access Article Fundamentals of Natural Representation
Information 2018, 9(7), 168; https://doi.org/10.3390/info9070168
Received: 5 April 2018 / Revised: 7 June 2018 / Accepted: 4 July 2018 / Published: 9 July 2018
PDF Full-text (549 KB) | HTML Full-text | XML Full-text
Abstract
Our understanding of the natural universe is far from comprehensive. The following questions bring to the fore some of the fundamental issues. Is there a reality of information associated with the states of matter based entirely on natural causation? If so, then what constitutes the mechanism of information exchange (processing) at each interaction of physical entities? Let the association of information with a state of matter be referred to as the representation of the semantic value expressed by the information. We ask, can the semantic value be quantified, described, and operated upon with symbols, as mathematical symbols describe the material world? In this work, these questions are dealt with substantively to establish the fundamental principles of the mechanisms of representation and propagation of information with every physical interaction. A quantitative method of information processing is derived from first principles to show how high-level structured and abstract semantics may arise via physical interactions alone, without the need for an intelligent interpreter. It is further shown that natural representation constitutes a basis for the description, and therefore for the comprehension, of all natural phenomena, creating a more holistic view of nature. A brief discussion underscores natural information processing as the foundation for the genesis of language and mathematics. In addition to the derivation of the theoretical basis from established observations, the method of information processing is further demonstrated by a computer simulation. Full article

Open Access Article An Improved Genetic Algorithm with a New Initialization Mechanism Based on Regression Techniques
Information 2018, 9(7), 167; https://doi.org/10.3390/info9070167
Received: 8 May 2018 / Revised: 18 June 2018 / Accepted: 4 July 2018 / Published: 7 July 2018
PDF Full-text (7786 KB) | HTML Full-text | XML Full-text
Abstract
The genetic algorithm (GA) is one of the well-known techniques from the area of evolutionary computation and plays a significant role in obtaining meaningful solutions to complex problems with large search spaces. GAs involve three fundamental operations after creating an initial population, namely selection, crossover, and mutation. The first task in GAs is to create an appropriate initial population. Traditionally, randomly generated populations are widely used because they are simple and efficient; however, the generated individuals may have poor fitness. Low-quality individuals may cause the GA to take a long time to converge to an optimal (or near-optimal) solution. Therefore, the fitness or quality of the initial population plays a significant role in determining an optimal or near-optimal solution. In this work, we propose a new method for initial population seeding based on linear regression analysis of the problem tackled by the GA; in this paper, the traveling salesman problem (TSP). The proposed regression-based technique divides a given large-scale TSP problem into smaller sub-problems, using the regression line and its perpendicular line to cluster the cities into four sub-problems repeatedly; the location of each city determines which cluster it belongs to. The algorithm recurses until a sub-problem becomes very small (four cities or fewer, for instance). Since these cities are likely to neighbor each other, connecting them yields a reasonably good starting solution, which is then mutated several times to form the initial population (see the sketch below). We analyze the performance of the GA when using traditional population seeding techniques, such as random and nearest-neighbor seeding, along with the proposed regression-based technique. The experiments are carried out using some of the well-known TSP instances obtained from the TSPLIB, the standard library for TSP problems. Quantitative analysis is carried out using the statistical test tools analysis of variance (ANOVA), Duncan's multiple range test (DMRT), and least significant difference (LSD). The experimental results show that the GA using the proposed regression-based seeding technique outperforms GAs using traditional seeding techniques, such as random and nearest-neighbor-based techniques, in terms of error rate and average convergence. Full article
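
A minimal sketch of the regression-based split is given below, assuming 2-D city coordinates; the function name and tie handling are illustrative, and the recursive application and mutation steps are omitted.

```python
import numpy as np

def split_cities(cities):
    """Split an (n, 2) array of city coordinates into four clusters using
    the least-squares regression line and its perpendicular through the
    centroid (sketch of the seeding idea, applied recursively in the paper)."""
    x, y = cities[:, 0], cities[:, 1]
    a, b = np.polyfit(x, y, 1)                 # regression line y = a*x + b
    cx, cy = x.mean(), y.mean()
    side1 = y - (a * x + b) >= 0               # above/below the line
    if a != 0:                                 # perpendicular slope is -1/a
        side2 = (y - cy) >= -1.0 / a * (x - cx)
    else:                                      # horizontal line: split vertically
        side2 = x >= cx
    return [cities[side1 & side2], cities[side1 & ~side2],
            cities[~side1 & side2], cities[~side1 & ~side2]]
```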

Open Access Article Analysis of Conversation Competencies in Strategic Alignment between Business Areas (External Control) and Information Technology Areas in a Control Body
Information 2018, 9(7), 166; https://doi.org/10.3390/info9070166
Received: 13 June 2018 / Revised: 4 July 2018 / Accepted: 4 July 2018 / Published: 7 July 2018
PDF Full-text (320 KB) | HTML Full-text | XML Full-text
Abstract
The process of governance in the domain of Information and Communication Technologies (ICT) has been the subject of many studies in recent years, especially as regards the strategic alignment between the business and ICT areas. However, only a handful of those studies focused on the relationships that exist between these areas, specifically the conversation competencies that so strongly influence their alignment. This study sought to investigate and analyze the gaps that exist in such conversation competencies, as found in a Brazilian Control Body, according to the perceptions of the officers in the business and ICT areas. The survey tool used here was a questionnaire, sent to all the officers of the Body's areas, whose construction was based on the conversation competencies. It was found that there were 28 gaps in the conversation competencies of the Brazilian Control Body that may be developed to improve the alignment of the business and ICT areas. As regards paths for future work, a recommendation is made for the creation of a research tool that allows verifying the percentage of alignment that exists between ICT services and the business requirements. Full article

Open Access Article Long-Short-Term Memory Network Based Hybrid Model for Short-Term Electrical Load Forecasting
Information 2018, 9(7), 165; https://doi.org/10.3390/info9070165
Received: 11 June 2018 / Revised: 3 July 2018 / Accepted: 4 July 2018 / Published: 7 July 2018
PDF Full-text (2842 KB) | HTML Full-text | XML Full-text
Abstract
Short-term electrical load forecasting is of great significance to the safe operation, efficient management, and reasonable scheduling of the power grid. However, the electrical load can be affected by different kinds of external disturbances; thus, there are high levels of uncertainty in the electrical load time-series data. As a result, obtaining accurate forecasts of the short-term electrical load is a challenging task. In order to further improve the forecasting accuracy, this study combines the data-driven long short-term memory network (LSTM) and the extreme learning machine (ELM) to present a hybrid-model-based forecasting method for the prediction of short-term electrical loads. In this hybrid model, the LSTM is adopted to extract the deep features of the electrical load while the ELM is used to model the shallow patterns. In order to generate the final forecasting result, the predicted results of the LSTM and ELM are ensembled by the linear regression method. Finally, the proposed method is applied to two real-world electrical load forecasting problems, and detailed experiments are conducted. In order to verify the superiority and advantages of the proposed hybrid model, it is compared with the LSTM model, the ELM model, and support vector regression (SVR). Experimental and comparison results demonstrate that the proposed hybrid model gives satisfactory performance and achieves much better performance than the comparative methods in this short-term electrical load forecasting application. Full article
(This article belongs to the Section Information Applications)
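
The ensemble step, combining the two models' outputs by linear regression, could look like the sketch below; `lstm_pred`, `elm_pred`, and the validation targets are placeholder names, and the LSTM and ELM models themselves are not shown.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_ensemble(lstm_pred, elm_pred, y_true):
    """Learn linear weights for combining the LSTM and ELM forecasts
    on a held-out validation window (sketch)."""
    X = np.column_stack([lstm_pred, elm_pred])
    return LinearRegression().fit(X, y_true)

def ensemble_forecast(reg, lstm_pred, elm_pred):
    """Final load forecast as the learned linear combination."""
    return reg.predict(np.column_stack([lstm_pred, elm_pred]))
```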

Open Access Article Linking Open Descriptions of Social Events (LODSE): A New Ontology for Social Event Classification
Information 2018, 9(7), 164; https://doi.org/10.3390/info9070164
Received: 4 June 2018 / Revised: 23 June 2018 / Accepted: 27 June 2018 / Published: 4 July 2018
PDF Full-text (1534 KB) | HTML Full-text | XML Full-text
Abstract
The digital era has brought a number of significant changes in the world of communications. Although technological evolution has allowed the creation of new social event platforms to disclose events, it is still difficult to know what is happening around a location. Currently, a large number of social events are created and promoted on social networks. With the massive quantity of information created in these systems, finding an event is challenging because the data is sometimes ambiguous or incomplete. One of the main challenges in social event classification is related to the incompleteness and ambiguity of metadata created by users. This paper presents a new ontology, named LODSE (Linking Open Descriptions of Social Events), based on the LODE (Linking Open Descriptions of Events) ontology, to describe the domain model of social events. The aim of this ontology is to create a data model that allows defining the most important properties for describing a social event and to improve the classification of events. The proposed data model is used in an experimental evaluation to compare both ontologies in social event classification. The experimental evaluation, using a dataset based on real data from a popular social network, demonstrated that the data model based on the LODSE ontology brings several benefits to the classification of events. Using the LODSE ontology, the results show an increase in correctly classified events as well as a gain in execution time, compared with the data model based on the LODE ontology. Full article

Open Access Article A Top-Down Interactive Visual Analysis Approach for Physical Simulation Ensembles at Different Aggregation Levels
Information 2018, 9(7), 163; https://doi.org/10.3390/info9070163
Received: 30 April 2018 / Revised: 13 June 2018 / Accepted: 27 June 2018 / Published: 3 July 2018
PDF Full-text (5493 KB) | HTML Full-text | XML Full-text
Abstract
Physical simulations aim at modeling and computing spatio-temporal phenomena. As the simulations depend on initial conditions and/or parameter settings whose impact is to be investigated, a larger number of simulation runs is commonly executed. Analyzing all facets of such multi-run multi-field spatio-temporal simulation data poses a challenge for visualization. It requires the design of different visual encodings that aggregate information in multiple ways and at multiple abstraction levels. We present a top-down interactive visual analysis tool of multi-run data from physical simulations named MultiVisA that is based on plots at different aggregation levels. The most aggregated visual representation is a histogram-based plot that allows for the investigation of the distribution of function values within all simulation runs. When expanding over time, a density-based time-series plot allows for the detection of temporal patterns and outliers within the ensemble of multiple runs for single and multiple fields. Finally, not aggregating over runs in a similarity-based plot allows for the comparison of multiple or individual runs and their behavior over time. Coordinated views allow for linking the plots of the three aggregation levels to spatial visualizations in physical space. We apply MultiVisA to physical simulations from the field of climate research and astrophysics. We document the analysis process, demonstrate its effectiveness, and provide evaluations involving domain experts. Full article
(This article belongs to the Special Issue Selected Papers from IVAPP 2018)

Open Access Article A Green Supplier Assessment Method for Manufacturing Enterprises Based on Rough ANP and Evidence Theory
Information 2018, 9(7), 162; https://doi.org/10.3390/info9070162
Received: 10 May 2018 / Revised: 25 June 2018 / Accepted: 28 June 2018 / Published: 2 July 2018
PDF Full-text (892 KB) | HTML Full-text | XML Full-text
Abstract
Within the context of increasingly serious global environmental problems, green supplier assessment has become one of the key links in modern green supply chain management. In the actual work of green supplier assessment, the information on potential suppliers is often ambiguous or even absent, and there are interrelationships and feedback-like effects among assessment indexes. Additionally, the thinking of experts in judging index importance is always ambiguous and subjective. To handle the uncertainty and incompleteness in green supplier assessment, we propose a green supplier assessment method based on rough ANP and evidence theory. Uncertain index values are processed by membership degree. Trapezoidal fuzzy numbers are adopted to express experts' judgments on the relative importance of the indexes, and rough boundary intervals are used to integrate the judgment opinions of multiple experts. The ANP structure is built to deal with the interrelationships and feedback-like effects among indexes. Then, the index weights are calculated by the ANP method. Finally, the green suppliers are assessed by a trust interval, based on evidence theory. The feasibility and effectiveness of the proposed method are verified by an application to a bearing cage supplier assessment. Full article

Open Access Article AAC Double Compression Audio Detection Algorithm Based on the Difference of Scale Factor
Information 2018, 9(7), 161; https://doi.org/10.3390/info9070161
Received: 24 May 2018 / Revised: 22 June 2018 / Accepted: 28 June 2018 / Published: 2 July 2018
PDF Full-text (3462 KB) | HTML Full-text | XML Full-text
Abstract
Audio double compression detection is an important part of audio forensics; it is of great significance for judging whether audio has been falsified or forged. This study found that the advanced audio coding (AAC) scale factor gradually decreases as the number of compressions increases. Based on this, we propose an AAC double compression audio detection algorithm based on the statistical characteristics of the scale factor difference before and after audio re-compression. The experimental results show that the algorithm can accurately classify double compressed AAC audio. The average accuracy of classifying AAC audio transcoded from a low bit rate to a high bit rate is 99.91%, and the accuracy for re-compression at the same bit rate is 97.98%. In addition, experiments with different durations, different noises, and different encoders also proved the good performance of this algorithm. Full article

Open Access Article Using the Logistic Coupled Map for Public Key Cryptography under a Distributed Dynamics Encryption Scheme
Information 2018, 9(7), 160; https://doi.org/10.3390/info9070160
Received: 14 May 2018 / Revised: 26 June 2018 / Accepted: 29 June 2018 / Published: 2 July 2018
PDF Full-text (5106 KB) | HTML Full-text | XML Full-text
Abstract
Nowadays, there is a high necessity to create new and robust cryptosystems. Dynamical systems are promising for developing cryptosystems due to the close relationship between them and cryptographic requirements. Distributed dynamic encryption (DDE) represents the first mathematical method to generate a public-key cryptosystem based on chaotic dynamics. However, it has been described that the DDE proposal has a weak point in the decryption process related to efficiency and practicality. In this work, we adapted the DDE to a low-dimensional chaotic system to evaluate the weakness and security of the adaptation in a realistic example. Specifically, we used a non-symmetric logistic coupled map, which is known to have multiple chaotic attractors, improving on the shortcomings of the simple logistic map that make it inadequate for cryptographic applications. We found a full implementation with acceptable computational cost and speed for DDE, which is essential because it satisfies a key cryptographic requirement for chaos-based cryptosystems. Full article
(This article belongs to the Section Information Theory and Methodology)
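
A non-symmetric logistic coupled map of the kind referred to above can be iterated as in the sketch below; the map form, parameters, and coupling strengths are illustrative assumptions rather than the paper's exact system.

```python
def coupled_logistic_step(x, y, r1=3.99, r2=3.95, eps1=0.3, eps2=0.1):
    """One iteration of a pair of logistic maps with non-symmetric
    (unequal) coupling strengths -- an illustrative chaotic system."""
    fx, fy = r1 * x * (1.0 - x), r2 * y * (1.0 - y)
    return (1.0 - eps1) * fx + eps1 * fy, (1.0 - eps2) * fy + eps2 * fx

x, y = 0.123, 0.456
for _ in range(1000):   # burn-in: let the orbit settle onto the attractor
    x, y = coupled_logistic_step(x, y)
print(x, y)
```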

Open Access Article A Bloom Filter for High Dimensional Vectors
Information 2018, 9(7), 159; https://doi.org/10.3390/info9070159
Received: 25 April 2018 / Revised: 8 June 2018 / Accepted: 20 June 2018 / Published: 2 July 2018
PDF Full-text (2295 KB) | HTML Full-text | XML Full-text
Abstract
Regardless of the type of data, traditional Bloom filters treat each element of a set as a string and, by iterating over every character of the string, discretize all data randomly and uniformly. However, as the data size and dimensionality increase, these variants become inefficient. To better discretize vectors with high numerical dimensions, this paper improves the string hashes to integer hashes. Based on the integer hashes and a counter array, we propose a new variant, the high-dimensional Bloom filter (HDBF), to extend the Bloom filter into high-dimensional spaces, which can represent and query numerical vectors of a big set with a low false positive probability. This paper theoretically analyzes the feasibility of the integer hashes for discretizing data and discusses the relationship among the parameters of the HDBF. The experiments illustrate that, in high-dimensional numerical spaces, the HDBF shows better randomness in distribution and entropy than the counting Bloom filter. Compared with parallel Bloom filters, for a fixed false positive probability, the HDBF incurs lower time-space overheads and is more suitable for dealing with numerical vectors of high dimensionality. Full article
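
The sketch below illustrates the general idea of hashing numerical vectors with integer hash functions into a counter array; the hash family, sizing, and per-vector aggregation are simplifying assumptions, not the HDBF's exact construction.

```python
import numpy as np

class VectorBloomFilter:
    """Toy counting Bloom filter for integer-valued vectors using k
    multiplicative integer hashes instead of character-wise string hashing."""

    def __init__(self, m=1 << 20, k=4, seed=1):
        self.m = m
        self.counts = np.zeros(m, dtype=np.uint32)
        rng = np.random.default_rng(seed)
        # k random odd multipliers define k integer hash functions
        self.mults = rng.integers(1, 1 << 31, size=k, dtype=np.uint64) | np.uint64(1)

    def _indices(self, vec):
        v = np.asarray(vec, dtype=np.uint64)
        for a in self.mults:
            # dot-product style mixing; uint64 arithmetic wraps modulo 2**64
            yield int((v * a).sum() % np.uint64(self.m))

    def add(self, vec):
        for i in self._indices(vec):
            self.counts[i] += 1

    def query(self, vec):
        return all(self.counts[i] > 0 for i in self._indices(vec))
```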

Open Access Article Research on the Weighted Dynamic Evolution Model for Space Information Networks Based on Local-World
Information 2018, 9(7), 158; https://doi.org/10.3390/info9070158
Received: 9 June 2018 / Revised: 23 June 2018 / Accepted: 28 June 2018 / Published: 29 June 2018
PDF Full-text (3253 KB) | HTML Full-text | XML Full-text
Abstract
As an important piece of national strategic infrastructure, the Space Information Network (SIN) is a powerful platform for future information support, and it plays an important role in many areas such as national defense and people's livelihood. In this paper, we review typical and mainstream topology evolution models from different periods and analyze the demand for studying the dynamic evolution model of SIN. Combining its concept and characteristics, we analyze the topology structure and local-world phenomenon of SIN and define the dynamic topology model. Based on a systematic discussion of dynamic evolution rules, we propose a weighted local-world dynamic evolution model of SIN, including the construction algorithm and implementation process. We carry out a quantitative analysis of four indicators: node degree, node strength, edge weight, and the correlation of strength and degree. Through a univariate control method, we analyze the impact of the parameters, the local-world size M and the extra traffic load α, on the network topology features. Simulation results show topology structure features similar to the theoretical analysis and real networks, and also verify the validity and feasibility of the proposed model. Finally, we summarize the advantages and disadvantages of the weighted local-world dynamic evolution model of SIN and outline future work. The research aim of this paper is to provide methods and techniques to support the construction and management of SIN. Full article
(This article belongs to the Section Information Systems)

Open Access Article Pythagorean Fuzzy Interaction Muirhead Means with Their Application to Multi-Attribute Group Decision-Making
Information 2018, 9(7), 157; https://doi.org/10.3390/info9070157
Received: 4 June 2018 / Revised: 23 June 2018 / Accepted: 23 June 2018 / Published: 27 June 2018
PDF Full-text (369 KB) | HTML Full-text | XML Full-text
Abstract
Due to the increased complexity of real decision-making problems, representing attribute values correctly and appropriately is always a challenge. The recently proposed Pythagorean fuzzy set (PFS) is a powerful and useful tool for handling fuzziness and vagueness. The feature of PFS that the square sum of membership and non-membership degrees should be less than or equal to one provides more freedom for decision makers to express their assessments and further results in less information loss. The aim of this paper is to develop some Pythagorean fuzzy aggregation operators to aggregate Pythagorean fuzzy numbers (PFNs). Additionally, we propose a novel approach to multi-attribute group decision-making (MAGDM) based on the proposed operators. Considering the Muirhead mean (MM) can capture the interrelationship among all arguments, and the interaction operational rules for PFNs can make calculation results more reasonable, to take full advantage of both, we extend MM to PFSs and propose a family of Pythagorean fuzzy interaction Muirhead mean operators. Some desirable properties and special cases of the proposed operators are also investigated. Further, we present a novel approach to MAGDM with Pythagorean fuzzy information. Finally, we provide a numerical instance to illustrate the validity of the proposed model. In addition, we perform a comparative analysis to show the superiorities of the proposed method. Full article
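
The defining constraint of a Pythagorean fuzzy number mentioned above is easy to state concretely; the check below, with an example pair that is valid as a PFN but not as an intuitionistic fuzzy number, is a minimal illustration.

```python
def is_pfn(mu, nu):
    """(mu, nu) is a valid Pythagorean fuzzy number iff both degrees lie in
    [0, 1] and their squares sum to at most one."""
    return 0.0 <= mu <= 1.0 and 0.0 <= nu <= 1.0 and mu**2 + nu**2 <= 1.0

# 0.8 + 0.5 > 1, so (0.8, 0.5) is not a valid intuitionistic fuzzy number,
# but 0.8**2 + 0.5**2 = 0.89 <= 1, so it is a valid PFN.
assert is_pfn(0.8, 0.5) and not is_pfn(0.9, 0.7)
```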
Open Access Article Upsampling for Improved Multidimensional Attribute Space Clustering of Multifield Data
Information 2018, 9(7), 156; https://doi.org/10.3390/info9070156
Received: 11 May 2018 / Revised: 15 June 2018 / Accepted: 20 June 2018 / Published: 27 June 2018
PDF Full-text (1468 KB) | HTML Full-text | XML Full-text
Abstract
Clustering algorithms in high-dimensional spaces require many data points to perform reliably and robustly. For multivariate volume data, it is possible to interpolate between the data points in the high-dimensional attribute space based on their spatial relationship in the volumetric domain (or physical space). Thus, a sufficiently high number of data points can be generated, overcoming the curse of dimensionality for this particular type of multidimensional data. We apply this idea to a histogram-based clustering algorithm. We created a uniform partition of the attribute space into multidimensional bins and computed a histogram indicating the number of data samples belonging to each bin. Without interpolation, the analysis was highly sensitive to the histogram cell sizes, yielding inaccurate clustering for improper choices: large histogram cells result in no cluster separation, while clusters fall apart for small cells. Using an interpolation in physical space, we could refine the data by generating additional samples. The depth of the refinement scheme was chosen according to the local data point distribution in attribute space and the histogram's bin size. In the case of field discontinuities representing sharp material boundaries in the volume data, the interpolation can be adapted to locally use a nearest-neighbor interpolation scheme that avoids averaging values across the sharp boundary. Consequently, we could generate a density computation where clusters stay connected even when using very small bin sizes. We exploited this result to create a robust hierarchical cluster tree, applied our technique to several datasets, and compared the cluster trees before and after interpolation. Full article
(This article belongs to the Special Issue Selected Papers from IVAPP 2018)
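
A minimal sketch of the two ingredients described above, midpoint refinement in physical space and multidimensional binning in attribute space, is given below; `np.histogramdd` stands in for the paper's uniform partition, and the refinement shown is a fixed one-level scheme rather than the adaptive one.

```python
import numpy as np

def refine_1d(field):
    """One refinement level along a 1-D physical axis: insert midpoint
    samples by linear interpolation between neighbouring grid points."""
    mids = 0.5 * (field[:-1] + field[1:])
    out = np.empty(field.size + mids.size, dtype=field.dtype)
    out[0::2], out[1::2] = field, mids
    return out

def attribute_histogram(samples, bins_per_dim=8):
    """Uniform multidimensional histogram over (n, d) attribute vectors;
    each bin counts the data samples falling into it."""
    return np.histogramdd(samples, bins=bins_per_dim)

# Example: two attribute fields on a 1-D grid, refined then binned together.
f1, f2 = np.linspace(0, 1, 64), np.sin(np.linspace(0, 6, 64))
samples = np.column_stack([refine_1d(f1), refine_1d(f2)])
hist, edges = attribute_histogram(samples, bins_per_dim=8)
```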

Open Access Article Efficient Low-Resource Compression of HIFU Data
Information 2018, 9(7), 155; https://doi.org/10.3390/info9070155
Received: 10 May 2018 / Revised: 11 June 2018 / Accepted: 24 June 2018 / Published: 26 June 2018
PDF Full-text (456 KB) | HTML Full-text | XML Full-text
Abstract
Large-scale numerical simulations of high-intensity focused ultrasound (HIFU), important for model-based treatment planning, generate large amounts of data. Typically, it is necessary to save hundreds of gigabytes during a simulation. We propose a novel algorithm for time-varying simulation data compression specialised for HIFU. Our approach is particularly focused on on-the-fly parallel data compression during simulations. The algorithm is able to compress 3D pressure time series of linear and non-linear simulations with very acceptable compression ratios and errors (over 80% of the space can be saved with an acceptable error). The proposed compression enables significant reduction of resources, such as storage space, network bandwidth, CPU time, and so forth, enabling better treatment planning using fast volume data visualisations. The paper describes the proposed method, its experimental evaluation, and comparisons to the state of the art. Full article
(This article belongs to the Special Issue Information-Centered Healthcare)

Open Access Article An Evolutionary Algorithm for an Optimization Model of Edge Bundling
Information 2018, 9(7), 154; https://doi.org/10.3390/info9070154
Received: 2 May 2018 / Revised: 21 June 2018 / Accepted: 23 June 2018 / Published: 26 June 2018
PDF Full-text (15815 KB) | HTML Full-text | XML Full-text
Abstract
This paper discusses three edge bundling optimization problems whose main goal is to minimize the total number of bundles of a graph drawing, in conjunction with other aspects. A novel evolutionary algorithm for edge bundling for these problems is described. The algorithm was successfully tested by solving the related problems on real-world instances in reasonable computational time. The development and analysis of optimization models have received little attention in the area of edge bundling. However, the reported experimental results demonstrate the effectiveness and applicability of the proposed evolutionary algorithm in helping resolve edge bundling problems by formally defining them as optimization models. Full article
(This article belongs to the Special Issue Selected Papers from IVAPP 2018)

Open Access Article More Compact Orthogonal Drawings by Allowing Additional Bends
Information 2018, 9(7), 153; https://doi.org/10.3390/info9070153
Received: 30 April 2018 / Revised: 19 June 2018 / Accepted: 21 June 2018 / Published: 26 June 2018
PDF Full-text (2334 KB) | HTML Full-text | XML Full-text
Abstract
Compacting orthogonal drawings is a challenging task. Usually, algorithms try to compute drawings with small area or total edge length while preserving the underlying orthogonal shape. We suggest a moderate relaxation of the orthogonal compaction problem, namely the one-dimensional monotone flexible edge compaction problem with fixed vertex star geometry. We further show that this problem can be solved in polynomial time using a network flow model. An experimental evaluation shows that allowing additional bends can reduce the total edge length and the drawing area. Full article
(This article belongs to the Special Issue Selected Papers from IVAPP 2018)

Open Access Feature Paper Article Engineering Cheerful Robots: An Ethical Consideration
Information 2018, 9(7), 152; https://doi.org/10.3390/info9070152
Received: 16 May 2018 / Revised: 10 June 2018 / Accepted: 22 June 2018 / Published: 24 June 2018
PDF Full-text (220 KB) | HTML Full-text | XML Full-text
Abstract
Socially interactive robots in a variety of forms and functions are quickly becoming part of everyday life and bring with them a host of applied ethical issues. This paper concerns meta-ethical implications at the interface among robotics, ethics, psychology, and the social sciences. While guidelines for the ethical design and use of robots are necessary and urgent, meeting this exigency opens up the issue of whose values and vision of the ideal society inform public policies. The paper is organized as a sequence of questions: Can robots be agents of cultural transmission? Is a cultural shift an issue for roboethics? Should roboethics be an instrument of (political) social engineering? How could biases of the technological imagination be avoided? Does technological determinism compromise the possibility of moral action? The answers to these questions are not straightforwardly affirmative or negative, but their contemplation leads to heeding C. Wright Mills' metaphor of the cheerful robot. Full article
(This article belongs to the Special Issue ROBOETHICS)
Open Access Article A Community Network Ontology for Participatory Collaboration Mapping: Towards Collective Impact
Information 2018, 9(7), 151; https://doi.org/10.3390/info9070151
Received: 19 March 2018 / Revised: 11 June 2018 / Accepted: 16 June 2018 / Published: 22 June 2018
PDF Full-text (4644 KB) | HTML Full-text | XML Full-text
Abstract
Addressing societal wicked problems requires collaboration across many different community networks. In order for community networks to scale up their collaboration and increase their collective impact, they require a process of inter-communal sensemaking. One way to catalyze that process is by participatory collaboration mapping. In earlier work, we presented the CommunitySensor methodology for participatory mapping and sensemaking within communities. In this article, we extend this approach by introducing a community network ontology that can be used to define a customized mapping language to make sense across communities. We explore what ontologies are and how our community network ontology is developed using a participatory ontology evolution approach. We present the community network conceptual model at the heart of the ontology. We show how it classifies element and connection types derived from an analysis of 17 participatory mapping cases, and how this classification can be used in characterizing and tailoring the mapping language required by a specific community network. To illustrate the application of the community network ontology in practice, we apply it to a case of participatory collaboration mapping for global and national agricultural field building. We end the article with a discussion and conclusions. Full article

Open Access Article Rack Aware Data Placement for Network Consumption in Erasure-Coded Clustered Storage Systems
Information 2018, 9(7), 150; https://doi.org/10.3390/info9070150
Received: 30 April 2018 / Revised: 15 June 2018 / Accepted: 16 June 2018 / Published: 21 June 2018
PDF Full-text (2883 KB) | HTML Full-text | XML Full-text
Abstract
The amount of encoded data replication in an erasure-coded clustered storage system has a great impact on bandwidth consumption and network latency, mostly during data reconstruction. To address the causes of excess data transmission between racks, a rack-aware data block placement method is proposed. In order to ensure rack-level fault tolerance and reduce the frequency and amount of cross-rack data transmission during data reconstruction, the method deploys partial data block concentration to store the data blocks of a file in fewer racks. Theoretical analysis and simulation results show that our proposed strategy greatly reduces the frequency and data volume of cross-rack transmission during data reconstruction. At the same time, it has better performance than the typical random distribution method in terms of network usage and data reconstruction efficiency. Full article
(This article belongs to the Section Information and Communications Technology)

Open Access Article Effective Intrusion Detection System Using XGBoost
Information 2018, 9(7), 149; https://doi.org/10.3390/info9070149
Received: 21 May 2018 / Revised: 15 June 2018 / Accepted: 19 June 2018 / Published: 21 June 2018
PDF Full-text (2233 KB) | HTML Full-text | XML Full-text
Abstract
As the world is on the verge of venturing into fifth-generation communication technology and embracing concepts such as virtualization and cloudification, the most crucial aspect remains "security", as more and more data get attached to the internet. This paper presents a model designed to measure various parameters of data in a network, such as accuracy, precision, and the confusion matrix. XGBoost is employed on the NSL-KDD (network socket layer-knowledge discovery in databases) dataset to get the desired results. The whole motive is to learn about the integrity of data and to achieve higher accuracy in the prediction of data. By doing so, the amount of mischievous data floating in a network can be minimized, making the network a secure place to share information. The more secure a network is, the fewer the situations where data is hacked or modified. By changing various parameters of the model, future research can get the most out of the data entering and leaving a network. The most important player in the network is data, and getting to know it closely and precisely is half the work done. Studying the data in a network and analyzing its patterns and volume leads to a solid Intrusion Detection System (IDS) that keeps the network healthy and a safe place to share confidential information. Full article
(This article belongs to the Special Issue Ambient Intelligence Environments)
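
In outline, the experiment described above amounts to fitting a gradient-boosted classifier and reading off the usual metrics; the sketch below assumes preprocessed numeric NSL-KDD feature matrices and binary labels, and the variable names and hyperparameters are illustrative.

```python
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

def train_ids(X_train, y_train, X_test, y_test):
    """Fit an XGBoost intrusion detector and report accuracy and the
    confusion matrix (sketch; categorical NSL-KDD fields such as protocol
    type are assumed to be already encoded as numbers)."""
    model = XGBClassifier(n_estimators=200, max_depth=6, learning_rate=0.1)
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print("accuracy:", accuracy_score(y_test, pred))
    print(confusion_matrix(y_test, pred))
    return model
```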
