Information, Volume 9, Issue 7 (July 2018) – 35 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
20 pages, 300 KiB  
Review
AI to Bypass Creativity. Will Robots Replace Journalists? (The Answer Is “Yes”)
by Andrey Miroshnichenko
Information 2018, 9(7), 183; https://doi.org/10.3390/info9070183 - 23 Jul 2018
Cited by 25 | Viewed by 18956
Abstract
This paper explores a practical application of a weak, or narrow, artificial intelligence (AI) in the news media. Journalism is a creative human practice. This, according to widespread opinion, makes it harder for robots to replicate. However, writing algorithms are already widely used in the news media to produce articles and thereby replace human journalists. In 2016, Wordsmith, one of the two most powerful news-writing algorithms, wrote and published 1.5 billion news stories. This number is comparable to, and may even exceed, the number of stories written and published by human journalists. Robo-journalists’ skills and competencies are constantly growing. Research has shown that readers sometimes cannot differentiate between news written by robots and news written by humans; more importantly, readers often make little of such distinctions. Considering this, these forms of AI can be seen as having already passed a kind of Turing test as applied to journalism. The paper provides a review of the current state of robo-journalism; analyses popular arguments about “robots’ incapability” to prevail over humans in creative practices; and offers a foresight of the possible further development of robo-journalism and its collision with organic forms of journalism. Full article
(This article belongs to the Special Issue AI AND THE SINGULARITY: A FALLACY OR A GREAT OPPORTUNITY?)
24 pages, 515 KiB  
Article
A Review and Classification of Assisted Living Systems
by Caroline A. Byrne, Rem Collier and Gregory M. P. O’Hare
Information 2018, 9(7), 182; https://doi.org/10.3390/info9070182 - 21 Jul 2018
Cited by 23 | Viewed by 6302
Abstract
Europe’s social agenda for the “active elderly” is based upon a series of programs that provide a flexible infrastructure for their lives so that they are motivated, engaged in lifelong learning, and contributing to society. Economically speaking, Europe must engage in active aging research in order to avoid unsustainable health costs, and ambient assisted living (AAL) systems provide a platform for the elderly to remain living independently. This paper reviews research conducted within the area of AAL, and offers a taxonomy within which such systems may be classified. This classification distinguishes itself from others in that it categorises AAL systems in a top-down fashion, with the most important categories placed immediately to the left. Each category of the taxonomy is then explored further, with AAL systems as the focus. Entire AAL systems still cannot be fully evaluated, but their constituent technical parts can be assessed. The activities of daily living (ADLs) component was given further priority due to its potential for system evaluation, based on its ability to recognise ADLs with reasonable accuracy. Full article
(This article belongs to the Special Issue Ambient Intelligence Environments)
18 pages, 5727 KiB  
Article
An Efficient Task Autonomous Planning Method for Small Satellites
by Jun Long, Cong Li, Lei Zhu, Shilong Chen and Junfeng Liu
Information 2018, 9(7), 181; https://doi.org/10.3390/info9070181 - 20 Jul 2018
Cited by 1 | Viewed by 3227
Abstract
Existing on-board planning systems do not apply to small satellites with limited on-board computer capacity and resources. This study investigates the problem of autonomous task planning for small satellites. Based on an analysis of the problem and its constraints, a model of autonomous task planning was implemented. According to the long-cycle task planning requirements, a framework of rolling planning was proposed, including a rolling window and planning unit in the solution, and we proposed an improved genetic algorithm (IGA) for rolling planning. The algorithm categorized each individual based on its compliance with the time partial-order constraint and the resource constraint, and designed appropriate crossover and mutation operators for each type of individual. The experimental results showed that the framework and algorithm not only respond quickly to observation tasks, but also produce effective planning schemes that ensure the successful completion of observation tasks. Full article
24 pages, 4208 KiB  
Article
Apple Image Recognition Multi-Objective Method Based on the Adaptive Harmony Search Algorithm with Simulation and Creation
by Liqun Liu and Jiuyuan Huo
Information 2018, 9(7), 180; https://doi.org/10.3390/info9070180 - 20 Jul 2018
Cited by 5 | Viewed by 3157
Abstract
To address the poor recognition of apple images captured in natural scenes, and the problems that the OTSU algorithm uses a single threshold, lacks adaptability, is easily affected by noise interference, and tends to over-segment, an apple image recognition multi-objective method based on the adaptive harmony search algorithm with simulation and creation is proposed in this paper. The new adaptive harmony search algorithm with simulation and creation expands the search space to maintain the diversity of the solutions and accelerates the convergence of the algorithm. In the search process, the harmony tone simulation operator is used to make each harmony tone evolve towards the optimal harmony individual to ensure the global search ability of the algorithm. When the evolution shows no improvement, the harmony tone creation operator is used to make each harmony tone move away from the current optimal harmony individual, extending the search space to maintain the diversity of solutions. An adaptive factor of the harmony tone is used to restrain the random searching of the two operators and accelerate the convergence of the algorithm. The multi-objective recognition method transforms the recognition of apple images collected in natural scenes into a multi-objective optimization problem, and uses the new adaptive harmony search algorithm with simulation and creation as the image threshold search strategy. The maximum between-class variance and maximum entropy are chosen as the objective functions of the multi-objective optimization problem. Compared with the HS, HIS, GHS, and SGHS algorithms, the experimental results showed that the improved algorithm has a higher convergence speed and accuracy, and maintains optimal performance in high-dimensional, large-scale harmony memory. The proposed multi-objective recognition method obtains a set of non-dominated threshold solutions, which offers more flexibility in threshold selection than the OTSU algorithm. The selected thresholds have better adaptive characteristics and yield good image segmentation results. Full article
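As a sketch of the two segmentation criteria named above (maximum between-class variance and maximum entropy), the following Python snippet, a minimal illustration rather than the paper’s implementation, evaluates both objectives for a single candidate threshold on a grey-level histogram; the harmony search that would explore the threshold space is not shown, and the single-threshold form and the synthetic histogram are illustrative assumptions.

```python
import numpy as np

def between_class_variance(hist, t):
    """Otsu criterion: between-class variance of the two classes split at threshold t."""
    p = hist / hist.sum()
    levels = np.arange(len(p))
    w0, w1 = p[:t].sum(), p[t:].sum()
    if w0 == 0 or w1 == 0:
        return 0.0
    mu0 = (levels[:t] * p[:t]).sum() / w0
    mu1 = (levels[t:] * p[t:]).sum() / w1
    return w0 * w1 * (mu0 - mu1) ** 2

def kapur_entropy(hist, t):
    """Maximum-entropy (Kapur) criterion: sum of the entropies of the two classes."""
    p = hist / hist.sum()
    def class_entropy(seg):
        s = seg.sum()
        if s == 0:
            return 0.0
        q = seg[seg > 0] / s
        return float(-(q * np.log(q)).sum())
    return class_entropy(p[:t]) + class_entropy(p[t:])

# Toy usage on a synthetic bimodal histogram; a multi-objective optimiser would search
# for thresholds that are non-dominated with respect to both criteria.
hist = np.histogram(np.concatenate([np.random.normal(60, 10, 5000),
                                    np.random.normal(170, 15, 5000)]),
                    bins=256, range=(0, 256))[0].astype(float)
scores = [(t, between_class_variance(hist, t), kapur_entropy(hist, t)) for t in range(1, 256)]
print(max(scores, key=lambda s: s[1])[0], max(scores, key=lambda s: s[2])[0])
```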
10 pages, 2966 KiB  
Article
Knowledge Acquisition from Critical Annotations
by Tamás Mészáros and Margit Kiss
Information 2018, 9(7), 179; https://doi.org/10.3390/info9070179 - 20 Jul 2018
Cited by 1 | Viewed by 3266
Abstract
Critical annotations are important knowledge sources when researching an author’s oeuvre. They describe literary, historical, cultural, linguistic and other kinds of information written in natural languages. Acquiring knowledge from these notes is a complex task due to the limited natural language understanding capability of computerized tools. The aim of the research was to extract knowledge from existing annotations, and to develop new authoring methods that facilitate knowledge acquisition. After a structural and semantic analysis of critical annotations, the authors developed a software tool that transforms existing annotations into a structured form that encodes referral and factual knowledge. The authors also propose a new method for authoring annotations based on controlled natural languages. This method ensures that annotations are semantically processable by computer programs and that the authoring process remains simple for non-technical users. Full article
(This article belongs to the Special Issue AI for Digital Humanities)
17 pages, 1297 KiB  
Article
Visual Saliency Based Just Noticeable Difference Estimation in DWT Domain
by Chunxing Wang, Xiaoyue Han, Wenbo Wan, Jing Li, Jiande Sun and Meiling Xu
Information 2018, 9(7), 178; https://doi.org/10.3390/info9070178 - 20 Jul 2018
Cited by 5 | Viewed by 3595
Abstract
It has been known that human visual systems (HVSs) can be applied to describe the underlying masking properties for image processing. In general, the HVS can only perceive changes in a scene when they are greater than the just noticeable distortion (JND) threshold. However, the cognitive resources of human visual attention mechanisms are limited and cannot concentrate on all stimuli; to be specific, only the more important stimuli trigger a response from these mechanisms. When it comes to visual attention mechanisms, we therefore need to introduce visual saliency to model human perception more accurately. In this paper, we present a new wavelet-based JND estimation method that takes into account the interrelationship between visual saliency and the JND threshold. In the experimental part, we verify the method from both subjective and objective aspects. The experimental results show that extracting the saliency map of the image in the discrete wavelet transform (DWT) domain and then modulating its JND threshold is better than the non-modulated JND effect. Full article
13 pages, 1667 KiB  
Article
A Hybrid Model for Monthly Precipitation Time Series Forecasting Based on Variational Mode Decomposition with Extreme Learning Machine
by Guohui Li, Xiao Ma and Hong Yang
Information 2018, 9(7), 177; https://doi.org/10.3390/info9070177 - 20 Jul 2018
Cited by 25 | Viewed by 4712
Abstract
Successful precipitation forecasting is of great significance to flood control, drought relief, and water resources planning and management. To handle the nonlinearity of precipitation time series, a hybrid prediction model based on variational mode decomposition (VMD) coupled with an extreme learning machine (ELM) is proposed to reduce the difficulty of modeling monthly precipitation and to improve the prediction accuracy. Monthly precipitation data from the past 60 years for Yan’an City and Huashan Mountain, Shaanxi Province, are used as cases to test this new hybrid model. First, the nonstationary monthly precipitation time series are decomposed into several relatively stable intrinsic mode functions (IMFs) by using VMD. Then, an ELM prediction model is established for each IMF. Next, the predicted values of these components are accumulated to obtain the final prediction results. Finally, three predictive indicators are adopted to measure the prediction accuracy of the proposed hybrid model, the back propagation (BP) neural network, the Elman neural network (Elman), ELM, and EMD-ELM models: mean absolute error (MAE), root mean squared error (RMSE), and mean absolute percentage error (MAPE). The experimental simulation results show that the proposed hybrid model has higher prediction accuracy and can be used to predict the monthly precipitation time series. Full article
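The decompose-predict-aggregate pipeline described above can be sketched as follows, assuming the intrinsic mode functions have already been obtained from a VMD step (not shown); the minimal ELM and the lag/test-window settings are illustrative assumptions rather than the paper’s configuration.

```python
import numpy as np

def elm_fit_predict(x_train, y_train, x_test, n_hidden=32, seed=0):
    """Minimal extreme learning machine: random hidden layer, least-squares output weights."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((x_train.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    beta = np.linalg.pinv(np.tanh(x_train @ w + b)) @ y_train
    return np.tanh(x_test @ w + b) @ beta

def lagged(series, lag):
    """Build (samples, lag) inputs and next-value targets from a 1-D series."""
    x = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    return x, np.asarray(series[lag:])

def hybrid_forecast(imfs, lag=12, n_test=24):
    """Fit one ELM per IMF and sum the one-step-ahead component predictions."""
    pred = np.zeros(n_test)
    for imf in imfs:
        x, y = lagged(imf, lag)
        pred += elm_fit_predict(x[:-n_test], y[:-n_test], x[-n_test:])
    return pred

# Toy usage with synthetic components standing in for IMFs from a VMD decomposition.
t = np.arange(720, dtype=float)
imfs = [np.sin(2 * np.pi * t / 12.0), 0.002 * t]
print(hybrid_forecast(imfs)[:5])
```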
20 pages, 8187 KiB  
Article
A Study of a Health Resources Management Platform Integrating Neural Networks and DSS Telemedicine for Homecare Assistance
by Alessandro Massaro, Vincenzo Maritati, Nicola Savino, Angelo Galiano, Daniele Convertini, Emanuele De Fonte and Maurizio Di Muro
Information 2018, 9(7), 176; https://doi.org/10.3390/info9070176 - 19 Jul 2018
Cited by 30 | Viewed by 7670
Abstract
This paper presents a case study of an e-health telemedicine system oriented to homecare assistance and suitable for de-hospitalization processes. The proposed platform is able to transfer patient analyses efficiently from home to the control room of a clinic, thus potentially reducing costs and providing high-quality assistance services. The goal is to propose an innovative resources management platform (RMP) integrating an innovative homecare decision support system (DSS) based on a multilayer perceptron (MLP) artificial neural network (ANN). The study targets predictive diagnostics by proposing an RMP that integrates a KNIME (Konstanz Information Miner) MLP-ANN workflow tested on systolic blood pressure values. The workflow elaborates real data transmitted via the cloud by medical smart sensors and provides a prediction of the patient status. The innovative RMP-DSS is then structured to enable three main control levels. The first one is a real-time alerting condition triggered when real-time values exceed a threshold. The second one concerns preventative action based on the analysis of historical patient data, and the third one involves alerting due to patient status prediction. The proposed study combines the management of processes with DSS outputs, thus optimizing the homecare assistance activities. Full article
(This article belongs to the Special Issue eHealth and Artificial Intelligence)
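A minimal sketch of the three control levels described in the abstract (real-time threshold alerting, preventative action from historical data, and alerting from the predicted status); the threshold value and the trend rule are illustrative assumptions, not clinical logic from the paper.

```python
def control_level(current, history, predicted, threshold=140.0):
    """Return which of the three RMP-DSS-style control levels fires for a systolic reading."""
    if current > threshold:                        # level 1: real-time alerting
        return "real-time alert"
    recent = history[-7:]
    if len(recent) == 7 and sum(recent) / 7 > 0.95 * threshold:
        return "preventative action"               # level 2: trend in historical data
    if predicted > threshold:                      # level 3: predicted patient status
        return "predictive alert"
    return "normal"

print(control_level(120.0, [150, 128, 130, 133, 135, 137, 139, 141], 145.0))
```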
16 pages, 3407 KiB  
Article
Virtual Interactive Environment for Low-Cost Treatment of Mechanical Strabismus and Amblyopia
by Aratã Andrade Saraiva, Matheus Pereira Barros, Alexandre Tolstenko Nogueira, N. M. Fonseca Ferreira and Antonio Valente
Information 2018, 9(7), 175; https://doi.org/10.3390/info9070175 - 19 Jul 2018
Cited by 12 | Viewed by 6395
Abstract
This study presents a technique that uses an interactive virtual environment for the rehabilitation treatment of patients with mechanical strabismus and/or amblyopia who have lost eye movement. The relevant part of this treatment is the act of forcing the two eyes to cooperate with each other, increasing the level of adaptation of the brain and allowing the weak eye to see again. Accordingly, the game enables both eyes to work together, providing the patient with better visual comfort and quality of life. In addition, the virtual environment is attractive and has the ability to overcome specific challenges with real-time feedback, coinciding with ideal approaches for use in ocular rehabilitation. The entire game was developed with free software, and the 3D environment is delivered through low-cost virtual reality glasses such as Google Cardboard, which uses a smartphone as the display. The method presented was tested in 41 male and female patients, aged 8 to 39 years, and was successful in 40 of them. The method proved to be feasible and accessible as a tool for the treatment of amblyopia and strabismus. The project was registered in the Brazil platform and approved by the ethics committee of the State University of Piaui (UESPI), with the CAAE identification code 37802114.8.0000.5209. Full article
(This article belongs to the Special Issue Serious Games and Applications for Health (SeGAH 2018))
20 pages, 1263 KiB  
Article
LOD for Data Warehouses: Managing the Ecosystem Co-Evolution
by Selma Khouri and Ladjel Bellatreche
Information 2018, 9(7), 174; https://doi.org/10.3390/info9070174 - 17 Jul 2018
Cited by 3 | Viewed by 4063
Abstract
For more than 30 years, data warehouses (DWs) have attracted particular interest both in practice and in research. This success is explained by their ability to adapt to their evolving environment. One of the latest challenges for DWs is their ability to open their frontiers to external data sources in addition to internal sources. The development of linked open data (LOD) as external sources is an excellent opportunity to create added value and enrich the analytical capabilities of DWs. However, the incorporation of LOD in the DW must be accompanied by careful management. In this paper, we are interested in managing the evolution of DW systems integrating internal and external LOD datasets. The particularity of LOD is that they contribute to evolving the DW at several levels: (i) source level, (ii) DW schema level, and (iii) DW design-cycle constructs. In this context, we have to ensure this co-evolution, as conventional evolution approaches are adapted neither to this new kind of source nor to the semantic constructs underlying LOD sources. One way of tackling this co-evolution issue is to ensure the traceability of DW constructs for the whole design cycle. Our approach is tested using the LUBM (Lehigh University Benchmark), different LOD datasets (DBpedia, YAGO, etc.), and the Oracle 12c database management system (DBMS) for the DW deployment. Full article
(This article belongs to the Special Issue Semantics for Big Data Integration)
15 pages, 2077 KiB  
Article
Adaptive Multiswarm Comprehensive Learning Particle Swarm Optimization
by Xiang Yu and Claudio Estevez
Information 2018, 9(7), 173; https://doi.org/10.3390/info9070173 - 15 Jul 2018
Cited by 8 | Viewed by 4257
Abstract
Multiswarm comprehensive learning particle swarm optimization (MSCLPSO) is a multiobjective metaheuristic recently proposed by the authors. MSCLPSO uses multiple swarms of particles and externally stores elitists that are nondominated solutions found so far. MSCLPSO can approximate the true Pareto front in one single run; however, it requires a large number of generations to converge, because each swarm only optimizes the associated objective and does not learn from any search experience outside the swarm. In this paper, we propose an adaptive particle velocity update strategy for MSCLPSO to improve the search efficiency. Based on whether the elitists are indifferent or complex on each dimension, each particle adaptively determines whether to just learn from some particle in the same swarm, or additionally from the difference of some pair of elitists for the velocity update on that dimension, trying to achieve a tradeoff between optimizing the associated objective and exploring diverse regions of the Pareto set. Experimental results on various two-objective and three-objective benchmark optimization problems with different dimensional complexity characteristics demonstrate that the adaptive particle velocity update strategy improves the search performance of MSCLPSO significantly and is able to help MSCLPSO locate the true Pareto front more quickly and obtain better distributed nondominated solutions over the entire Pareto front. Full article
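The per-dimension choice described above might be sketched as follows; the inertia and acceleration coefficients, and the way an “indifferent” dimension is flagged, are assumptions for illustration and do not reproduce the paper’s exact update rule.

```python
import numpy as np

def adaptive_velocity(v, x, exemplar, elitists, indifferent, w=0.5, c=1.5, rng=None):
    """Per-dimension update: always learn from an in-swarm exemplar; on dimensions where
    the elitists are not 'indifferent', additionally learn from the difference of a
    random pair of elitists."""
    rng = rng or np.random.default_rng()
    d = len(x)
    v_new = w * v + c * rng.random(d) * (exemplar - x)
    for j in np.flatnonzero(~indifferent):
        a = elitists[rng.integers(len(elitists))]
        b = elitists[rng.integers(len(elitists))]
        v_new[j] += rng.random() * (a[j] - b[j])
    return v_new

rng = np.random.default_rng(0)
elitists = rng.random((5, 3))                       # nondominated solutions stored externally
indifferent = np.array([True, False, False])        # e.g., flagged from the spread of elitist values
print(adaptive_velocity(np.zeros(3), rng.random(3), rng.random(3), elitists, indifferent, rng=rng))
```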
12 pages, 281 KiB  
Article
Near-Extremal Type I Self-Dual Codes with Minimal Shadow over GF(2) and GF(4)
by Sunghyu Han
Information 2018, 9(7), 172; https://doi.org/10.3390/info9070172 - 13 Jul 2018
Cited by 1 | Viewed by 2407
Abstract
Binary self-dual codes and additive self-dual codes over GF(4) contain common points. Both have Type I codes and Type II codes, as well as shadow codes. In this paper, we provide a comprehensive description of extremal and near-extremal Type I codes over GF(2) and GF(4) with minimal shadow. In particular, we prove that there is no near-extremal Type I [24m, 12m, 2m+2] binary self-dual code with minimal shadow if m ≥ 323, and we prove that there is no near-extremal Type I (6m+1, 2^(6m+1), 2m+1) additive self-dual code over GF(4) with minimal shadow if m ≥ 22. Full article
(This article belongs to the Section Information Theory and Methodology)
2 pages, 143 KiB  
Editorial
Special Issue on Selected Papers from IVAPP 2018
by Alexandru Telea and Andreas Kerren
Information 2018, 9(7), 171; https://doi.org/10.3390/info9070171 - 13 Jul 2018
Viewed by 2400
Abstract
Recent developments at the crossroads of data science, data mining, machine learning, and graphics and imaging sciences have further established information visualization and visual analytics as central disciplines that deliver methods, techniques, and tools for making sense of and extracting actionable insights and results from large amounts of complex, multidimensional, hybrid, and time-dependent data. [...] Full article
(This article belongs to the Special Issue Selected Papers from IVAPP 2018)
19 pages, 2479 KiB  
Article
A Deploying Method for Predicting the Size and Optimizing the Location of an Electric Vehicle Charging Stations
by Jian Ma and Liyan Zhang
Information 2018, 9(7), 170; https://doi.org/10.3390/info9070170 - 13 Jul 2018
Cited by 13 | Viewed by 4210
Abstract
With the depletion of oil resources and the aggravation of environmental pollution, electric vehicles have a promising future and will become more popular as the main force of new energy consumption; they have attracted increasing attention from various countries. The sizing and location problem for charging stations has been a hot topic of global research, and these issues are important for government planning for electric vehicles. In this paper, we first built a Bass model to predict the total number of electric vehicles and to calculate the size of charging stations in the coming years. Moreover, we also developed a queuing model to optimize the location of charging stations and solved this issue by using the exhaustion method, which takes minimum cost as the objective function. After that, the model was tested using data from a city in China. The results show that the model in this paper is good at predicting the number of electric vehicles in the coming years and calculating the size of charging stations. At the same time, it can also optimize the distribution of charging stations and make it more balanced. Thus, this model is beneficial for the government in planning the development of electric vehicles in the future. Full article
(This article belongs to the Section Information Applications)
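The fleet-size prediction rests on the Bass diffusion model; a minimal discrete-time version is sketched below with illustrative parameters (the paper fits the coefficients and market potential to city data, and derives station sizes from the predicted fleet via a queuing model, neither of which is reproduced here).

```python
import numpy as np

def bass_new_adopters(p, q, m, periods):
    """Discrete Bass diffusion: new adopters per period from innovation coefficient p,
    imitation coefficient q, and market potential m."""
    cumulative, new_per_period = 0.0, []
    for _ in range(periods):
        new = (p + q * cumulative / m) * (m - cumulative)
        cumulative += new
        new_per_period.append(new)
    return np.array(new_per_period)

# Illustrative parameters only; charging-station sizes would then be derived from the
# predicted fleet using the queuing model described in the abstract.
yearly = bass_new_adopters(p=0.03, q=0.38, m=200_000, periods=10)
print(yearly.round(0), int(yearly.sum()))
```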
18 pages, 8560 KiB  
Article
Aerial-Image Denoising Based on Convolutional Neural Network with Multi-Scale Residual Learning Approach
by Chong Chen and Zengbo Xu
Information 2018, 9(7), 169; https://doi.org/10.3390/info9070169 - 09 Jul 2018
Cited by 16 | Viewed by 4745
Abstract
Aerial images are subject to various types of noise, which restricts the recognition and analysis of images, target monitoring, and search services. At present, deep learning is successful in image recognition. However, traditional convolutional neural networks (CNNs) extract the main features of an image to predict directly and are limited by the required training sample size (i.e., they do not perform well on small datasets). In this paper, using a small sample size, we propose an aerial-image denoising recognition model based on CNNs with a multi-scale residual learning approach. The proposed model has the following three advantages: (1) Instead of directly learning latent clean images, the proposed model learns the noise from noisy images and then subtracts the learned residual from the noisy images to obtain reconstructed (denoised) images; (2) The developed image denoising recognition model works well with small training datasets; (3) We use multi-scale residual learning as a learning approach, and dropout is introduced into the model architecture to force the network to generalize well. Our experimental results on aerial-image denoising recognition reveal that the proposed approach is superior to other state-of-the-art methods. Full article
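A minimal Keras sketch of the residual-learning idea in advantage (1): the network predicts the noise and the reconstruction is the noisy input minus that estimate, with parallel convolutions of different kernel sizes standing in for the multi-scale branches and dropout included as in advantage (3); the layer counts and sizes are illustrative assumptions, not the paper’s architecture.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_denoiser(h=64, w=64, c=1):
    noisy = keras.Input((h, w, c))
    # multi-scale feature extraction: parallel convolutions with different receptive fields
    branches = [layers.Conv2D(16, k, padding="same", activation="relu")(noisy)
                for k in (3, 5, 7)]
    x = layers.Concatenate()(branches)
    x = layers.Dropout(0.2)(x)                         # dropout to help generalisation
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    noise = layers.Conv2D(c, 3, padding="same")(x)     # learned residual (noise estimate)
    denoised = layers.Subtract()([noisy, noise])       # reconstructed (denoised) image
    return keras.Model(noisy, denoised)

model = build_denoiser()
model.compile(optimizer="adam", loss="mse")            # trained on (noisy, clean) pairs
model.summary()
```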
43 pages, 549 KiB  
Article
Fundamentals of Natural Representation
by Rajiv K. Singh
Information 2018, 9(7), 168; https://doi.org/10.3390/info9070168 - 09 Jul 2018
Cited by 2 | Viewed by 4140
Abstract
Our understanding of the natural universe is far from being comprehensive. The following questions bring to the fore some of the fundamental issues. Is there a reality of information associated with the states of matter based entirely on natural causation? If so, then what constitutes the mechanism of information exchange (processing) at each interaction of physical entities? Let the association of information with a state of matter be referred to as the representation of semantic value expressed by the information. We ask, can the semantic value be quantified, described, and operated upon with symbols, as mathematical symbols describe the material world? In this work, these questions are dealt with substantively to establish the fundamental principles of the mechanisms of representation and propagation of information with every physical interaction. A quantitative method of information processing is derived from the first principles to show how high level structured and abstract semantics may arise via physical interactions alone, without a need for an intelligent interpreter. It is further shown that the natural representation constitutes a basis for the description, and therefore, for comprehension, of all natural phenomena, creating a more holistic view of nature. A brief discussion underscores the natural information processing as the foundation for the genesis of language and mathematics. In addition to the derivation of theoretical basis from established observations, the method of information processing is further demonstrated by a computer simulation. Full article
30 pages, 7786 KiB  
Article
An Improved Genetic Algorithm with a New Initialization Mechanism Based on Regression Techniques
by Ahmad B. Hassanat, V. B. Surya Prasath, Mohammed Ali Abbadi, Salam Amer Abu-Qdari and Hossam Faris
Information 2018, 9(7), 167; https://doi.org/10.3390/info9070167 - 07 Jul 2018
Cited by 45 | Viewed by 6776
Abstract
The genetic algorithm (GA) is a well-known technique from the area of evolutionary computation that plays a significant role in obtaining meaningful solutions to complex problems with large search spaces. GAs involve three fundamental operations after creating an initial population, namely selection, crossover, and mutation. The first task in GAs is to create an appropriate initial population. Traditionally, GAs with a randomly generated initial population are widely used, as this is simple and efficient; however, the generated population may contain individuals of poor fitness. Low-quality or poor-fitness individuals may cause the algorithm to take a long time to converge to an optimal (or near-optimal) solution. Therefore, the fitness or quality of the initial population plays a significant role in determining an optimal or near-optimal solution. In this work, we propose a new method for initial population seeding based on linear regression analysis of the problem tackled by the GA, here the traveling salesman problem (TSP). The proposed regression-based technique divides a given large-scale TSP instance into smaller sub-problems. This is done using the regression line and its perpendicular line, which repeatedly cluster the cities into four sub-problems; the location of each city determines which cluster it belongs to. The algorithm works recursively until the size of a sub-problem becomes very small, for instance four cities or fewer. Such cities are likely to neighbor each other, so connecting them creates a reasonably good solution to start with, and this solution is mutated several times to form the initial population. We analyze the performance of the GA when using traditional population seeding techniques, such as random and nearest-neighbor seeding, along with the proposed regression-based technique. The experiments are carried out using some of the well-known TSP instances obtained from the TSPLIB, which is the standard library for TSP problems. Quantitative analysis is carried out using statistical test tools: analysis of variance (ANOVA), Duncan’s multiple range test (DMRT), and least significant difference (LSD). The experimental results show that the GA using the proposed regression-based technique for population seeding outperforms GAs that use traditional population seeding techniques, such as the random and nearest-neighbor based techniques, in terms of error rate and average convergence. Full article
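A sketch of the recursive regression-based partitioning described above: each cluster of cities is split into four by the least-squares regression line through it and the perpendicular line at its centroid, until clusters hold only a few cities; stitching the small clusters together and mutating the result would then yield the initial population. The size threshold and the stitching step here are illustrative assumptions.

```python
import numpy as np

def regression_split(cities, max_size=4):
    """Recursively split a set of 2-D city coordinates into quadrants defined by the
    regression line through them and its perpendicular at the centroid."""
    if len(cities) <= max_size:
        return [cities]
    x, y = cities[:, 0], cities[:, 1]
    slope, intercept = np.polyfit(x, y, 1)                 # least-squares regression line
    above = y > slope * x + intercept                      # side of the regression line
    right = (x - x.mean()) + slope * (y - y.mean()) > 0    # side of the perpendicular line
    groups = []
    for a in (True, False):
        for r in (True, False):
            sub = cities[(above == a) & (right == r)]
            if len(sub) == len(cities):                    # degenerate split, stop recursing
                return [cities]
            if len(sub):
                groups.append(sub)
    return [g for sub in groups for g in regression_split(sub, max_size)]

rng = np.random.default_rng(1)
cities = rng.uniform(0, 100, size=(60, 2))
clusters = regression_split(cities)
seed_tour = np.vstack(clusters)      # concatenating neighbouring clusters gives a rough seed tour
print(len(clusters), seed_tour.shape)
```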
16 pages, 320 KiB  
Article
Analysis of Conversation Competencies in Strategic Alignment between Business Areas (External Control) and Information Technology Areas in a Control Body
by Alberto Leite Câmara, Rejane Maria da Costa Figueiredo and Edna Dias Canedo
Information 2018, 9(7), 166; https://doi.org/10.3390/info9070166 - 07 Jul 2018
Cited by 3 | Viewed by 3550
Abstract
The process of governance in the domain of Information and Communication Technologies (ICT) has been the subject of many studies in recent years, especially as regards the strategic alignment between the business and ICT areas. However, only a handful of those studies focused on the relationships that exist between these areas, specifically the conversation competencies that so strongly influence their alignment. This study sought to investigate and analyze the gaps that exist in such conversation competencies, as found in a Brazilian Control Body, according to the perceptions of the officers in the business and ICT areas. The survey tool was a questionnaire whose construction was based on the conversation competencies, sent to all the officers of the Body’s areas. It was found that there were 28 gaps in the conversation competencies of the Brazilian Control Body that may be developed to improve the alignment of the business and ICT areas. As a path for future work, a recommendation is made for the creation of a research tool that allows verification of the percentage of alignment between ICT services and the business requirements. Full article
17 pages, 2842 KiB  
Article
Long-Short-Term Memory Network Based Hybrid Model for Short-Term Electrical Load Forecasting
by Liwen Xu, Chengdong Li, Xiuying Xie and Guiqing Zhang
Information 2018, 9(7), 165; https://doi.org/10.3390/info9070165 - 07 Jul 2018
Cited by 28 | Viewed by 4195
Abstract
Short-term electrical load forecasting is of great significance to the safe operation, efficient management, and reasonable scheduling of the power grid. However, the electrical load can be affected by different kinds of external disturbances, thus, there exist high levels of uncertainties in the electrical load time series data. As a result, it is a challenging task to obtain accurate forecasting of the short-term electrical load. In order to further improve the forecasting accuracy, this study combines the data-driven long-short-term memory network (LSTM) and extreme learning machine (ELM) to present a hybrid model-based forecasting method for the prediction of short-term electrical loads. In this hybrid model, the LSTM is adopted to extract the deep features of the electrical load while the ELM is used to model the shallow patterns. In order to generate the final forecasting result, the predicted results of the LSTM and ELM are ensembled by the linear regression method. Finally, the proposed method is applied to two real-world electrical load forecasting problems, and detailed experiments are conducted. In order to verify the superiority and advantages of the proposed hybrid model, it is compared with the LSTM model, the ELM model, and the support vector regression (SVR). Experimental and comparison results demonstrate that the proposed hybrid model can give satisfactory performance and can achieve much better performance than the comparative methods in this short-term electrical load forecasting application. Full article
(This article belongs to the Section Information Applications)
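The fusion step described above, combining the LSTM and ELM outputs by linear regression, can be sketched as follows; the component forecasts are taken as given arrays, and the variable names and validation split are assumptions for illustration.

```python
import numpy as np

def fuse_forecasts(pred_a, pred_b, y_true, pred_a_new, pred_b_new):
    """Fit y ~ w1*pred_a + w2*pred_b + b by least squares on a validation window,
    then apply the learned weights to new component forecasts."""
    design = np.column_stack([pred_a, pred_b, np.ones_like(pred_a)])
    coef, *_ = np.linalg.lstsq(design, y_true, rcond=None)
    design_new = np.column_stack([pred_a_new, pred_b_new, np.ones_like(pred_a_new)])
    return design_new @ coef

# Toy usage: two imperfect component forecasts of the same load curve.
t = np.linspace(0, 4 * np.pi, 200)
load = 100 + 20 * np.sin(t)
lstm_like = load + np.random.normal(0, 3, 200)
elm_like = 0.9 * load + 5
print(fuse_forecasts(lstm_like[:150], elm_like[:150], load[:150],
                     lstm_like[150:], elm_like[150:])[:5])
```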
15 pages, 1534 KiB  
Article
Linking Open Descriptions of Social Events (LODSE): A New Ontology for Social Event Classification
by Marcelo Rodrigues, Rodrigo Rocha Silva and Jorge Bernardino
Information 2018, 9(7), 164; https://doi.org/10.3390/info9070164 - 04 Jul 2018
Cited by 10 | Viewed by 4256
Abstract
The digital era has brought a number of significant changes in the world of communications. Although technological evolution has allowed the creation of new social event platforms to disclose events, it is still difficult to know what is happening around a location. Currently, a large number of social events are created and promoted on social networks. With the massive quantity of information created in these systems, finding an event is challenging because sometimes the data is ambiguous or incomplete. One of the main challenges in social event classification is related to the incompleteness and ambiguity of metadata created by users. This paper presents a new ontology, named LODSE (Linking Open Descriptions of Social Events) based on the LODE (Linking Open Descriptions of Events) ontology to describe the domain model of social events. The aim of this ontology is to create a data model that allows definition of the most important properties to describe a social event and to improve the classification of events. The proposed data model is used in an experimental evaluation to compare both ontologies in social event classification. The experimental evaluation, using a dataset based on real data from a popular social network, demonstrated that the data model based on the LODSE ontology brings several benefits in the classification of events. Using the LODSE ontology, the results show an increment of correctly classified events as well as a gain in execution time, when comparing with the data model based on the LODE ontology. Full article
24 pages, 5493 KiB  
Article
A Top-Down Interactive Visual Analysis Approach for Physical Simulation Ensembles at Different Aggregation Levels
by Alexey Fofonov and Lars Linsen
Information 2018, 9(7), 163; https://doi.org/10.3390/info9070163 - 03 Jul 2018
Cited by 2 | Viewed by 3795
Abstract
Physical simulations aim at modeling and computing spatio-temporal phenomena. As the simulations depend on initial conditions and/or parameter settings whose impact is to be investigated, a larger number of simulation runs is commonly executed. Analyzing all facets of such multi-run multi-field spatio-temporal simulation data poses a challenge for visualization. It requires the design of different visual encodings that aggregate information in multiple ways and at multiple abstraction levels. We present a top-down interactive visual analysis tool of multi-run data from physical simulations named MultiVisA that is based on plots at different aggregation levels. The most aggregated visual representation is a histogram-based plot that allows for the investigation of the distribution of function values within all simulation runs. When expanding over time, a density-based time-series plot allows for the detection of temporal patterns and outliers within the ensemble of multiple runs for single and multiple fields. Finally, not aggregating over runs in a similarity-based plot allows for the comparison of multiple or individual runs and their behavior over time. Coordinated views allow for linking the plots of the three aggregation levels to spatial visualizations in physical space. We apply MultiVisA to physical simulations from the field of climate research and astrophysics. We document the analysis process, demonstrate its effectiveness, and provide evaluations involving domain experts. Full article
(This article belongs to the Special Issue Selected Papers from IVAPP 2018)
18 pages, 892 KiB  
Article
A Green Supplier Assessment Method for Manufacturing Enterprises Based on Rough ANP and Evidence Theory
by Lianhui Li and Hongguang Wang
Information 2018, 9(7), 162; https://doi.org/10.3390/info9070162 - 02 Jul 2018
Cited by 7 | Viewed by 4008
Abstract
Within the context of increasingly serious global environmental problems, green supplier assessment has become one of the key links in modern green supply chain management. In the actual work of green supplier assessment, the information of potential suppliers is often ambiguous or even absent, and there are interrelationships and feedback-like effects among assessment indexes. Additionally, the thinking of experts in index importance judgment is always ambiguous and subjective. To handle the uncertainty and incompleteness in green supplier assessment, we propose a green supplier assessment method based on rough ANP and evidence theory. The uncertain index value is processed by membership degree. Trapezoidal fuzzy number is adopted to express experts’ judgment on the relative importance of the indexes, and rough boundary interval is used to integrate the judgment opinions of multiple experts. The ANP structure is built to deal with the interrelationship and feedback-like effects among indexes. Then, the index weight is calculated by ANP method. Finally, the green suppliers are assessed by a trust interval, based on evidence theory. The feasibility and effectiveness of the proposed method is verified by an application of a bearing cage supplier assessment. Full article
16 pages, 3462 KiB  
Article
AAC Double Compression Audio Detection Algorithm Based on the Difference of Scale Factor
by Qijuan Huang, Rangding Wang, Diqun Yan and Jian Zhang
Information 2018, 9(7), 161; https://doi.org/10.3390/info9070161 - 02 Jul 2018
Cited by 6 | Viewed by 4205
Abstract
Audio double compression detection is an important part of audio forensics; it is of great significance for judging whether audio has been falsified or forged. This study found that the advanced audio coding (AAC) scale factor gradually decreases as the number of compressions increases. Based on this, we propose an AAC double compression audio detection algorithm based on the statistical characteristics of the scale factor difference before and after audio re-compression. The experimental results show that the algorithm can accurately classify double-compressed AAC audio. The average classification accuracy for AAC audio transcoded from a low bit rate to a high bit rate is 99.91%, and the accuracy for re-compression at the same bit rate is 97.98%. In addition, experiments with different durations, different noises, and different encoders also confirmed the good performance of the algorithm. Full article
12 pages, 5106 KiB  
Article
Using the Logistic Coupled Map for Public Key Cryptography under a Distributed Dynamics Encryption Scheme
by Hugo Solís-Sánchez and E. Gabriela Barrantes
Information 2018, 9(7), 160; https://doi.org/10.3390/info9070160 - 02 Jul 2018
Cited by 5 | Viewed by 3913
Abstract
Nowadays, there is a strong need to create new and robust cryptosystems. Dynamical systems are promising for developing cryptosystems due to the close relationship between them and cryptographic requirements. Distributed dynamic encryption (DDE) represents the first mathematical method to generate a public-key cryptosystem based on chaotic dynamics. However, it has been described that the DDE proposal has a weak point in the decryption process related to efficiency and practicality. In this work, we adapted DDE to a low-dimensional chaotic system to evaluate the weakness and security of the adaptation in a realistic example. Specifically, we used a non-symmetric logistic coupled map, which is known to have multiple chaotic attractors, overcoming the shortcomings of the simple logistic map, which is inadequate for cryptographic applications. We found a full implementation with acceptable computational cost and speed for DDE, which is essential because it satisfies a key cryptographic requirement for chaos-based cryptosystems. Full article
(This article belongs to the Section Information Theory and Methodology)
12 pages, 2295 KiB  
Article
A Bloom Filter for High Dimensional Vectors
by Chunyan Shuai, Hengcheng Yang, Xin Ouyang and Zeweiyi Gong
Information 2018, 9(7), 159; https://doi.org/10.3390/info9070159 - 02 Jul 2018
Cited by 1 | Viewed by 3737
Abstract
Regardless of the type of data, traditional Bloom filters treat each element of a set as a string, and by iterating over every character of the string, they discretize all data randomly and uniformly. However, as the data size and dimension increase, these variants become inefficient. To better discretize vectors with high numerical dimensions, this paper improves the string hashes to integer hashes. Based on the integer hashes and a counter array, we propose a new variant, the high-dimensional Bloom filter (HDBF), to extend the Bloom filter into high-dimensional spaces, which can represent and query numerical vectors of a big set with a low false positive probability. This paper theoretically analyzes the feasibility of integer hashes for discretizing data and discusses the relationship among the parameters of the HDBF. The experiments illustrate that, in high-dimensional numerical spaces, the HDBF shows better randomness in distribution and entropy than the counting Bloom filter. Compared with parallel Bloom filters, for a fixed false positive probability, the HDBF is more suitable for dealing with high-dimensional numerical vectors in terms of time-space overheads. Full article
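The core idea, hashing quantised numeric coordinates with integer hash functions into a counter array, can be sketched as follows; the quantisation step, the FNV-style mixing, and the parameter choices are illustrative assumptions rather than the paper’s exact construction.

```python
class HDBF:
    """Counting Bloom filter for numeric vectors keyed by integer hashes (a sketch)."""

    def __init__(self, m=1 << 20, k=4, scale=1000):
        self.m, self.k, self.scale = m, k, scale
        self.counters = [0] * m

    def _indices(self, vec):
        q = [int(round(v * self.scale)) for v in vec]   # quantise coordinates to integers
        indices = []
        for i in range(self.k):
            h = 1469598103934665603 ^ i                 # FNV-1a style mixing, seeded per hash
            for v in q:
                h = ((h ^ (v & 0xFFFFFFFFFFFFFFFF)) * 1099511628211) & 0xFFFFFFFFFFFFFFFF
            indices.append(h % self.m)
        return indices

    def add(self, vec):
        for i in self._indices(vec):
            self.counters[i] += 1

    def query(self, vec):
        """True if the vector may be in the set; False means definitely not."""
        return all(self.counters[i] > 0 for i in self._indices(vec))

bf = HDBF()
bf.add([1.25, -3.70, 0.02])
print(bf.query([1.25, -3.70, 0.02]), bf.query([9.99, 9.99, 9.99]))
```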
24 pages, 3253 KiB  
Article
Research on the Weighted Dynamic Evolution Model for Space Information Networks Based on Local-World
by Shaobo Yu, Lingda Wu, Xiuqing Mu and Wei Xiong
Information 2018, 9(7), 158; https://doi.org/10.3390/info9070158 - 29 Jun 2018
Cited by 3 | Viewed by 3365
Abstract
As an important piece of national strategic infrastructure, the Space Information Network (SIN) is a powerful platform for future information support, and it plays an important role in many areas such as national defense and people’s livelihood. In this paper, we review typical and mainstream topology evolution models from different periods, and analyze the need for a dynamic evolution model of SIN. Combining its concept and characteristics, we analyze the topology structure and local-world phenomenon of SIN, and define the dynamic topology model. Based on a systematic discussion of dynamic evolution rules, we propose a weighted local-world dynamic evolution model of SIN, including the construction algorithm and implementation process. We carry out a quantitative analysis of four indicators: node degree, node strength, edge weight, and the correlation of strength and degree. Using a univariate control method, we analyze the impact of the parameters local-world size M and extra traffic load α on the network topology features. Simulation results show that the model exhibits topology features similar to those of the theoretical analysis and of real networks, which verifies the validity and feasibility of the proposed model. Finally, we summarize the advantages and disadvantages of the weighted local-world dynamic evolution model of SIN, and outline future work. The research aim of this paper is to provide methods and techniques to support the construction and management of SIN. Full article
(This article belongs to the Section Information Systems)
22 pages, 369 KiB  
Article
Pythagorean Fuzzy Interaction Muirhead Means with Their Application to Multi-Attribute Group Decision-Making
by Yuan Xu, Xiaopu Shang and Jun Wang
Information 2018, 9(7), 157; https://doi.org/10.3390/info9070157 - 27 Jun 2018
Cited by 20 | Viewed by 3480
Abstract
Due to the increased complexity of real decision-making problems, representing attribute values correctly and appropriately is always a challenge. The recently proposed Pythagorean fuzzy set (PFS) is a powerful and useful tool for handling fuzziness and vagueness. The feature of PFS that the square sum of membership and non-membership degrees should be less than or equal to one provides more freedom for decision makers to express their assessments and further results in less information loss. The aim of this paper is to develop some Pythagorean fuzzy aggregation operators to aggregate Pythagorean fuzzy numbers (PFNs). Additionally, we propose a novel approach to multi-attribute group decision-making (MAGDM) based on the proposed operators. Considering the Muirhead mean (MM) can capture the interrelationship among all arguments, and the interaction operational rules for PFNs can make calculation results more reasonable, to take full advantage of both, we extend MM to PFSs and propose a family of Pythagorean fuzzy interaction Muirhead mean operators. Some desirable properties and special cases of the proposed operators are also investigated. Further, we present a novel approach to MAGDM with Pythagorean fuzzy information. Finally, we provide a numerical instance to illustrate the validity of the proposed model. In addition, we perform a comparative analysis to show the superiorities of the proposed method. Full article
18 pages, 1468 KiB  
Article
Upsampling for Improved Multidimensional Attribute Space Clustering of Multifield Data
by Vladimir Molchanov and Lars Linsen
Information 2018, 9(7), 156; https://doi.org/10.3390/info9070156 - 27 Jun 2018
Cited by 2 | Viewed by 3855
Abstract
Clustering algorithms in high-dimensional spaces require many data points to perform reliably and robustly. For multivariate volume data, it is possible to interpolate between the data points in the high-dimensional attribute space based on their spatial relationship in the volumetric domain (or physical space). Thus, a sufficiently high number of data points can be generated, overcoming the curse of dimensionality for this particular type of multidimensional data. We applied this idea to a histogram-based clustering algorithm. We created a uniform partition of the attribute space into multidimensional bins and computed a histogram indicating the number of data samples belonging to each bin. Without interpolation, the analysis was highly sensitive to the histogram cell sizes, yielding inaccurate clustering for improper choices: large histogram cells result in no cluster separation, while clusters fall apart for small cells. Using an interpolation in physical space, we could refine the data by generating additional samples. The depth of the refinement scheme was chosen according to the local data point distribution in attribute space and the histogram’s bin size. In the case of field discontinuities representing sharp material boundaries in the volume data, the interpolation can be adapted to locally make use of a nearest-neighbor interpolation scheme that avoids averaging values across the sharp boundary. Consequently, we could generate a density computation where clusters stay connected even when using very small bin sizes. We exploited this result to create a robust hierarchical cluster tree, applied our technique to several datasets, and compared the cluster trees before and after interpolation. Full article
(This article belongs to the Special Issue Selected Papers from IVAPP 2018)
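A sketch of the densification step: multifield values given on a spatial grid are interpolated in physical space to generate extra attribute-space samples before the multidimensional histogram is built; the grid layout, refinement factor, and bin count are illustrative assumptions, and the adaptive nearest-neighbor handling of discontinuities is not shown.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def upsample_attribute_samples(grid_vals, factor=2):
    """grid_vals: (nx, ny, n_fields) multifield data on a 2-D grid.
    Returns densified attribute vectors of shape (n_samples, n_fields)."""
    nx, ny, nf = grid_vals.shape
    xs = np.linspace(0, nx - 1, (nx - 1) * factor + 1)
    ys = np.linspace(0, ny - 1, (ny - 1) * factor + 1)
    pts = np.array(np.meshgrid(xs, ys, indexing="ij")).reshape(2, -1).T
    fields = [RegularGridInterpolator((np.arange(nx), np.arange(ny)), grid_vals[..., k])(pts)
              for k in range(nf)]                       # linear interpolation per field
    return np.stack(fields, axis=1)

def attribute_histogram(samples, bins=16):
    """Multidimensional histogram over attribute space, the basis for the clustering."""
    return np.histogramdd(samples, bins=bins)[0]

x, y = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20), indexing="ij")
grid_vals = np.stack([np.sin(4 * x), np.cos(4 * y)], axis=-1)   # two synthetic fields
dense = upsample_attribute_samples(grid_vals, factor=3)
print(dense.shape, attribute_histogram(dense).shape)
```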
14 pages, 456 KiB  
Article
Efficient Low-Resource Compression of HIFU Data
by Petr Kleparnik, David Barina, Pavel Zemcik and Jiri Jaros
Information 2018, 9(7), 155; https://doi.org/10.3390/info9070155 - 26 Jun 2018
Cited by 3 | Viewed by 4300
Abstract
Large-scale numerical simulations of high-intensity focused ultrasound (HIFU), important for model-based treatment planning, generate large amounts of data. Typically, it is necessary to save hundreds of gigabytes during a simulation. We propose a novel algorithm for time-varying simulation data compression specialised for HIFU. Our approach is particularly focused on on-the-fly parallel data compression during simulations. The algorithm is able to compress 3D pressure time series of linear and non-linear simulations with very acceptable compression ratios and errors (over 80% of the space can be saved with an acceptable error). The proposed compression enables a significant reduction of resources, such as storage space, network bandwidth, CPU time, and so forth, enabling better treatment planning using fast volume data visualisations. The paper describes the proposed method, its experimental evaluation, and comparisons to the state of the art. Full article
(This article belongs to the Special Issue Information-Centered Healthcare)
27 pages, 15815 KiB  
Article
An Evolutionary Algorithm for an Optimization Model of Edge Bundling
by Joelma De M. Ferreira, Hugo A. D. Do Nascimento and Les R. Foulds
Information 2018, 9(7), 154; https://doi.org/10.3390/info9070154 - 26 Jun 2018
Cited by 5 | Viewed by 4081
Abstract
This paper discusses three edge bundling optimization problems that take minimizing the total number of bundles of a graph drawing, in conjunction with other aspects, as the main goal. A novel evolutionary edge bundling algorithm for these problems is described. The algorithm was successfully tested by solving the related problems on real-world instances in reasonable computational time. The development and analysis of optimization models have received little attention in the area of edge bundling. However, the reported experimental results demonstrate the effectiveness and applicability of the proposed evolutionary algorithm for resolving edge bundling problems by formally defining them as optimization models. Full article
(This article belongs to the Special Issue Selected Papers from IVAPP 2018)