Table of Contents

Information, Volume 8, Issue 4 (December 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official version. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Displaying articles 1-47

Editorial

Jump to: Research, Review

Open Access Editorial: Editorial of the Special Issue “Intelligent Transportation Systems”
Information 2017, 8(4), 146; doi:10.3390/info8040146
Received: 8 November 2017 / Revised: 8 November 2017 / Accepted: 8 November 2017 / Published: 12 November 2017
PDF Full-text (151 KB) | HTML Full-text | XML Full-text
Abstract
Transportation systems are very important in modern life; therefore, massive research efforts have been devoted to this field of study in the recent past. Effective vehicular connectivity techniques can significantly enhance travel efficiency, reduce traffic incidents, improve safety, and alleviate the impact of congestion, constituting the so-called Intelligent Transportation Systems (ITS) experience. [...]
(This article belongs to the Special Issue Intelligent Transportation Systems)

Research

Jump to: Editorial, Review

Open Access Article: Leak Location of Pipeline with Multibranch Based on a Cyber-Physical System
Information 2017, 8(4), 113; doi:10.3390/info8040113
Received: 28 August 2017 / Revised: 12 September 2017 / Accepted: 15 September 2017 / Published: 22 September 2017
PDF Full-text (2439 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Data cannot be shared, and leaks cannot be located, simultaneously among multiple pipeline leak detection systems. Based on a cyber-physical system (CPS) architecture, a method for locating leaks in multibranch pipelines is proposed. The singular points of the pressure signals at the ends of a multibranch pipeline are analyzed by wavelet packet analysis so that time-feature samples can be established. Then, the Fischer-Burmeister function is introduced into the learning process of the twin support vector machine (TWSVM) to avoid matrix inversion, and the samples are input into the improved twin support vector machine (ITWSVM) to distinguish the pipeline leak location. The simulation results show that the proposed method is more effective than back propagation (BP) neural networks, radial basis function (RBF) neural networks, and the Lagrange twin support vector machine.
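The Fischer-Burmeister device the abstract mentions can be illustrated in isolation: the function vanishes exactly when its two arguments satisfy a complementarity condition, which is what lets the optimality system be recast as smooth root-finding instead of matrix inversion. A minimal sketch of the function alone, not the authors' ITWSVM solver:

```python
import math

def fischer_burmeister(a, b):
    """Fischer-Burmeister NCP function: phi(a, b) = sqrt(a^2 + b^2) - a - b.
    phi(a, b) == 0 if and only if a >= 0, b >= 0, and a * b == 0, so a
    complementarity constraint can be replaced by the equation phi(a, b) = 0."""
    return math.sqrt(a * a + b * b) - a - b

# Complementary pairs map to roots; a non-complementary pair does not.
print(fischer_burmeister(0.0, 3.0))  # 0.0
print(fischer_burmeister(2.0, 0.0))  # 0.0
print(fischer_burmeister(1.0, 1.0))  # sqrt(2) - 2, nonzero
```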

Open Access Article: Predicting DNA Motifs by Using Multi-Objective Hybrid Adaptive Biogeography-Based Optimization
Information 2017, 8(4), 115; doi:10.3390/info8040115
Received: 7 August 2017 / Revised: 6 September 2017 / Accepted: 18 September 2017 / Published: 21 September 2017
PDF Full-text (3372 KB) | HTML Full-text | XML Full-text
Abstract
The computational discovery of DNA motifs is one of the most important problems in molecular and computational biology, and it has not yet been resolved in an efficient manner. In previous research, we solved the single-objective motif discovery problem (MDP) based on biogeography-based optimization (BBO) and obtained excellent results. In this study, we apply a multi-objective biogeography-based optimization algorithm to the multi-objective motif discovery problem, which refers to the discovery of novel transcription factor binding sites in DNA sequences. For this, we propose an improved multi-objective hybridization of adaptive biogeography-based optimization with differential evolution (DE), named MHABBO, to predict motifs from DNA sequences. In the MHABBO algorithm, the fitness function based on distribution information among the habitat individuals and the Pareto dominance relation are redefined. Based on the relationship between the cost of the fitness function and the average cost in each generation, the MHABBO algorithm adaptively changes the migration and mutation probabilities. Additionally, the mutation procedure combined with the DE algorithm is modified, and the migration operators based on the number of iterations are improved to meet motif discovery requirements. Furthermore, the immigration and emigration rates based on a cosine curve are modified, so the algorithm can generate promising candidate solutions. Statistical comparisons with the DEPT and MOGAMOD approaches on three commonly used datasets demonstrate the validity and effectiveness of the MHABBO algorithm, which performs better than some typical existing approaches in terms of the quality of the final solutions.

Open Access Article: Interval Type-2 Fuzzy Model Based on Inverse Controller Design for the Outlet Temperature Control System of Ethylene Cracking Furnace
Information 2017, 8(4), 116; doi:10.3390/info8040116
Received: 30 August 2017 / Revised: 18 September 2017 / Accepted: 20 September 2017 / Published: 22 September 2017
PDF Full-text (4039 KB) | HTML Full-text | XML Full-text
Abstract
Multivariable coupling, nonlinearity, and large time delays exist in the coil outlet temperature (COT) control system of the ethylene cracking furnace, which makes it hard to achieve accurate control over the COT of the furnace in actual production. To solve these problems, an inverse controller based on an interval type-2 fuzzy model control strategy is introduced. The proposed control scheme is divided into two parts: one is the approach-structure part of the interval type-2 fuzzy model (IT2-FM), which is utilized to approximate the process output; the other is the interval type-2 fuzzy model inverse controller (IT2-FMIC) part, which is utilized to control the process output to achieve the target value. In addition, on a cyber-physical system platform, actual industrial data are used to test and obtain the mathematical model of the COT control system of the ethylene cracking furnace. Finally, the proposed inverse controller based on the IT2-FM control scheme has been implemented on the COT control system of the ethylene cracking furnace, and the simulation results show that the proposed method is feasible.

Open Access Article: Cosine Measures of Linguistic Neutrosophic Numbers and Their Application in Multiple Attribute Group Decision-Making
Information 2017, 8(4), 117; doi:10.3390/info8040117
Received: 21 August 2017 / Revised: 14 September 2017 / Accepted: 19 September 2017 / Published: 22 September 2017
PDF Full-text (269 KB) | HTML Full-text | XML Full-text
Abstract
Linguistic neutrosophic numbers (LNNs) can express the truth, indeterminacy, and falsity degrees independently by three linguistic variables. Hence, they are an effective tool for describing indeterminate linguistic information in linguistic decision-making environments. Similarity measures are common tools in decision-making problems; however, existing cosine similarity measures cannot deal with linguistic information in such environments. To address this issue, we propose two cosine similarity measures based on distance and the included-angle cosine of two vectors between LNNs. Then, we establish a multiple attribute group decision-making (MAGDM) method based on the cosine similarity measures under an LNN environment. Finally, a practical example about the decision-making problem of investment alternatives is presented to demonstrate the effective application of the proposed MAGDM method under an LNN environment.
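For reference, the included-angle cosine that the measure above builds on is the standard cosine of the angle between two vectors; a minimal sketch, where the example triples are hypothetical illustrations and not the paper's LNN formulation:

```python
import math

def cosine_similarity(x, y):
    """Included-angle cosine of two equal-length vectors:
    cos(theta) = <x, y> / (||x|| * ||y||), in [-1, 1]."""
    dot = sum(a * b for a, b in zip(x, y))
    norm_x = math.sqrt(sum(a * a for a in x))
    norm_y = math.sqrt(sum(b * b for b in y))
    return dot / (norm_x * norm_y)

# Treating two (truth, indeterminacy, falsity) index triples as vectors
# (made-up values for illustration):
print(round(cosine_similarity([6, 1, 1], [5, 2, 1]), 4))
```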
Open Access Article: A Dynamic Spectrum Allocation Algorithm for a Maritime Cognitive Radio Communication System Based on a Queuing Model
Information 2017, 8(4), 119; doi:10.3390/info8040119
Received: 23 August 2017 / Revised: 15 September 2017 / Accepted: 26 September 2017 / Published: 27 September 2017
PDF Full-text (7762 KB) | HTML Full-text | XML Full-text
Abstract
With the rapid development of maritime digital communication, the demand for spectrum resources is increasing, and building a maritime cognitive radio communication system is an effective solution. In this paper, the problem of how to effectively allocate spectrum to secondary users (SUs) with different priorities in a maritime cognitive radio communication system is studied. According to the characteristics of a maritime cognitive radio and existing research on cognitive radio systems, this paper establishes a centralized maritime cognitive radio communication model and creates a simplified queuing model with two queues for the communication model. In view of the behaviors of SUs and primary users (PUs), we propose a dynamic spectrum allocation (DSA) algorithm based on the system status and analyze it with a two-dimensional Markov chain. Simulation results show that, when different types of SUs have similar arrival rates, the algorithm can vary the priority factor according to the change of users’ status in the system, so as to adjust the channel allocation and decrease system congestion. The improvement of the algorithm is about 7–26%, and the specific improvement is negatively correlated with the SU arrival rate.
(This article belongs to the Section Information Theory and Methodology)

Open Access Article: A Novel Hybrid BND-FOA-LSSVM Model for Electricity Price Forecasting
Information 2017, 8(4), 120; doi:10.3390/info8040120
Received: 31 August 2017 / Revised: 21 September 2017 / Accepted: 25 September 2017 / Published: 28 September 2017
PDF Full-text (2866 KB) | HTML Full-text | XML Full-text
Abstract
Accurate electricity price forecasting plays an important role in the profits of electricity market participants and the healthy development of the electricity market. However, electricity price time series are volatile and random, which makes it quite hard to forecast electricity prices accurately. In this paper, a novel hybrid model for electricity price forecasting is proposed, combining the Beveridge-Nelson decomposition (BND) method, the fruit fly optimization algorithm (FOA), and the least square support vector machine (LSSVM) model, namely the BND-FOA-LSSVM model. Firstly, the original electricity price time series is decomposed into a deterministic term, a periodic term, and a stochastic term by using the BND model. Then, these three decomposed terms are forecasted by the LSSVM model, respectively. Meanwhile, to improve the forecasting performance, a new swarm intelligence optimization algorithm, FOA, is used to automatically determine the optimal parameters of the LSSVM model for the deterministic, periodic, and stochastic term forecasts. Finally, the electricity price forecast is obtained by multiplying the forecast values of these three terms. The results show that the mean absolute percentage error (MAPE), root mean square error (RMSE), and mean absolute error (MAE) of the proposed BND-FOA-LSSVM model are 3.48%, 11.18 Yuan/MWh, and 9.95 Yuan/MWh, respectively, which are much smaller than those of the LSSVM, BND-LSSVM, FOA-LSSVM, auto-regressive integrated moving average (ARIMA), and empirical mode decomposition (EMD)-FOA-LSSVM models. The proposed BND-FOA-LSSVM model is effective and practical for electricity price forecasting and can improve forecasting accuracy.
(This article belongs to the Section Artificial Intelligence)
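The three error criteria reported in the abstract (MAPE, RMSE, MAE) are the standard forecasting metrics; a quick sketch of how they are computed, with made-up sample prices rather than the paper's data:

```python
def forecast_errors(actual, predicted):
    """Return (MAPE in %, RMSE, MAE) for paired actual/predicted series."""
    n = len(actual)
    mape = 100.0 / n * sum(abs((a - p) / a) for a, p in zip(actual, predicted))
    rmse = (sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n) ** 0.5
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
    return mape, rmse, mae

prices = [310.0, 295.0, 320.0]      # Yuan/MWh, illustrative only
forecast = [300.0, 300.0, 315.0]
print(forecast_errors(prices, forecast))
```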

Open Access Article: Offset Free Tracking Predictive Control Based on Dynamic PLS Framework
Information 2017, 8(4), 121; doi:10.3390/info8040121
Received: 30 August 2017 / Revised: 23 September 2017 / Accepted: 25 September 2017 / Published: 10 October 2017
PDF Full-text (2384 KB) | HTML Full-text | XML Full-text
Abstract
This paper develops an offset-free tracking model predictive control based on a dynamic partial least squares (PLS) framework. First, a state-space model is used as the inner model of PLS to describe the dynamic system, and a subspace identification method is used to identify the inner model. Based on the obtained model, multiple independent model predictive control (MPC) controllers are designed. Due to the decoupling character of PLS, these controllers run separately, which is suitable for a distributed control framework. In addition, the increment of the inner model output is considered in the cost function of the MPC, which introduces integral action into the controller. Hence, offset-free tracking performance is guaranteed. The results of an industry-background simulation demonstrate the effectiveness of the proposed method.

Open Access Article: Neutrosophic Similarity Score Based Weighted Histogram for Robust Mean-Shift Tracking
Information 2017, 8(4), 122; doi:10.3390/info8040122
Received: 25 August 2017 / Revised: 29 September 2017 / Accepted: 30 September 2017 / Published: 2 October 2017
PDF Full-text (3361 KB) | HTML Full-text | XML Full-text
Abstract
Visual object tracking is a critical task in computer vision, and challenges always arise when an object needs to be tracked. For instance, background clutter is one of the most challenging problems. The mean-shift tracker is quite popular because of its efficiency and performance in a range of conditions, but background clutter also disturbs its performance. In this article, we propose a novel weighted histogram based on the neutrosophic similarity score to help the mean-shift tracker discriminate the target from the background. The neutrosophic set (NS) is a new branch of philosophy for dealing with incomplete, indeterminate, and inconsistent information. In this paper, we utilize the single-valued neutrosophic set (SVNS), a subclass of NS, to improve the mean-shift tracker. First, two kinds of criteria are considered, the object feature similarity and the background feature similarity, and each bin of the weight histogram is represented in the SVNS domain via three membership functions: T (truth), I (indeterminacy), and F (falsity). Second, the neutrosophic similarity score function is introduced to fuse those two criteria and build the final weight histogram. Finally, a novel neutrosophic weighted mean-shift tracker is proposed. The proposed tracker is compared with several mean-shift-based trackers on a dataset of 61 public sequences. The results reveal that our method outperforms the other trackers, especially when confronting background clutter.
(This article belongs to the Special Issue Neutrosophic Information Theory and Applications)

Open Access Article: Efficient Data Collection by Mobile Sink to Detect Phenomena in Internet of Things
Information 2017, 8(4), 123; doi:10.3390/info8040123
Received: 13 September 2017 / Revised: 25 September 2017 / Accepted: 28 September 2017 / Published: 3 October 2017
PDF Full-text (1618 KB) | HTML Full-text | XML Full-text
Abstract
With the rapid development of the Internet of Things (IoT), more and more static and mobile sensors are being deployed for sensing and tracking environmental phenomena, such as fire, oil spills, and air pollution. As these sensors are usually battery-powered, energy-efficient algorithms are required to extend the sensors’ lifetime. Moreover, forwarding sensed data towards a static sink quickly depletes the batteries of the sink’s nearby sensors. Therefore, in this paper, we propose a distributed energy-efficient algorithm, called the Hilbert-order Collection Strategy (HCS), which uses a mobile sink (e.g., a drone) to collect data from a mobile wireless sensor network (mWSN) and detect environmental phenomena. The mWSN consists of mobile sensors that sense environmental data and self-organize into groups. The sensors of each group elect a group head (GH), which collects data from the mobile sensors in its group. Periodically, the mobile sink passes by the locations of the GHs (the data collection path) to collect their data, and the collected data are aggregated to discover a global phenomenon. To shorten the data collection path, and thus reduce the energy cost, the mobile sink establishes the path based on the order of the Hilbert values of the GHs’ locations. Furthermore, the paper proposes two optimization techniques for data collection to further reduce the energy cost of the mWSN and reduce data loss.
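The path-shortening idea above can be sketched with the standard iterative Hilbert-curve index (the classic xy2d mapping): each GH's grid cell is mapped to its distance along the curve, and the sink visits GHs in ascending order. The grid size and GH coordinates below are illustrative assumptions, not values from the paper:

```python
def xy2d(n, x, y):
    """Map cell (x, y) on an n-by-n grid (n a power of two) to its distance d
    along the Hilbert curve (classic iterative algorithm)."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                      # rotate/reflect the quadrant
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

# Hypothetical group-head locations; sorting by Hilbert value yields a short,
# locality-preserving collection path for the mobile sink.
group_heads = [(3, 1), (0, 2), (1, 1), (2, 3)]
path = sorted(group_heads, key=lambda p: xy2d(4, p[0], p[1]))
print(path)  # -> [(1, 1), (0, 2), (2, 3), (3, 1)]
```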

Open Access Article: Multi-Path Data Distribution Mechanism Based on RPL for Energy Consumption and Time Delay
Information 2017, 8(4), 124; doi:10.3390/info8040124
Received: 4 September 2017 / Revised: 16 September 2017 / Accepted: 2 October 2017 / Published: 9 October 2017
PDF Full-text (5042 KB) | HTML Full-text | XML Full-text
Abstract
RPL (Routing Protocol for Low-Power and Lossy Networks) is a routing protocol for low-power and lossy networks. In such a network, energy is a very scarce resource, so many studies focus on minimizing global energy consumption. End-to-end latency is another important performance indicator, but existing research tends to focus on energy consumption and ignore the end-to-end delay of data transmission. In this paper, we propose an energy-balancing routing protocol that maximizes the survival time of constrained nodes by keeping the energy consumed by each node close to equal. At the same time, a multi-path forwarding route is proposed based on cache utilization: data are sent to the sink node through different parent nodes with a certain probability, rather than only through the preferred parent node, thus avoiding buffer overflow and reducing end-to-end delay. Finally, the two algorithms are combined to accommodate different application scenarios. The experimental results show that the three proposed schemes improve the reliability of the routing, extend the lifetime of the network, reduce the end-to-end delay, and reduce the number of DAG reconfigurations.
(This article belongs to the Section Information and Communications Technology)
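The probabilistic multi-path forwarding idea can be sketched as weighted random parent selection, here weighting each candidate by its free buffer space. This is a toy illustration under assumed data structures (the candidate list and buffer counts are hypothetical, not from the paper or the RPL specification):

```python
import random

def pick_parent(candidates, rng=random):
    """Choose a parent with probability proportional to its free buffer space,
    instead of always forwarding through the single preferred parent."""
    total = sum(free for _, free in candidates)
    if total == 0:
        return candidates[0][0]   # all buffers full: fall back to preferred
    r = rng.uniform(0, total)
    acc = 0
    for name, free in candidates:
        acc += free
        if r <= acc:
            return name
    return candidates[-1][0]

# Parent "A" (preferred) is nearly full, so most packets detour via B or C,
# spreading load and avoiding buffer overflow at A.
parents = [("A", 1), ("B", 6), ("C", 3)]
rng = random.Random(42)
counts = {"A": 0, "B": 0, "C": 0}
for _ in range(1000):
    counts[pick_parent(parents, rng)] += 1
print(counts)   # roughly 10% / 60% / 30%
```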

Open Access Article: TODIM Method for Single-Valued Neutrosophic Multiple Attribute Decision Making
Information 2017, 8(4), 125; doi:10.3390/info8040125
Received: 20 September 2017 / Revised: 9 October 2017 / Accepted: 11 October 2017 / Published: 16 October 2017
Cited by 1 | PDF Full-text (292 KB) | HTML Full-text | XML Full-text
Abstract
Recently, the TODIM method has been used to solve multiple attribute decision making (MADM) problems. Single-valued neutrosophic sets (SVNSs) are useful tools to depict the uncertainty of MADM. In this paper, we extend the TODIM method to MADM with single-valued neutrosophic numbers (SVNNs). Firstly, the definition, comparison, and distance of SVNNs are briefly presented, and the steps of the classical TODIM method for MADM problems are introduced. Then, the extended classical TODIM method is proposed to deal with MADM problems with SVNNs; its significant characteristic is that it can fully consider the decision makers’ bounded rationality, which is a real behavior in decision making. Furthermore, we extend the proposed model to interval neutrosophic sets (INSs). Finally, a numerical example is presented.
(This article belongs to the Special Issue Neutrosophic Information Theory and Applications)
Open Access Feature Paper Article: A Novel Grey Prediction Model Combining Markov Chain with Functional-Link Net and Its Application to Foreign Tourist Forecasting
Information 2017, 8(4), 126; doi:10.3390/info8040126
Received: 31 August 2017 / Revised: 29 September 2017 / Accepted: 3 October 2017 / Published: 13 October 2017
PDF Full-text (833 KB) | HTML Full-text | XML Full-text
Abstract
Grey prediction models for time series have been widely applied to demand forecasting because only limited data are required to build a time series model without any statistical assumptions. Previous studies have demonstrated that combining grey prediction with neural networks helps grey prediction perform better. Some methods have been presented to improve the prediction accuracy of the popular GM(1,1) model by using a Markov chain to estimate the residual needed to modify a predicted value. Compared to previous Grey-Markov models, this study contributes by applying the functional-link net to estimate the degree to which a predicted value obtained from the GM(1,1) model should be adjusted. Furthermore, the troublesome number of states and their bounds, which are not easily specified in a Markov chain, are determined by a genetic algorithm. To verify prediction performance, the proposed grey prediction model was applied to an important grey system problem: foreign tourist forecasting. Experimental results show that the proposed model provides satisfactory results compared to the other Grey-Markov models considered.
(This article belongs to the Section Information Systems)

Open Access Feature Paper Article: Neutrosophic N-Structures Applied to BCK/BCI-Algebras
Information 2017, 8(4), 128; doi:10.3390/info8040128
Received: 12 September 2017 / Accepted: 6 October 2017 / Published: 16 October 2017
Cited by 1 | PDF Full-text (262 KB) | HTML Full-text | XML Full-text
Abstract
Neutrosophic N-structures with applications in BCK/BCI-algebras are discussed. The notions of a neutrosophic N-subalgebra and a (closed) neutrosophic N-ideal in a BCK/BCI-algebra are introduced, and several related properties are investigated. Characterizations of a neutrosophic N-subalgebra and a neutrosophic N-ideal are considered, and relations between them are stated. Conditions for a neutrosophic N-ideal to be a closed neutrosophic N-ideal are provided.
(This article belongs to the Special Issue Neutrosophic Information Theory and Applications)
Open Access Article: On the Implementation of a Cloud-Based Computing Test Bench Environment for Prolog Systems
Information 2017, 8(4), 129; doi:10.3390/info8040129
Received: 13 September 2017 / Revised: 10 October 2017 / Accepted: 13 October 2017 / Published: 19 October 2017
PDF Full-text (518 KB) | HTML Full-text | XML Full-text
Abstract
Software testing and benchmarking are key components of the software development process. Nowadays, a good practice in large software projects is the continuous integration (CI) software development technique. The key idea of CI is to let developers integrate their work as they produce it, instead of performing the integration at the end of each software module. In this paper, we extend previous work on a benchmark suite for the YAP Prolog system and propose a fully automated test bench environment for Prolog systems, named Yet Another Prolog Test Bench Environment (YAPTBE), aimed at assisting developers in the development and CI of Prolog systems. YAPTBE is based on a cloud computing architecture and relies on the Jenkins framework as well as a new Jenkins plugin to manage the underlying infrastructure. We present the key design and implementation aspects of YAPTBE and show its most important features, such as its graphical user interface (GUI) and the automated process that builds and runs Prolog systems and benchmarks.
(This article belongs to the Special Issue Special Issues on Languages Processing)

Open Access Article: Neutrosophic Commutative N-Ideals in BCK-Algebras
Information 2017, 8(4), 130; doi:10.3390/info8040130
Received: 16 September 2017 / Revised: 6 October 2017 / Accepted: 16 October 2017 / Published: 18 October 2017
PDF Full-text (251 KB) | HTML Full-text | XML Full-text
Abstract
The notion of a neutrosophic commutative N-ideal in BCK-algebras is introduced, and several properties are investigated. Relations between a neutrosophic N-ideal and a neutrosophic commutative N-ideal are discussed, and characterizations of a neutrosophic commutative N-ideal are considered.
(This article belongs to the Special Issue Neutrosophic Information Theory and Applications)
Open Access Article: Certain Competition Graphs Based on Intuitionistic Neutrosophic Environment
Information 2017, 8(4), 132; doi:10.3390/info8040132
Received: 7 September 2017 / Revised: 18 October 2017 / Accepted: 19 October 2017 / Published: 24 October 2017
PDF Full-text (396 KB) | HTML Full-text | XML Full-text
Abstract
The concept of intuitionistic neutrosophic sets provides an additional possibility to represent the imprecise, uncertain, inconsistent, and incomplete information that exists in real situations. This research article first presents the notion of intuitionistic neutrosophic competition graphs. Then, p-competition intuitionistic neutrosophic graphs and m-step intuitionistic neutrosophic competition graphs are discussed. Further, applications of intuitionistic neutrosophic competition graphs in ecosystems and career competition are described.
(This article belongs to the Special Issue Neutrosophic Information Theory and Applications)

Open Access Article: Bi-Objective Economic Dispatch of Micro Energy Internet Incorporating Energy Router
Information 2017, 8(4), 133; doi:10.3390/info8040133
Received: 10 September 2017 / Revised: 9 October 2017 / Accepted: 10 October 2017 / Published: 26 October 2017
PDF Full-text (1119 KB) | HTML Full-text | XML Full-text
Abstract
Integrating different energy networks adds flexibility to system operation. The key component in such a coupled infrastructure is the energy router, which plays an important role in energy transition and storage, smoothing the prediction errors in both renewables and load. The router has multi-carrier energy generation capability and builds physical linkages among the power network, the heat network, and other networks in the micro energy internet. The economic dispatch problem of the micro energy internet is formulated as a bi-objective optimization problem, and the golden section search method is adopted to locate a compromise solution in the sense of Nash bargaining. Case studies on a typical test system verify the effectiveness of the proposed bi-objective dispatch model and solution method.
(This article belongs to the Section Information Applications)
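Golden section search, the one-dimensional method the abstract relies on, narrows a bracket around the minimum of a unimodal function by a fixed ratio each step, reusing one function evaluation per iteration. A minimal sketch on a toy scalarized objective (the quadratic stands in for the dispatch trade-off and is not the paper's model):

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Golden-section search for the minimum of a unimodal f on [a, b]."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0     # 1/phi, about 0.618
    c, d = b - (b - a) * invphi, a + (b - a) * invphi
    fc, fd = f(c), f(d)
    while (b - a) > tol:
        if fc < fd:                 # minimum lies in [a, d]
            b, d, fd = d, c, fc     # reuse f(c) as the new f(d)
            c = b - (b - a) * invphi
            fc = f(c)
        else:                       # minimum lies in [c, b]
            a, c, fc = c, d, fd     # reuse f(d) as the new f(c)
            d = a + (b - a) * invphi
            fd = f(d)
    return (a + b) / 2.0

x_star = golden_section_min(lambda w: (w - 0.3) ** 2 + 0.1, 0.0, 1.0)
print(round(x_star, 6))   # ~0.3
```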

Open Access Article: Enhancement of Low Contrast Images Based on Effective Space Combined with Pixel Learning
Information 2017, 8(4), 135; doi:10.3390/info8040135
Received: 19 September 2017 / Revised: 27 October 2017 / Accepted: 27 October 2017 / Published: 1 November 2017
PDF Full-text (12469 KB) | HTML Full-text | XML Full-text
Abstract
Images captured in bad conditions often suffer from low contrast. In this paper, we propose a simple but efficient linear restoration model to enhance low contrast images. The model's design is based on the effective space of the 3D surface graph of the image. Effective space is defined as the minimum space containing the 3D surface graph of the image, and the proportion of the pixel value within the effective space is considered to reflect the details of the image. The bright channel prior and the dark channel prior are used to estimate the effective space; however, they may cause block artifacts. We designed pixel learning to solve this problem. Pixel learning takes the input image as the training example and the low-frequency component of the input as the label, learning pixel by pixel based on a look-up table model. The proposed method is very fast and can restore a high-quality image with fine details. Experimental results on a variety of images captured in bad conditions, such as nonuniform lighting, night, haze and underwater scenes, demonstrate the effectiveness and efficiency of the proposed method.
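A minimal sketch of how effective-space bounds could drive a per-pixel linear restoration, assuming the bright and dark channels are plain local extrema over a small window; the paper's pixel-learning refinement, which removes the resulting block artifacts, is omitted here.

```python
def channel_extrema(img, radius=1):
    """Local min (dark channel) and local max (bright channel) of a 2D
    grayscale image, over a (2*radius+1)^2 neighbourhood."""
    h, w = len(img), len(img[0])
    dark = [[0.0] * w for _ in range(h)]
    bright = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            patch = [img[x][y]
                     for x in range(max(0, i - radius), min(h, i + radius + 1))
                     for y in range(max(0, j - radius), min(w, j + radius + 1))]
            dark[i][j], bright[i][j] = min(patch), max(patch)
    return dark, bright

def linear_restore(img, radius=1, eps=1e-6):
    """Stretch each pixel by its position inside the estimated effective
    space [dark, bright] (a simplified reading of the linear model)."""
    dark, bright = channel_extrema(img, radius)
    return [[(p - d) / max(b - d, eps)
             for p, d, b in zip(row, drow, brow)]
            for row, drow, brow in zip(img, dark, bright)]
```

On a low-contrast patch whose values span only [0.4, 0.6], the sketch stretches the pixels back to the full [0, 1] range.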
Open AccessArticle Fuzzy Extractor and Elliptic Curve Based Efficient User Authentication Protocol for Wireless Sensor Networks and Internet of Things
Information 2017, 8(4), 136; doi:10.3390/info8040136
Received: 21 September 2017 / Revised: 17 October 2017 / Accepted: 24 October 2017 / Published: 30 October 2017
PDF Full-text (501 KB) | HTML Full-text | XML Full-text
Abstract
To improve the quality of service and reduce the possibility of security attacks, a secure and efficient user authentication mechanism is required for Wireless Sensor Networks (WSNs) and the Internet of Things (IoT). Session key establishment between the sensor node and the user is also required for secure communication. In this paper, we perform a security analysis of A. K. Das's user authentication scheme (2015), Choi et al.'s scheme (2016), and Park et al.'s scheme (2016). The analysis shows that their schemes are vulnerable to various attacks, such as user impersonation, sensor node impersonation and attacks based on legitimate users. Based on the cryptanalysis of these existing protocols, we propose a secure and efficient authenticated session key establishment protocol that ensures various security features and overcomes the drawbacks of the existing protocols. Formal and informal security analysis indicates that the proposed protocol withstands the various security vulnerabilities involved in WSNs. Automated validation using the AVISPA and Scyther tools confirms the absence of security attacks in our scheme, and logical verification using Burrows-Abadi-Needham (BAN) logic confirms the correctness of the proposed protocol. Finally, a comparative analysis of computational overhead and security features against other existing protocols indicates that the proposed user authentication system is secure and efficient. In the future, we intend to implement the proposed protocol in real-world applications of WSNs and the IoT.
(This article belongs to the Section Information and Communications Technology)
Open AccessFeature PaperArticle A Distributed Ledger for Supply Chain Physical Distribution Visibility
Information 2017, 8(4), 137; doi:10.3390/info8040137
Received: 9 September 2017 / Revised: 21 October 2017 / Accepted: 30 October 2017 / Published: 2 November 2017
PDF Full-text (956 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Supply chains (SC) span many geographies, modes and industries and involve several phases in which data flows in both directions among suppliers, manufacturers, distributors, retailers and customers. This data flow is necessary to support critical business decisions that may impact product cost and market share. Current SC information systems are unable to provide validated, pseudo real-time shipment tracking during the distribution phase; this information is available from a single source, often the carrier, and is shared with other stakeholders on an as-needed basis. This paper introduces an independent, crowd-validated, online shipment tracking framework that complements current enterprise-based SC management solutions. The proposed framework consists of a set of private distributed ledgers and a single blockchain public ledger. Each private ledger allows the private sharing of custody events among the trading partners in a given shipment. Privacy is necessary, for example, when trading high-end products or chemical and pharmaceutical products. The public ledger consists of the hash code of each private event in addition to monitoring events. The latter provide an independently validated, immutable record of the pseudo real-time geolocation status of the shipment from a large number of sources using commuter-sourcing.
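The split between private custody events and their public hashes can be illustrated with a short sketch. The event schema and the hash-chaining scheme below are simplifications invented for this illustration, not the paper's actual ledger design.

```python
import hashlib
import json

def event_hash(event):
    """Deterministic SHA-256 digest of a custody event (assumed dict schema);
    sorted keys make the serialization canonical."""
    canonical = json.dumps(event, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

class PublicLedger:
    """Append-only public ledger that stores only hashes of private custody
    events (plus monitoring events), chained block-to-block for immutability."""
    def __init__(self):
        self.blocks = []

    def append(self, payload):
        prev = self.blocks[-1]["block_hash"] if self.blocks else "0" * 64
        block_hash = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.blocks.append({"payload": payload, "prev": prev,
                            "block_hash": block_hash})

    def verify(self):
        """Recompute the chain; any tampered payload breaks verification."""
        prev = "0" * 64
        for b in self.blocks:
            if b["prev"] != prev:
                return False
            expected = hashlib.sha256((prev + b["payload"]).encode()).hexdigest()
            if b["block_hash"] != expected:
                return False
            prev = b["block_hash"]
        return True
```

The private event itself never leaves the trading partners; only `event_hash(event)` is appended to the public chain, so outsiders can confirm integrity without seeing the contents.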
Open AccessFeature PaperArticle MR Brain Image Segmentation: A Framework to Compare Different Clustering Techniques
Information 2017, 8(4), 138; doi:10.3390/info8040138
Received: 8 October 2017 / Revised: 1 November 2017 / Accepted: 1 November 2017 / Published: 4 November 2017
PDF Full-text (1124 KB) | HTML Full-text | XML Full-text
Abstract
In Magnetic Resonance (MR) brain image analysis, segmentation is commonly used for detecting, measuring and analyzing the main anatomical structures of the brain and eventually identifying pathological regions. Brain image segmentation is of fundamental importance since it helps clinicians and researchers to concentrate on specific regions of the brain in order to analyze them. However, segmentation of brain images is a difficult task due to the high similarity and correlation of intensity among different regions of the brain image. Among the various methods proposed in the literature, clustering algorithms have proved to be successful tools for image segmentation. In this paper, we present a framework for image segmentation that supports the expert in identifying different brain regions for further analysis. The framework includes different clustering methods for performing segmentation of MR images. Furthermore, it enables easy comparison of different segmentation results by providing a quantitative evaluation using an entropy-based measure as well as other measures commonly used to evaluate segmentation results. To show the potential of the framework, the implemented clustering methods are compared on simulated T1-weighted MR brain images from the Internet Brain Segmentation Repository (IBSR) database, which is provided with ground-truth segmentations.
(This article belongs to the Special Issue Fuzzy Logic for Image Processing)
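The framework's exact entropy-based measure is not spelled out in the abstract; one widely used entropy-based comparison between a segmentation and its ground truth is the variation of information, sketched here as a stand-in.

```python
import math
from collections import Counter

def variation_of_information(seg_a, seg_b):
    """Variation of information VI = H(A) + H(B) - 2 I(A; B) between two
    labelings of the same pixels; 0 means the partitions are identical
    (label names themselves do not matter)."""
    n = len(seg_a)
    pa, pb = Counter(seg_a), Counter(seg_b)
    pab = Counter(zip(seg_a, seg_b))
    h_a = -sum(c / n * math.log(c / n) for c in pa.values())
    h_b = -sum(c / n * math.log(c / n) for c in pb.values())
    mi = sum(c / n * math.log((c / n) / ((pa[a] / n) * (pb[b] / n)))
             for (a, b), c in pab.items())
    return h_a + h_b - 2 * mi
```

A clustering result with permuted labels still scores 0, which is the behavior one wants when comparing unsupervised segmentations against a ground truth.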
Open AccessArticle Structural and Symbolic Information in the Context of the General Theory of Information
Information 2017, 8(4), 139; doi:10.3390/info8040139
Received: 26 September 2017 / Revised: 26 October 2017 / Accepted: 1 November 2017 / Published: 6 November 2017
PDF Full-text (234 KB) | HTML Full-text | XML Full-text
Abstract
The general theory of information, which includes syntactic, semantic, pragmatic, and many other special theories of information, provides theoretical and practical tools for discerning a very large diversity of kinds, types, and classes of information. Some of these kinds, types, and classes are more important than others. Two basic classes are formed by structural and symbolic information. While structural information is intrinsically embedded in the structure of the corresponding object or domain, symbolic information is represented by symbols whose meaning is subject to arbitrary conventions between people. As a result, symbolic information exists only in the context of life, including technical and theoretical constructs created by humans, whereas structural information relates to any objects, systems, and processes regardless of the existence or presence of life. In this paper, the properties of structural and symbolic information are explored in the formal framework of the general theory of information developed by Burgin, because this theory offers more powerful instruments for the inquiry. Structural information is further differentiated into inherent, descriptive, and constructive types, and the correctness and uniqueness properties of these types are investigated. In addition, the predictive power of symbolic information accumulated in the course of natural evolution is considered. The phenomenon of ritualization is described as a general transition process from structural to symbolic information.
Open AccessArticle An Opportunistic Routing for Data Forwarding Based on Vehicle Mobility Association in Vehicular Ad Hoc Networks
Information 2017, 8(4), 140; doi:10.3390/info8040140
Received: 22 September 2017 / Revised: 19 October 2017 / Accepted: 23 October 2017 / Published: 7 November 2017
PDF Full-text (928 KB) | HTML Full-text | XML Full-text
Abstract
Vehicular ad hoc networks (VANETs) have emerged as a powerful new technology for data transmission between vehicles. Efficient data transmission with low data delay plays an important role in selecting the ideal data forwarding path in VANETs. This paper proposes a new opportunistic routing protocol for data forwarding based on vehicle mobility association (OVMA). With assistance from the vehicle mobility association, data can be forwarded without passing through many extra intermediate nodes. In addition, each vehicle carries only the replica information recording its associated vehicle information, so the routing decision can adapt to the vehicle densities. Simulation results show that the OVMA protocol can extend the network lifetime, improve the data delivery ratio, and reduce the data delay and routing overhead compared to other well-known routing protocols.
(This article belongs to the Section Information Applications)
Open AccessArticle Rate Optimization of Two-Way Relaying with Wireless Information and Power Transfer
Information 2017, 8(4), 141; doi:10.3390/info8040141
Received: 19 September 2017 / Revised: 10 October 2017 / Accepted: 1 November 2017 / Published: 8 November 2017
PDF Full-text (998 KB) | HTML Full-text | XML Full-text
Abstract
We consider simultaneous wireless information and power transfer in two-phase decode-and-forward two-way relaying networks, where a relay harvests energy from the signal to be relayed through either power splitting or time splitting. We formulate resource allocation problems that optimize the time-phase and signal splitting ratios to maximize the sum rate of the two communicating devices. The joint optimization problems are shown to be convex for both the power splitting and time splitting approaches, after some transformations where required, so that they can be solved with an existing solver. To lower the computational complexity, we also present suboptimal methods that optimize the splitting ratio for a fixed time-phase, and derive a closed-form solution for the suboptimal method based on power splitting. The results demonstrate that the power splitting approaches outperform their time splitting counterparts, and that the suboptimal power splitting approach provides performance close to the optimal one while reducing the complexity significantly.
(This article belongs to the Special Issue Wireless Energy Harvesting for Future Wireless Communications)
Open AccessArticle Arabic Handwritten Digit Recognition Based on Restricted Boltzmann Machine and Convolutional Neural Networks
Information 2017, 8(4), 142; doi:10.3390/info8040142
Received: 14 August 2017 / Revised: 6 November 2017 / Accepted: 8 November 2017 / Published: 9 November 2017
PDF Full-text (2062 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Handwritten digit recognition is an open problem in computer vision and pattern recognition, and solving it has elicited increasing interest. The main challenge is the design of an efficient method that can recognize handwritten digits submitted by users via digital devices. Numerous studies have been proposed over the years to improve handwritten digit recognition in various languages, but research on handwritten digit recognition in Arabic is limited. At present, deep learning algorithms are extremely popular in computer vision and are used to address important problems such as image classification, natural language processing, and speech recognition, providing computers with sensory capabilities approaching those of humans. In this study, we propose a new approach for Arabic handwritten digit recognition that uses the restricted Boltzmann machine (RBM) and convolutional neural network (CNN) deep learning algorithms. The approach works in two phases. First, in the feature extraction phase, we use the RBM, a deep learning technique that can extract highly useful features from raw data and that has been utilized as a feature extractor in several classification problems. Then, the extracted features are fed to an efficient CNN architecture with deep supervised learning for the training and testing process. In the experiments, we used the CMATERDB 3.3.1 Arabic handwritten digit dataset for training and testing the proposed method. Experimental results show that the proposed method significantly improves the accuracy rate, reaching 98.59%. Finally, a comparison of our results with those of other studies on the CMATERDB 3.3.1 dataset shows that our approach achieves the highest accuracy rate.
Open AccessArticle The Impact of Message Replication on the Performance of Opportunistic Networks for Sensed Data Collection
Information 2017, 8(4), 143; doi:10.3390/info8040143
Received: 3 October 2017 / Revised: 6 November 2017 / Accepted: 6 November 2017 / Published: 9 November 2017
PDF Full-text (6481 KB) | HTML Full-text | XML Full-text
Abstract
Opportunistic networks (OppNets) provide a scalable solution for collecting delay-tolerant data from sensors and delivering it to their respective gateways. Portable handheld user devices contribute significantly to the scalability of OppNets, since their number grows with the user population and they closely follow human movement patterns. Hence, OppNets for sensed data collection are characterised by high node populations and the degrees of spatial locality inherent in user movement. We study the impact of these characteristics on the performance of existing OppNet message replication techniques. Our findings reveal that existing replication techniques are not specifically designed to cope with these characteristics, raising concerns about excessive message transmission overhead and throughput degradation due to the resource constraints and technological limitations of portable handheld user devices. Based on concepts derived from the study, we suggest design guidelines to augment existing message replication techniques, and follow these guidelines to propose a message replication technique named Locality Aware Replication (LARep). Simulation results show that LARep achieves better network performance under high node populations and degrees of spatial locality than existing techniques.
Open AccessArticle VIKOR Method for Interval Neutrosophic Multiple Attribute Group Decision-Making
Information 2017, 8(4), 144; doi:10.3390/info8040144
Received: 21 October 2017 / Revised: 7 November 2017 / Accepted: 8 November 2017 / Published: 10 November 2017
PDF Full-text (267 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we extend the VIKOR (VIsekriterijumska optimizacija i KOmpromisno Resenje) method to multiple attribute group decision-making (MAGDM) with interval neutrosophic numbers (INNs). Firstly, the basic concepts of INNs are briefly presented. The method then aggregates all individual decision-makers' assessment information using an interval neutrosophic weighted averaging (INWA) operator, and employs the extended classical VIKOR method to solve MAGDM problems with INNs. The validity and stability of the method are verified by an example analysis and a sensitivity analysis, and its superiority is illustrated by comparison with existing methods.
(This article belongs to the Special Issue Neutrosophic Information Theory and Applications)
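For reference, the classical crisp VIKOR backbone that the paper extends can be sketched as follows. Benefit-type criteria are assumed, and the interval neutrosophic aggregation step (the INWA operator) is not reproduced; this is the textbook crisp method, not the paper's extension.

```python
def vikor(matrix, weights, v=0.5):
    """Classical crisp VIKOR: compute the group utility S, the individual
    regret R, and the compromise index Q for each alternative (rows of
    `matrix`); smaller Q ranks higher. All criteria are benefit-type."""
    m, n = len(matrix), len(matrix[0])
    f_star = [max(row[j] for row in matrix) for j in range(n)]   # best value per criterion
    f_minus = [min(row[j] for row in matrix) for j in range(n)]  # worst value per criterion
    S, R = [], []
    for row in matrix:
        terms = []
        for j in range(n):
            d = f_star[j] - f_minus[j]
            terms.append(0.0 if d == 0 else weights[j] * (f_star[j] - row[j]) / d)
        S.append(sum(terms))   # group utility
        R.append(max(terms))   # individual regret
    s_star, s_minus = min(S), max(S)
    r_star, r_minus = min(R), max(R)
    Q = [v * (s - s_star) / ((s_minus - s_star) or 1.0)
         + (1 - v) * (r - r_star) / ((r_minus - r_star) or 1.0)
         for s, r in zip(S, R)]
    ranking = sorted(range(m), key=lambda i: Q[i])
    return ranking, Q
```

In the INN extension, the crisp entries of `matrix` are replaced by interval neutrosophic numbers and the distances to the ideal solutions are computed with INN score functions.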
Open AccessArticle End-to-End Delay Model for Train Messaging over Public Land Mobile Networks
Information 2017, 8(4), 145; doi:10.3390/info8040145
Received: 16 October 2017 / Revised: 7 November 2017 / Accepted: 8 November 2017 / Published: 11 November 2017
PDF Full-text (4005 KB) | HTML Full-text | XML Full-text
Abstract
Modern train control systems rely on a dedicated radio network for train-to-ground communications. A number of alternatives have been analysed for adopting the European Rail Traffic Management System/European Train Control System (ERTMS/ETCS) on local and regional lines to improve transport capacity. Among them, a communication system based on public networks (cellular and satellite) provides an interesting and effective alternative to proprietary and expensive radio networks. To analyse the performance of this solution, it is necessary to model the end-to-end delay and message loss in order to fully characterize the message transfer process from train to ground and vice versa. Starting from the results of a railway test campaign over a 300 km railway line, for a cumulative 12,000 km traveled in 21 days, we derive a statistical model for the end-to-end delay required for delivering messages. In particular, we propose a two-state model that reproduces the main behavioral characteristics of the end-to-end delay as observed experimentally. The model formulation was derived after a thorough analysis of the recorded experimental data. When applied to a realistic scenario, the model explicitly accounts for radio coverage characteristics, the received power level, the handover points along the line, and the serving radio technology. As an example, the proposed model is used to generate the end-to-end delay profile in a realistic scenario.
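As an illustration of the two-state idea, the sketch below draws delays from a short-delay "good coverage" state and a long-delay "bad coverage" state governed by a simple Markov chain. All transition probabilities and delay distributions here are invented placeholders, not the model fitted from the test campaign.

```python
import random

def simulate_delays(n, p_gb=0.05, p_bg=0.30, seed=1):
    """Two-state Markov model of end-to-end message delay: a 'good' state
    with short delays and a 'bad' state (poor coverage, handover) with long
    ones. Lognormal parameters are illustrative, in seconds."""
    rng = random.Random(seed)
    state, delays, states = "good", [], []
    for _ in range(n):
        states.append(state)                       # state used for this message
        if state == "good":
            delays.append(rng.lognormvariate(-1.0, 0.5))   # median ~0.37 s
            if rng.random() < p_gb:
                state = "bad"
        else:
            delays.append(rng.lognormvariate(1.5, 0.8))    # median ~4.5 s
            if rng.random() < p_bg:
                state = "good"
    return delays, states
```

The sojourn times in each state are geometric, which reproduces the bursty alternation between nominal delays and long delay spells that such a model is meant to capture.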
Open AccessArticle Land Cover Classification from Multispectral Data Using Computational Intelligence Tools: A Comparative Study
Information 2017, 8(4), 147; doi:10.3390/info8040147
Received: 8 October 2017 / Revised: 7 November 2017 / Accepted: 13 November 2017 / Published: 15 November 2017
PDF Full-text (7230 KB) | HTML Full-text | XML Full-text
Abstract
This article discusses how computational intelligence techniques are applied to fuse spectral images into a higher-level image of land cover distribution for remote sensing, specifically for satellite image classification. We compare a fuzzy-inference method with two other computational intelligence methods, decision trees and neural networks, using a case study of land cover classification from satellite images. An unsupervised approach based on k-means clustering is also considered for comparison. The fuzzy-inference method includes training the classifier with a fuzzy-fusion technique and then performing land cover classification using reinforcement aggregation operators. To assess the robustness of the four methods, a comparative study covering three years of land cover maps for the district of Mandimba, Niassa province, Mozambique, was undertaken. Our results show that the fuzzy-fusion method performs similarly to decision trees, achieving reliable classifications; that neural networks suffer from overfitting; and that k-means clustering constitutes a promising technique for identifying land cover types in unknown areas.
(This article belongs to the Special Issue Fuzzy Logic for Image Processing)
Open AccessArticle Source Code Documentation Generation Using Program Execution
Information 2017, 8(4), 148; doi:10.3390/info8040148
Received: 30 September 2017 / Revised: 13 November 2017 / Accepted: 14 November 2017 / Published: 17 November 2017
PDF Full-text (291 KB) | HTML Full-text | XML Full-text
Abstract
Automated source code documentation approaches often describe methods in abstract terms, using the words contained in the static source code or code excerpts from repositories. In this paper, we describe DynamiDoc: a simple automated documentation generator based on dynamic analysis. Our representation-based approach traces the program being executed and records string representations of concrete argument values, the return value, and the target object's state before and after each method execution. Then, for each method, it generates documentation sentences with examples, such as "When called on [3, 1.2] with element = 3, the object changed to [1.2]". Advantages and shortcomings of the approach are listed. We also found that the generated sentences are substantially shorter than the methods they describe. According to our small-scale study, the majority of objects in the generated documentation have their string representations overridden, which further confirms the potential usefulness of our approach. Finally, we propose an alternative, variable-based approach that describes the values of individual member variables rather than the state of the object as a whole.
(This article belongs to the Special Issue Special Issues on Languages Processing)
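The core idea is easy to approximate with a tracing decorator. This is a simplified re-implementation in the spirit of DynamiDoc, not the authors' tool; the `Bag` class and the sentence wording are chosen to mirror the abstract's example.

```python
import functools

def dynamidoc(method):
    """Record a documentation sentence per execution: argument values,
    return value, and the receiver's string representation before/after."""
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        before = str(self)
        arg_text = ", ".join([repr(a) for a in args] +
                             [f"{k} = {v!r}" for k, v in kwargs.items()])
        result = method(self, *args, **kwargs)
        after = str(self)
        if before != after:
            sentence = (f"When called on {before} with {arg_text}, "
                        f"the object changed to {after}.")
        else:
            sentence = (f"When called on {before} with {arg_text}, "
                        f"it returned {result!r}.")
        wrapper.sentences.append(sentence)
        return result
    wrapper.sentences = []
    return wrapper

class Bag:
    """Toy container whose __str__ is overridden, as the abstract notes
    most documented objects' representations are."""
    def __init__(self, items):
        self.items = list(items)
    def __str__(self):
        return str(self.items)
    @dynamidoc
    def remove(self, element):
        self.items.remove(element)
```

Calling `Bag([3, 1.2]).remove(element=3)` appends exactly the example sentence quoted in the abstract to `Bag.remove.sentences`.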
Open AccessArticle NC-TODIM-Based MAGDM under a Neutrosophic Cubic Set Environment
Information 2017, 8(4), 149; doi:10.3390/info8040149
Received: 19 October 2017 / Revised: 11 November 2017 / Accepted: 14 November 2017 / Published: 18 November 2017
PDF Full-text (1647 KB) | HTML Full-text | XML Full-text
Abstract
A neutrosophic cubic set is the hybridization of the concepts of a neutrosophic set and an interval neutrosophic set, and has the capacity to express the hybrid information of both the interval neutrosophic set and the single valued neutrosophic set simultaneously. Because the notion is newly defined, little research on the operations and applications of neutrosophic cubic sets has been reported in the literature. In the present paper, we propose score and accuracy functions for neutrosophic cubic sets, prove their basic properties, and develop a strategy for ranking neutrosophic cubic numbers based on these functions. We then develop, for the first time, a TODIM (Tomada de decisão interativa e multicritério) strategy in the neutrosophic cubic set (NC) environment, which we call NC-TODIM, for solving multi attribute group decision making (MAGDM) problems. We illustrate the proposed NC-TODIM strategy on a MAGDM problem to show its applicability and effectiveness, and conduct a sensitivity analysis to show how the ranking order of the alternatives changes with the attenuation factor of losses.
(This article belongs to the Special Issue Neutrosophic Information Theory and Applications)
Open AccessArticle Investigating the Statistical Distribution of Learning Coverage in MOOCs
Information 2017, 8(4), 150; doi:10.3390/info8040150
Received: 30 September 2017 / Revised: 17 November 2017 / Accepted: 17 November 2017 / Published: 20 November 2017
PDF Full-text (531 KB) | HTML Full-text | XML Full-text
Abstract
Learners participating in Massive Open Online Courses (MOOCs) have a wide range of backgrounds and motivations. Many MOOC learners enroll in a course to take a brief look; only a few go through the entire content, and even fewer eventually obtain a certificate. We discovered this phenomenon after examining 92 courses on the xuetangX and edX platforms. More specifically, we found that the learning coverage in many courses, one of the metrics used to estimate the learners' active engagement with the online courses, follows a Zipf distribution. We apply the maximum likelihood estimation method to fit Zipf's law and test our hypothesis using a chi-square test. In the xuetangX dataset, the learning coverage in 53 of 76 courses fits Zipf's law, whereas in all 16 courses on the edX platform the learning coverage rejects it. The results of our study are expected to bring insight into the unique learning behavior on MOOCs.
(This article belongs to the Special Issue Supporting Technologies and Enablers for Big Data)
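The fitting procedure described above can be sketched as a grid-search maximum-likelihood estimate of the Zipf exponent plus a Pearson chi-square statistic. The grid range and the synthetic data in the test are assumptions; the per-course coverage data is of course not reproduced here.

```python
import math

def zipf_mle(counts, s_grid=None):
    """Maximum-likelihood estimate of the Zipf exponent s for observed
    frequencies counts[k-1] of ranks k = 1..K, by grid search over the
    log-likelihood (grid bounds are an assumption)."""
    K = len(counts)
    if s_grid is None:
        s_grid = [0.5 + 0.01 * i for i in range(251)]  # s in [0.5, 3.0]
    def loglik(s):
        norm = sum(r ** -s for r in range(1, K + 1))
        return sum(c * (-s * math.log(r) - math.log(norm))
                   for r, c in enumerate(counts, start=1))
    return max(s_grid, key=loglik)

def chi_square_stat(counts, s):
    """Pearson chi-square statistic of the observed counts against Zipf(s),
    to be compared with a chi-square quantile with K - 2 degrees of freedom."""
    K, n = len(counts), sum(counts)
    norm = sum(r ** -s for r in range(1, K + 1))
    stat = 0.0
    for r, c in enumerate(counts, start=1):
        expected = n * r ** -s / norm
        stat += (c - expected) ** 2 / expected
    return stat
```

Sampling 5,000 draws from a true Zipf(1.2) over 50 ranks and refitting recovers the exponent closely, and the chi-square statistic at the MLE is far smaller than at a badly wrong exponent.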
Open AccessArticle A New Anomaly Detection System for School Electricity Consumption Data
Information 2017, 8(4), 151; doi:10.3390/info8040151
Received: 29 September 2017 / Revised: 8 November 2017 / Accepted: 16 November 2017 / Published: 20 November 2017
PDF Full-text (5039 KB) | HTML Full-text | XML Full-text
Abstract
Anomaly detection has been widely used in a variety of research and application domains, such as network intrusion detection, insurance and credit card fraud detection, health-care informatics, industrial damage detection, image processing, and novel topic detection in text mining. In this paper, we focus on remote facilities management that identifies anomalous events in buildings by detecting anomalies in building electricity consumption data. We investigated five models on electricity consumption data from different schools. Furthermore, we proposed a hybrid model that combines polynomial regression and a Gaussian distribution, which detects anomalies in the data with zero false negatives and an average precision higher than 91%. Based on the proposed model, we developed an anomaly detection and visualization system for a facilities management company, which was tested and evaluated by facilities managers. According to the evaluation, our system has improved the efficiency with which facilities managers identify anomalies in the data.
(This article belongs to the Special Issue Supporting Technologies and Enablers for Big Data)
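A hybrid of this kind can be sketched in a few lines: fit a polynomial trend to the consumption series, model the residuals as Gaussian, and flag large deviations. The degree, the threshold `k`, and the toy consumption series in the test are assumptions; the paper's model details are not reproduced.

```python
import statistics

def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations, solved with
    Gaussian elimination; returns coefficients c[0] + c[1] x + ..."""
    m = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    for col in range(m):                       # elimination with partial pivoting
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * m
    for r in range(m - 1, -1, -1):             # back substitution
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, m))) / A[r][r]
    return coef

def detect_anomalies(xs, ys, degree=2, k=3.0):
    """Hybrid detector sketch: fit the consumption trend with a polynomial,
    model the residuals as Gaussian, and flag points more than k standard
    deviations from the fitted curve."""
    coef = polyfit(xs, ys, degree)
    fitted = [sum(c * x ** i for i, c in enumerate(coef)) for x in xs]
    resid = [y - f for y, f in zip(ys, fitted)]
    mu, sigma = statistics.mean(resid), statistics.pstdev(resid)
    return [i for i, r in enumerate(resid) if abs(r - mu) > k * sigma]
```

On a smooth quadratic consumption curve with one injected spike, only the spiked reading falls outside the 3-sigma band.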
Open AccessArticle Ensemble of Filter-Based Rankers to Guide an Epsilon-Greedy Swarm Optimizer for High-Dimensional Feature Subset Selection
Information 2017, 8(4), 152; doi:10.3390/info8040152
Received: 28 September 2017 / Revised: 19 October 2017 / Accepted: 20 November 2017 / Published: 22 November 2017
PDF Full-text (640 KB) | HTML Full-text | XML Full-text
Abstract
The main purpose of feature subset selection is to remove irrelevant and redundant features from data, so that learning algorithms can be trained on a subset of relevant features. Many algorithms have been developed for feature subset selection, and most of them suffer from two major problems when solving high-dimensional datasets: First, some of these algorithms search in a high-dimensional feature space without any domain knowledge about feature importance. Second, most of these algorithms were originally designed for continuous optimization problems, whereas feature selection is a binary optimization problem. To overcome these weaknesses, we propose a novel hybrid filter-wrapper algorithm, called Ensemble of Filter-based Rankers to guide an Epsilon-greedy Swarm Optimizer (EFR-ESO), for solving high-dimensional feature subset selection. The Epsilon-greedy Swarm Optimizer (ESO) is a novel binary swarm intelligence algorithm introduced in this paper as the wrapper component. In the proposed EFR-ESO, we extract knowledge about feature importance through the ensemble of filter-based rankers and then use this knowledge to weight the feature probabilities in the ESO. Experiments on 14 high-dimensional datasets indicate that the proposed algorithm has excellent performance in terms of both classification error rate and the number of selected features. Full article
(This article belongs to the Special Issue Feature Selection for High-Dimensional Data)
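The core idea of guiding a wrapper search with filter-based importance can be illustrated with a tiny epsilon-greedy selector. This is a sketch of the epsilon-greedy principle only, not the EFR-ESO algorithm itself; all names and scores are invented for illustration:

```python
import random

def epsilon_greedy_subset(importance, subset_size, epsilon=0.1, rng=None):
    """Pick `subset_size` features: with probability 1 - epsilon exploit the
    filter-based importance ranking, otherwise explore a random feature.
    `importance` maps feature index -> ensemble filter score."""
    rng = rng or random.Random(0)
    ranked = sorted(importance, key=importance.get, reverse=True)
    chosen, remaining = [], set(importance)
    while len(chosen) < subset_size:
        if rng.random() < epsilon:
            pick = rng.choice(sorted(remaining))              # explore
        else:
            pick = next(f for f in ranked if f in remaining)  # exploit
        chosen.append(pick)
        remaining.remove(pick)
    return chosen
```

With epsilon = 0 the selector is purely greedy on the filter scores; raising epsilon trades exploitation for exploration, which is what keeps a swarm from searching blindly in a high-dimensional feature space.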

Open AccessArticle A Routing Protocol Based on Received Signal Strength for Underwater Wireless Sensor Networks (UWSNs)
Information 2017, 8(4), 153; doi:10.3390/info8040153
Received: 23 October 2017 / Revised: 17 November 2017 / Accepted: 22 November 2017 / Published: 24 November 2017
PDF Full-text (2663 KB) | HTML Full-text | XML Full-text
Abstract
Underwater wireless sensor networks (UWSNs) are characterized by long propagation delay, limited energy, narrow bandwidth, high bit error rate (BER) and variable topology. These features make it very difficult to design a low-delay, energy-efficient routing protocol for UWSNs. In this paper, a routing protocol independent of location information, called RRSS, is proposed based on received signal strength (RSS). In RRSS, a sensor node first establishes a vector from itself to a sink node; the length of the vector indicates the RSS of the beacon signal (RSSB) from the sink node. A node selects the next hop along the vector according to RSSB and the RSS of a hello packet (RSSH). Nodes nearer to the vector have higher priority as candidate next hops. To avoid data packets being delivered to neighbor nodes in a void area, a void-avoiding algorithm is introduced. In addition, residual energy is considered when selecting the next hop. Meanwhile, we establish mathematical models to analyze the robustness and energy efficiency of RRSS. Lastly, we conduct extensive simulations, and the simulation results show that RRSS saves energy and decreases end-to-end delay. Full article
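A rough sketch of the next-hop rule the abstract describes, preferring neighbors whose beacon RSS shows progress toward the sink and falling back to the void-avoiding path when no such neighbor exists, might look like this. The field names and the energy tie-break are illustrative assumptions, not the paper's exact criterion (which also uses RSSH and distance to the vector):

```python
def select_next_hop(my_rssb, neighbors):
    """Among neighbors whose beacon RSS (rssb, in dBm) is stronger than our
    own (i.e., closer to the sink), pick the one with the strongest beacon,
    breaking ties by residual energy; return None in a void area so the
    caller can fall back to the void-avoiding algorithm."""
    candidates = [n for n in neighbors if n["rssb"] > my_rssb]
    if not candidates:
        return None
    return max(candidates, key=lambda n: (n["rssb"], n["energy"]))["id"]

neighbors = [
    {"id": "a", "rssb": -70, "energy": 0.9},
    {"id": "b", "rssb": -60, "energy": 0.4},
    {"id": "c", "rssb": -60, "energy": 0.7},
]
hop = select_next_hop(-75, neighbors)
```

Considering residual energy in the tie-break is what spreads load across relays instead of draining a single node.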

Open AccessArticle Certain Concepts in Intuitionistic Neutrosophic Graph Structures
Information 2017, 8(4), 154; doi:10.3390/info8040154
Received: 2 November 2017 / Revised: 19 November 2017 / Accepted: 19 November 2017 / Published: 25 November 2017
PDF Full-text (680 KB) | HTML Full-text | XML Full-text
Abstract
A graph structure is a generalization of simple graphs. Graph structures are very useful tools for the study of different domains of computational intelligence and computer science. In this research paper, we introduce certain notions of intuitionistic neutrosophic graph structures, illustrate them with several examples, and investigate some of their related properties. We also present an application of intuitionistic neutrosophic graph structures. Full article
(This article belongs to the Special Issue Neutrosophic Information Theory and Applications)

Open AccessArticle Face Classification Using Color Information
Information 2017, 8(4), 155; doi:10.3390/info8040155
Received: 29 September 2017 / Revised: 26 October 2017 / Accepted: 23 November 2017 / Published: 26 November 2017
PDF Full-text (3095 KB) | HTML Full-text | XML Full-text
Abstract
Color models are widely used in image recognition because they represent significant information. On the other hand, texture analysis techniques have been extensively used for facial feature extraction. In this paper, we extract discriminative features related to facial attributes by utilizing different color models and texture analysis techniques. Specifically, we propose novel methods for texture analysis to improve the classification performance of race and gender. The proposed methods for texture analysis are based on the Local Binary Pattern and its derivatives. These texture analysis methods are evaluated for six color models (hue, saturation and intensity value (HSV); L*a*b*; RGB; YCbCr; YIQ; YUV) to investigate the effect of each color model. Further, we configure two combinations of color channels to represent color information suitable for gender and race classification of face images. We perform experiments on publicly available face databases. Experimental results show that the proposed approaches are effective for the classification of gender and race. Full article
(This article belongs to the Section Information and Communications Technology)
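The Local Binary Pattern operator that such texture methods build on can be computed in a few lines. This is the basic 3x3 variant only, not the derivatives the paper proposes:

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 Local Binary Pattern: each interior pixel is encoded as an
    8-bit code by comparing its 8 neighbors to the center; histograms of
    these codes then serve as texture features for a classifier."""
    h, w = gray.shape
    center = gray[1:h - 1, 1:w - 1]
    out = np.zeros((h - 2, w - 2), dtype=int)
    # Clockwise neighbor offsets, starting at the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out += (neighbor >= center).astype(int) << bit
    return out
```

A pixel brighter than all of its neighbors yields code 0 and one darker than all of them yields 255; applying the operator per channel of a color model (HSV, YCbCr, etc.) is how color and texture information are combined.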

Open AccessFeature PaperArticle The Emperor of Strong AI Has No Clothes: Limits to Artificial Intelligence
Information 2017, 8(4), 156; doi:10.3390/info8040156
Received: 31 October 2017 / Revised: 20 November 2017 / Accepted: 22 November 2017 / Published: 27 November 2017
PDF Full-text (266 KB) | HTML Full-text | XML Full-text
Abstract
Making use of the techniques of media ecology, we argue that the premise of the technological Singularity, based on the notion that computers will one day be smarter than their human creators, is false. We also analyze the comments of other critics of the Singularity, as well as supporters of this notion. The notion of intelligence that advocates of the technological Singularity promote does not take into account the full dimension of human intelligence; they treat artificial intelligence as a figure without a ground. Human intelligence, as we will show, is not based solely on logical operations and computation, but also includes a long list of other characteristics that are unique to humans, which is the ground that supporters of the Singularity ignore. The list includes curiosity, imagination, intuition, emotions, passion, desires, pleasure, aesthetics, joy, purpose, objectives, goals, telos, values, morality, experience, wisdom, judgment, and even humor. Full article
(This article belongs to the Special Issue AI AND THE SINGULARITY: A FALLACY OR A GREAT OPPORTUNITY?)
Open AccessArticle Bidirectional Long Short-Term Memory Network with a Conditional Random Field Layer for Uyghur Part-Of-Speech Tagging
Information 2017, 8(4), 157; doi:10.3390/info8040157
Received: 30 October 2017 / Revised: 23 November 2017 / Accepted: 27 November 2017 / Published: 30 November 2017
PDF Full-text (603 KB) | HTML Full-text | XML Full-text
Abstract
Uyghur is an agglutinative, morphologically rich language, which makes natural language processing tasks in Uyghur challenging. Word morphology is important in Uyghur part-of-speech (POS) tagging; however, POS tagging performance suffers from the error propagation of morphological analyzers. To address this problem, we propose several models for POS tagging: conditional random fields (CRF), long short-term memory networks (LSTM), bidirectional LSTM networks (BI-LSTM), LSTM networks with a CRF layer, and BI-LSTM networks with a CRF layer. These models do not depend on stemming and word disambiguation for Uyghur and combine hand-crafted features with neural network models. State-of-the-art performance on Uyghur POS tagging is achieved on test data sets using the proposed approach: 98.41% accuracy on 15 labels and 95.74% accuracy on 64 labels, which are 2.71% and 4% improvements, respectively, over the CRF model results. Using engineered features, our model achieves further improvements of 0.2% (15 labels) and 0.48% (64 labels). The results indicate that the proposed method could be an effective approach for POS tagging in other morphologically rich languages. Full article
(This article belongs to the Section Artificial Intelligence)
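At prediction time, the CRF layer on top of the BI-LSTM contributes Viterbi decoding over label-transition scores. A minimal stand-alone Viterbi decoder, with the BI-LSTM's per-token scores stubbed in as a plain matrix, can be sketched as follows (shapes and names are illustrative, not the paper's implementation):

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """CRF decoding: `emissions` is a (T, K) matrix of per-token label
    scores (standing in for BI-LSTM output), `transitions[i, j]` scores a
    move from label i to label j; returns the best-scoring label sequence."""
    T, K = emissions.shape
    score = emissions[0].copy()
    backptr = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # cand[i, j] = score of ending at label j via previous label i.
        cand = score[:, None] + transitions + emissions[t][None, :]
        backptr[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]
```

The transition matrix is what lets the CRF layer veto label sequences that are locally plausible but globally inconsistent, which is why it helps over per-token argmax.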

Open AccessArticle sCwc/sLcc: Highly Scalable Feature Selection Algorithms
Information 2017, 8(4), 159; doi:10.3390/info8040159
Received: 31 October 2017 / Revised: 1 December 2017 / Accepted: 2 December 2017 / Published: 6 December 2017
PDF Full-text (1876 KB) | HTML Full-text | XML Full-text
Abstract
Feature selection is a useful tool for identifying which features, or attributes, of a dataset cause or explain the phenomena that the dataset describes, and for improving the efficiency and accuracy of learning algorithms for discovering such phenomena. Consequently, feature selection has been studied intensively in machine learning research. However, while feature selection algorithms that exhibit excellent accuracy have been developed, they are seldom used for analysis of high-dimensional data, because high-dimensional data usually include too many instances and features, which make traditional feature selection algorithms inefficient. To eliminate this limitation, we tried to improve the run-time performance of two of the most accurate feature selection algorithms known in the literature. The result is two accurate and fast algorithms, namely sCwc and sLcc. Multiple experiments with real social media datasets have demonstrated that our algorithms improve the performance of the original algorithms remarkably. For example, on one dataset with 15,568 instances and 15,741 features, and on another with 200,569 instances and 99,672 features, sCwc performed feature selection in 1.4 s and 405 s, respectively. sLcc has turned out to be as fast as sCwc on average. This is a remarkable improvement, because it is estimated that the original algorithms would need several hours to dozens of days to process the same datasets. In addition, we introduce a fast implementation of our algorithms: sCwc does not require any tuning parameters, while sLcc requires a threshold parameter that can be used to control the number of features the algorithm selects. Full article
(This article belongs to the Special Issue Feature Selection for High-Dimensional Data)

Open AccessArticle Individual Differences, Self-Efficacy, and Chinese Scientists’ Industry Engagement
Information 2017, 8(4), 160; doi:10.3390/info8040160
Received: 18 October 2017 / Revised: 1 December 2017 / Accepted: 1 December 2017 / Published: 8 December 2017
PDF Full-text (396 KB) | HTML Full-text | XML Full-text
Abstract
Research indicates that non-commercial and informal university–industry interactions, defined as academic engagement, account for a larger part of academic knowledge transfer in China and play a more important role than commercialization. This paper explores the effect of Chinese scientists’ individual differences on academic engagement through the lens of social cognitive theory, attempting to interpret how individual differences affect Chinese academics’ industry engagement through self-efficacy. Based on data collected from Chinese universities, the analysis results show that gender, academic rank, industry connections, and previous industrial experience are significantly associated with Chinese scientists’ industry engagement. Furthermore, a scientist’s self-efficacy in industry collaborations is also influenced by these four individual factors. The mediating effects of self-efficacy on the relationship between individual differences and academic engagement are confirmed by the empirical analysis. Implications, limitations, and future research directions are discussed at the end of this paper. Full article

Open AccessArticle Can Computers Become Conscious, an Essential Condition for the Singularity?
Information 2017, 8(4), 161; doi:10.3390/info8040161
Received: 12 November 2017 / Revised: 3 December 2017 / Accepted: 6 December 2017 / Published: 9 December 2017
PDF Full-text (181 KB) | HTML Full-text | XML Full-text
Abstract
Given that consciousness is an essential ingredient for achieving the Singularity, the notion that an Artificial General Intelligence device can exceed the intelligence of a human, the question of whether a computer can achieve consciousness is explored. Given that consciousness is being aware of one’s perceptions and/or of one’s thoughts, it is claimed that computers cannot experience consciousness. Since a computer has no sensorium, it cannot have perceptions. In terms of being aware of its thoughts, it is argued that being aware of one’s thoughts is basically listening to one’s own internal speech. A computer has no emotions, and hence no desire to communicate, and without the ability and/or desire to communicate, it has no internal voice to listen to and hence cannot be aware of its thoughts. In fact, it has no thoughts, because it has no sense of self, and thinking is about preserving one’s self. Emotions have a positive effect on the reasoning powers of humans; therefore, the computer’s lack of emotions is another reason why computers could never achieve the level of intelligence that a human can, at least at the current level of the development of computer technology. Full article
(This article belongs to the Special Issue AI AND THE SINGULARITY: A FALLACY OR A GREAT OPPORTUNITY?)

Review

Jump to: Editorial, Research

Open AccessReview A Survey on Information Diffusion in Online Social Networks: Models and Methods
Information 2017, 8(4), 118; doi:10.3390/info8040118
Received: 16 August 2017 / Revised: 19 September 2017 / Accepted: 22 September 2017 / Published: 29 September 2017
PDF Full-text (1790 KB) | HTML Full-text | XML Full-text
Abstract
Online social networks (OSNs) have by now pervaded personal life, as people move more and more of their offline lives online. Online social networks can therefore reflect the structure of offline human society. A piece of information can be exchanged or diffused between individuals in social networks, and from this diffusion process a great deal of latent information can be mined and used for market prediction, rumor control, and opinion monitoring, among other things. However, research on these applications depends on the underlying diffusion models and methods. For this reason, we survey various information diffusion models from recent decades. From a research-process view, we divide the diffusion models into two categories, explanatory models and predictive models, in which the former includes epidemic and influence models and the latter includes independent cascade, linear threshold, and game theory models. The purpose of this paper is to investigate the research methods and techniques and compare them according to the above categories. The whole research structure of the information diffusion models based on our view is given. There is a discussion at the end of each section, detailing related models mentioned in the literature. We conclude that these two categories of models are not independent; they always complement each other. Finally, open issues in social network research are discussed and summarized, and directions for future study are proposed. Full article
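Of the predictive models surveyed, the independent cascade model is the simplest to state: each newly activated node gets a single chance to activate each inactive neighbor with some probability. A minimal simulation, with an invented toy graph, might look like this:

```python
import random

def independent_cascade(graph, seeds, prob, rng=None):
    """Each newly activated node gets one chance to activate each of its
    inactive neighbors with probability `prob`; returns all activated nodes."""
    rng = rng or random.Random(42)
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for node in frontier:
            for nb in graph.get(node, []):
                if nb not in active and rng.random() < prob:
                    active.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return active

# Toy chain a -> b -> c.
graph = {"a": ["b"], "b": ["c"], "c": []}
```

With prob = 1.0 a cascade seeded at "a" reaches the whole chain, while with prob = 0.0 it never spreads; intermediate probabilities are what make influence spread stochastic and worth modeling.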

Open AccessReview Car-to-Pedestrian Communication Safety System Based on the Vehicular Ad-Hoc Network Environment: A Systematic Review
Information 2017, 8(4), 127; doi:10.3390/info8040127
Received: 3 August 2017 / Revised: 27 September 2017 / Accepted: 11 October 2017 / Published: 14 October 2017
PDF Full-text (463 KB) | HTML Full-text | XML Full-text
Abstract
With the unparalleled growth of motor vehicles, traffic accidents between pedestrians and vehicles have become one of the most serious issues worldwide, causing numerous injuries and fatalities. The connected vehicular ad hoc network, an emerging approach with the potential to reduce and even avoid accidents, has attracted the attention of many researchers, and a large number of car-to-pedestrian communication safety systems based on the vehicular ad hoc network are being researched and developed. However, to the best of our knowledge, a systematic review of car-to-pedestrian communication safety systems based on the vehicular ad hoc network has not yet been written. The purpose of this review is to systematically evaluate and assess the reliability of car-to-pedestrian communication safety systems based on the vehicular ad hoc network environment and, drawing on the previous literature, to provide some recommendations for future work. A quality evaluation was developed through established items and instruments tailored to this review. Future work should focus on developing valid and effective communication safety systems based on the vehicular ad hoc network to protect vulnerable road users. Full article
(This article belongs to the Special Issue Intelligent Transportation Systems)

Open AccessReview The Current Role of Image Compression Standards in Medical Imaging
Information 2017, 8(4), 131; doi:10.3390/info8040131
Received: 25 September 2017 / Revised: 13 October 2017 / Accepted: 15 October 2017 / Published: 19 October 2017
PDF Full-text (4339 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
With the increasing utilization of medical imaging in clinical practice and the growing dimensions of data volumes generated by various medical imaging modalities, the distribution, storage, and management of digital medical image data sets requires data compression. Over the past few decades, several image compression standards have been proposed by international standardization organizations. This paper discusses the current status of these image compression standards in medical imaging applications together with some of the legal and regulatory issues surrounding the use of compression in medical settings. Full article

Open AccessReview Feature Encodings and Poolings for Action and Event Recognition: A Comprehensive Survey
Information 2017, 8(4), 134; doi:10.3390/info8040134
Received: 23 August 2017 / Revised: 10 October 2017 / Accepted: 24 October 2017 / Published: 29 October 2017
PDF Full-text (602 KB) | HTML Full-text | XML Full-text
Abstract
Action and event recognition in multimedia collections is relevant to progress in cross-disciplinary research areas including computer vision, computational optimization, statistical learning, and nonlinear dynamics. Over the past two decades, action and event recognition has evolved from earlier intervening strategies under controlled environments to recent automatic solutions under dynamic environments, resulting in an imperative requirement to effectively organize spatiotemporal deep features. Consequently, resorting to feature encodings and poolings for action and event recognition in complex multimedia collections is an inevitable trend. The purpose of this paper is to offer a comprehensive survey on the most popular feature encoding and pooling approaches in action and event recognition in recent years by summarizing systematically both underlying theoretical principles and original experimental conclusions of those approaches based on an approach-based taxonomy, so as to provide impetus for future relevant studies. Full article
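The simplest pooling strategies covered by such surveys, average and max pooling of per-frame features into a single video-level descriptor, can be stated in a few lines (the function name and data are illustrative, not from the paper):

```python
import numpy as np

def pool_features(frame_features, mode="avg"):
    """Collapse per-frame feature vectors into one video-level descriptor:
    average pooling keeps the mean activation per dimension, max pooling
    keeps the strongest response per dimension."""
    stack = np.asarray(frame_features, dtype=float)
    return stack.mean(axis=0) if mode == "avg" else stack.max(axis=0)
```

Average pooling summarizes a whole clip evenly, while max pooling highlights the most salient frame per feature dimension; richer encodings extend this basic idea with learned codebooks and higher-order statistics.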
