
Table of Contents

Information, Volume 8, Issue 4 (December 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
Displaying articles 1-49
Open Access Article: Uncertain Production Scheduling Based on Fuzzy Theory Considering Utility and Production Rate
Information 2017, 8(4), 158; https://doi.org/10.3390/info8040158
Received: 26 October 2017 / Revised: 25 November 2017 / Accepted: 27 November 2017 / Published: 18 December 2017
PDF Full-text (1191 KB) | HTML Full-text | XML Full-text
Abstract
Handling uncertainty in an appropriate manner during the real operation of a cyber-physical system (CPS) is critical. Uncertain production scheduling, as a part of CPS uncertainty issues, deserves more attention. In this paper, a Mixed Integer Nonlinear Programming (MINLP) uncertain model for batch processes is formulated based on a unit-specific event-based continuous-time modeling method. Utility uncertainty and the uncertain relationship between production rate and utility supply are described by fuzzy theory. The uncertain scheduling model is then converted into a deterministic model. Through a numerical example, the accuracy and practicability of the proposed model are demonstrated. The fuzzy scheduling model supplies valuable decision options that help enterprise managers make more accurate and practical decisions. The impact and selection of some key parameters of the fuzzy scheduling model are also elaborated. Full article
Open Access Article: Some New Biparametric Distance Measures on Single-Valued Neutrosophic Sets with Applications to Pattern Recognition and Medical Diagnosis
Information 2017, 8(4), 162; https://doi.org/10.3390/info8040162
Received: 29 November 2017 / Revised: 10 December 2017 / Accepted: 11 December 2017 / Published: 15 December 2017
Cited by 9 | PDF Full-text (807 KB) | HTML Full-text | XML Full-text
Abstract
Single-valued neutrosophic sets (SVNSs), which handle uncertainty through truth, indeterminacy, and falsity membership degrees, are a flexible way to capture uncertainty. In this paper, some new types of distance measures for SVNSs with two parameters, overcoming the shortcomings of existing measures, are proposed along with their proofs. The various desirable relations between the proposed measures are also derived. A comparison between the proposed and the existing measures is performed in terms of counter-intuitive cases to show their validity. The proposed measures are illustrated with case studies of pattern recognition and medical diagnosis, along with the effect of the different parameters on the ordering of the objects. Full article
(This article belongs to the Special Issue Neutrosophic Information Theory and Applications)
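The abstract leaves the exact measures to the paper, but the underlying data structure is easy to picture: an SVNS assigns each element a truth, indeterminacy, and falsity degree in [0, 1]. The sketch below shows a generic Minkowski-type distance over such triples, with the order p as one tunable parameter; it is an illustrative textbook-style measure, not the biparametric measures proposed in the paper.

```python
def svns_distance(A, B, p=2, w=None):
    """Minkowski-type distance between two single-valued neutrosophic sets.

    A, B: lists of (truth, indeterminacy, falsity) triples, each in [0, 1].
    p:    Minkowski order (p=1 Hamming-like, p=2 Euclidean-like).
    w:    optional per-element weights summing to 1 (uniform by default).
    NOTE: an illustrative textbook-style measure, not the paper's proposal.
    """
    n = len(A)
    if w is None:
        w = [1.0 / n] * n
    total = 0.0
    for (ta, ia, fa), (tb, ib, fb), wi in zip(A, B, w):
        # average the p-th power gaps over the three membership degrees
        total += wi * (abs(ta - tb) ** p + abs(ia - ib) ** p + abs(fa - fb) ** p) / 3.0
    return total ** (1.0 / p)
```

By construction the result stays in [0, 1], is zero only for identical sets, and is symmetric, which are the usual axioms such measures are required to satisfy.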
Open Access Article: Can Computers Become Conscious, an Essential Condition for the Singularity?
Information 2017, 8(4), 161; https://doi.org/10.3390/info8040161
Received: 12 November 2017 / Revised: 3 December 2017 / Accepted: 6 December 2017 / Published: 9 December 2017
Cited by 1 | PDF Full-text (181 KB) | HTML Full-text | XML Full-text
Abstract
Given that consciousness is an essential ingredient for achieving the Singularity, the notion that an Artificial General Intelligence device can exceed the intelligence of a human, the question of whether a computer can achieve consciousness is explored. Given that consciousness is being aware of one's perceptions and/or of one's thoughts, it is claimed that computers cannot experience consciousness. Because a computer has no sensorium, it cannot have perceptions. As for being aware of its thoughts, it is argued that being aware of one's thoughts amounts to listening to one's own internal speech. A computer has no emotions, and hence no desire to communicate; without the ability and/or desire to communicate, it has no internal voice to listen to and thus cannot be aware of its thoughts. In fact, it has no thoughts, because it has no sense of self, and thinking is about preserving one's self. Emotions have a positive effect on the reasoning powers of humans; therefore, the computer's lack of emotions is another reason why computers could never achieve the level of intelligence of a human, at least at the current level of development of computer technology. Full article
(This article belongs to the Special Issue AI AND THE SINGULARITY: A FALLACY OR A GREAT OPPORTUNITY?)
Open Access Article: Individual Differences, Self-Efficacy, and Chinese Scientists’ Industry Engagement
Information 2017, 8(4), 160; https://doi.org/10.3390/info8040160
Received: 18 October 2017 / Revised: 1 December 2017 / Accepted: 1 December 2017 / Published: 8 December 2017
PDF Full-text (396 KB) | HTML Full-text | XML Full-text
Abstract
Research indicates that non-commercial and informal university–industry interactions, defined as academic engagement, account for a larger part of academic knowledge transfer in China and play a more important role than commercialization. This paper explores the effect of Chinese scientists’ individual differences on academic engagement via social cognitive theory, and attempts to interpret how individual differences affect Chinese academics’ industry engagement through self-efficacy. Based on data collected from Chinese universities, the analysis results show that gender, academic rank, industry connections, and previous industrial experience are significantly associated with Chinese scientists’ industry engagement. Furthermore, a scientist’s self-efficacy in industry collaborations is also influenced by these four individual factors. The mediating effect of self-efficacy on the relationship between individual differences and academic engagement is confirmed by the empirical results. Implications, limitations, and future research directions are discussed at the end of this paper. Full article
Open Access Article: sCwc/sLcc: Highly Scalable Feature Selection Algorithms
Information 2017, 8(4), 159; https://doi.org/10.3390/info8040159
Received: 31 October 2017 / Revised: 1 December 2017 / Accepted: 2 December 2017 / Published: 6 December 2017
PDF Full-text (1876 KB) | HTML Full-text | XML Full-text
Abstract
Feature selection is a useful tool for identifying which features, or attributes, of a dataset cause or explain the phenomena that the dataset describes, and for improving the efficiency and accuracy of learning algorithms that discover such phenomena. Consequently, feature selection has been studied intensively in machine learning research. However, while feature selection algorithms with excellent accuracy have been developed, they are seldom used for the analysis of high-dimensional data, because such data usually include too many instances and features, which makes traditional feature selection algorithms inefficient. To eliminate this limitation, we improved the run-time performance of two of the most accurate feature selection algorithms known in the literature. The result is two accurate and fast algorithms, sCwc and sLcc. Multiple experiments with real social media datasets have demonstrated that our algorithms improve on their original counterparts remarkably. For example, on two datasets, one with 15,568 instances and 15,741 features and another with 200,569 instances and 99,672 features, sCwc performed feature selection in 1.4 seconds and 405 seconds, respectively, and sLcc has turned out to be as fast as sCwc on average. This is a remarkable improvement, because the original algorithms are estimated to need several hours to dozens of days to process the same datasets. Finally, we describe a fast implementation of our algorithms: sCwc requires no adjusting parameter, while sLcc takes a threshold parameter that can be used to control the number of features the algorithm selects. Full article
(This article belongs to the Special Issue Feature Selection for High-Dimensional Data)
Open Access Article: Bidirectional Long Short-Term Memory Network with a Conditional Random Field Layer for Uyghur Part-Of-Speech Tagging
Information 2017, 8(4), 157; https://doi.org/10.3390/info8040157
Received: 30 October 2017 / Revised: 23 November 2017 / Accepted: 27 November 2017 / Published: 30 November 2017
PDF Full-text (603 KB) | HTML Full-text | XML Full-text
Abstract
Uyghur is an agglutinative, morphologically rich language, so natural language processing tasks in Uyghur can be challenging. Word morphology is important in Uyghur part-of-speech (POS) tagging; however, POS tagging performance suffers from error propagation from morphological analyzers. To address this problem, we propose several models for POS tagging: conditional random fields (CRF), long short-term memory networks (LSTM), bidirectional LSTM networks (BI-LSTM), LSTM networks with a CRF layer, and BI-LSTM networks with a CRF layer. These models do not depend on stemming or word disambiguation for Uyghur and combine hand-crafted features with neural network models. The proposed approach achieves state-of-the-art performance on Uyghur POS tagging test sets: 98.41% accuracy on 15 labels and 95.74% accuracy on 64 labels, which are improvements of 2.71% and 4%, respectively, over the CRF model results. Using engineered features, our model achieves further improvements of 0.2% (15 labels) and 0.48% (64 labels). The results indicate that the proposed method could be an effective approach for POS tagging in other morphologically rich languages. Full article
(This article belongs to the Section Artificial Intelligence)
Open Access Feature Paper Article: The Emperor of Strong AI Has No Clothes: Limits to Artificial Intelligence
Information 2017, 8(4), 156; https://doi.org/10.3390/info8040156
Received: 31 October 2017 / Revised: 20 November 2017 / Accepted: 22 November 2017 / Published: 27 November 2017
Cited by 4 | PDF Full-text (266 KB) | HTML Full-text | XML Full-text
Abstract
Making use of the techniques of media ecology, we argue that the premise of the technological Singularity, namely the notion that computers will one day be smarter than their human creators, is false. We also analyze the comments of other critics of the Singularity, as well as of supporters of this notion. The notion of intelligence that advocates of the technological Singularity promote does not take into account the full dimension of human intelligence; they treat artificial intelligence as a figure without a ground. Human intelligence, as we will show, is not based solely on logical operations and computation but also includes a long list of other characteristics that are unique to humans, the ground that supporters of the Singularity ignore. The list includes curiosity, imagination, intuition, emotions, passion, desires, pleasure, aesthetics, joy, purpose, objectives, goals, telos, values, morality, experience, wisdom, judgment, and even humor. Full article
(This article belongs to the Special Issue AI AND THE SINGULARITY: A FALLACY OR A GREAT OPPORTUNITY?)
Open Access Article: Face Classification Using Color Information
Information 2017, 8(4), 155; https://doi.org/10.3390/info8040155
Received: 29 September 2017 / Revised: 26 October 2017 / Accepted: 23 November 2017 / Published: 26 November 2017
PDF Full-text (3095 KB) | HTML Full-text | XML Full-text
Abstract
Color models are widely used in image recognition because they represent significant information. Texture analysis techniques, in turn, have been extensively used for facial feature extraction. In this paper, we extract discriminative features related to facial attributes by utilizing different color models and texture analysis techniques. Specifically, we propose novel methods for texture analysis to improve the classification performance for race and gender. The proposed methods are based on the Local Binary Pattern and its derivatives. These texture analysis methods are evaluated on six color models (hue, saturation, and value (HSV); L*a*b*; RGB; YCbCr; YIQ; YUV) to investigate the effect of each color model. Further, we configure two combinations of color channels to represent color information suitable for gender and race classification of face images. We perform experiments on publicly available face databases. Experimental results show that the proposed approaches are effective for the classification of gender and race. Full article
(This article belongs to the Section Information and Communications Technology)
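As a point of reference for the texture features discussed above, the basic 3×3 Local Binary Pattern operator thresholds each pixel's eight neighbors against the center value and packs the results into an eight-bit code. The sketch below implements only this classic operator; the paper's LBP derivatives and its color-channel configurations are not reproduced.

```python
def lbp_image(img):
    """Compute the basic 3x3 Local Binary Pattern code for each interior pixel.

    img: 2D list of grayscale values. Returns a 2D list of codes in 0..255
    (one per interior pixel; the one-pixel border is dropped).
    """
    # neighbor offsets, clockwise from the top-left neighbor
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = len(img), len(img[0])
    out = [[0] * (w - 2) for _ in range(h - 2)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                # set the bit when the neighbor is at least as bright as the center
                if img[y + dy][x + dx] >= c:
                    code |= 1 << bit
            out[y - 1][x - 1] = code
    return out
```

A histogram of these codes over an image region is the texture descriptor typically fed to a classifier.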
Open Access Article: Certain Concepts in Intuitionistic Neutrosophic Graph Structures
Information 2017, 8(4), 154; https://doi.org/10.3390/info8040154
Received: 2 November 2017 / Revised: 19 November 2017 / Accepted: 19 November 2017 / Published: 25 November 2017
PDF Full-text (680 KB) | HTML Full-text | XML Full-text
Abstract
A graph structure is a generalization of simple graphs. Graph structures are very useful tools for the study of different domains of computational intelligence and computer science. In this research paper, we introduce certain notions of intuitionistic neutrosophic graph structures. We illustrate these notions by several examples. We investigate some related properties of intuitionistic neutrosophic graph structures. We also present an application of intuitionistic neutrosophic graph structures. Full article
(This article belongs to the Special Issue Neutrosophic Information Theory and Applications)
Open Access Article: A Routing Protocol Based on Received Signal Strength for Underwater Wireless Sensor Networks (UWSNs)
Information 2017, 8(4), 153; https://doi.org/10.3390/info8040153
Received: 23 October 2017 / Revised: 17 November 2017 / Accepted: 22 November 2017 / Published: 24 November 2017
Cited by 1 | PDF Full-text (2663 KB) | HTML Full-text | XML Full-text
Abstract
Underwater wireless sensor networks (UWSNs) are characterized by long propagation delay, limited energy, narrow bandwidth, high bit error rate (BER), and variable topology. These features make it very difficult to design a short-delay, energy-efficient routing protocol for UWSNs. In this paper, a routing protocol independent of location information, called RRSS, is proposed based on received signal strength (RSS). In RRSS, a sensor node first establishes a vector from the node to a sink node; the length of the vector indicates the RSS of the beacon signal (RSSB) from the sink node. A node selects the next hop along the vector according to RSSB and the RSS of a hello packet (RSSH); a node nearer to the vector has higher priority as a candidate next hop. To avoid data packets being delivered to neighbor nodes in a void area, a void-avoiding algorithm is introduced, and residual energy is considered when selecting the next hop. We establish mathematical models to analyze the robustness and energy efficiency of RRSS. Lastly, we conduct extensive simulations; the results show that RRSS can save energy and decrease end-to-end delay. Full article
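The next-hop rule described in the abstract can be caricatured in a few lines: prefer neighbors that are closer to the sink (stronger beacon RSS) and have sufficient residual energy, and among those pick the one with the strongest hello packet. The field names, the qualification test, and the tie-breaking below are illustrative assumptions, not the exact RRSS formulation.

```python
def select_next_hop(my_rssb, neighbors, e_min=0.2):
    """Pick a next hop toward the sink using only signal strengths.

    neighbors: list of dicts with keys 'id', 'rssb' (beacon RSS from the
    sink, in dBm), 'rssh' (RSS of the neighbor's hello packet) and
    'energy' (residual energy fraction). A neighbor qualifies if it is
    closer to the sink than we are (larger rssb) and has energy above
    e_min; among qualifiers, the strongest hello signal wins, with ties
    broken by residual energy. Illustrative sketch, not RRSS itself.
    """
    candidates = [n for n in neighbors
                  if n['rssb'] > my_rssb and n['energy'] >= e_min]
    if not candidates:
        return None  # void area: the paper's void-avoiding algorithm applies here
    return max(candidates, key=lambda n: (n['rssh'], n['energy']))['id']
```

Returning `None` models the void-area case that the paper handles with its dedicated void-avoiding algorithm.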
Open Access Article: Ensemble of Filter-Based Rankers to Guide an Epsilon-Greedy Swarm Optimizer for High-Dimensional Feature Subset Selection
Information 2017, 8(4), 152; https://doi.org/10.3390/info8040152
Received: 28 September 2017 / Revised: 19 October 2017 / Accepted: 20 November 2017 / Published: 22 November 2017
Cited by 1 | PDF Full-text (640 KB) | HTML Full-text | XML Full-text
Abstract
The main purpose of feature subset selection is to remove irrelevant and redundant features from data, so that learning algorithms can be trained on a subset of relevant features. Many algorithms have been developed for feature subset selection, and most of them suffer from two major problems on high-dimensional datasets: first, some search a high-dimensional feature space without any domain knowledge about feature importance; second, most are originally designed for continuous optimization problems, whereas feature selection is a binary optimization problem. To overcome these weaknesses, we propose a novel hybrid filter-wrapper algorithm, called Ensemble of Filter-based Rankers to guide an Epsilon-greedy Swarm Optimizer (EFR-ESO), for high-dimensional feature subset selection. The Epsilon-greedy Swarm Optimizer (ESO) is a novel binary swarm intelligence algorithm introduced in this paper as a novel wrapper. In the proposed EFR-ESO, we extract knowledge about feature importance from the ensemble of filter-based rankers and then use this knowledge to weight the feature probabilities in the ESO. Experiments on 14 high-dimensional datasets indicate that the proposed algorithm has excellent performance in terms of both classification error rate and the number of selected features. Full article
(This article belongs to the Special Issue Feature Selection for High-Dimensional Data)
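The core idea of guiding a binary swarm with filter knowledge can be sketched as an epsilon-greedy sampler: with a small probability a feature's bit is set at random (exploration), otherwise its inclusion probability follows the ensemble's importance score (exploitation). This is a loose sketch of the idea, not the EFR-ESO update equations.

```python
import random

def sample_subset(importance, epsilon=0.1, rng=random):
    """Draw one candidate feature subset, epsilon-greedy style.

    importance: per-feature scores from an ensemble of filter rankers,
    normalized to [0, 1]. With probability epsilon a feature's bit is
    chosen uniformly at random (exploration); otherwise its inclusion
    probability equals the filter score (exploitation). Illustrative
    sketch only; the paper's swarm updates are more involved.
    """
    bits = []
    for s in importance:
        if rng.random() < epsilon:
            bits.append(rng.random() < 0.5)  # explore: ignore filter knowledge
        else:
            bits.append(rng.random() < s)    # exploit: follow filter knowledge
    return bits
```

A wrapper then evaluates each sampled subset with a classifier and keeps the best-performing ones, which is where the swarm dynamics of ESO would come in.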
Open Access Article: A New Anomaly Detection System for School Electricity Consumption Data
Information 2017, 8(4), 151; https://doi.org/10.3390/info8040151
Received: 29 September 2017 / Revised: 8 November 2017 / Accepted: 16 November 2017 / Published: 20 November 2017
PDF Full-text (5039 KB) | HTML Full-text | XML Full-text
Abstract
Anomaly detection has been widely used in a variety of research and application domains, such as network intrusion detection, insurance/credit card fraud detection, health-care informatics, industrial damage detection, image processing, and novel topic detection in text mining. In this paper, we focus on remote facilities management that identifies anomalous events in buildings by detecting anomalies in building electricity consumption data. We investigated five models on electricity consumption data from different schools. Furthermore, we proposed a hybrid model combining polynomial regression and a Gaussian distribution, which detects anomalies in the data with zero false negatives and an average precision higher than 91%. Based on the proposed model, we developed a detection and visualization system for a facilities management company to detect and visualize anomalies in school electricity consumption data. The system was tested and evaluated by facilities managers; according to the evaluation, it improves the efficiency with which facilities managers identify anomalies in the data. Full article
(This article belongs to the Special Issue Supporting Technologies and Enablers for Big Data)
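The hybrid model described above, polynomial regression plus a Gaussian residual model, can be sketched briefly: fit a trend to the consumption series, model the residuals as Gaussian, and flag readings far from the mean. The polynomial degree and the k-sigma threshold below are illustrative choices, not the paper's tuned values.

```python
import numpy as np

def detect_anomalies(t, y, degree=3, k=3.0):
    """Flag anomalous readings in a consumption series.

    Fits a polynomial trend to (t, y), models the residuals as Gaussian,
    and flags points whose residual lies more than k standard deviations
    from the residual mean. A minimal sketch of the polynomial-regression
    + Gaussian idea; degree and k are illustrative assumptions.
    """
    t, y = np.asarray(t, float), np.asarray(y, float)
    coeffs = np.polyfit(t, y, degree)          # polynomial trend
    resid = y - np.polyval(coeffs, t)          # deviation from the trend
    mu, sigma = resid.mean(), resid.std()      # Gaussian residual model
    return np.abs(resid - mu) > k * sigma      # boolean anomaly mask
```

In practice one would fit the residual model on known-normal data only; fitting on data that contains the anomalies, as above, works when anomalies are rare and large.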
Open Access Article: Investigating the Statistical Distribution of Learning Coverage in MOOCs
Information 2017, 8(4), 150; https://doi.org/10.3390/info8040150
Received: 30 September 2017 / Revised: 17 November 2017 / Accepted: 17 November 2017 / Published: 20 November 2017
PDF Full-text (531 KB) | HTML Full-text | XML Full-text
Abstract
Learners participating in Massive Open Online Courses (MOOCs) have a wide range of backgrounds and motivations. Many MOOC learners enroll in the courses to take a brief look; only a few go through the entire content, and even fewer eventually obtain a certificate. We discovered this phenomenon after examining 92 courses on the xuetangX and edX platforms. More specifically, we found that the learning coverage in many courses, one of the metrics used to estimate learners' active engagement with the online courses, follows a Zipf distribution. We apply the maximum likelihood estimation method to fit Zipf's law and test our hypothesis using a chi-square test. In the xuetangX dataset, the learning coverage in 53 of 76 courses fits Zipf's law, whereas in all 16 courses on the edX platform the Zipf hypothesis is rejected. The results are expected to bring insight into the unique learning behavior on MOOCs. Full article
(This article belongs to the Special Issue Supporting Technologies and Enablers for Big Data)
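The fitting procedure named in the abstract, maximum likelihood estimation of a Zipf exponent followed by a chi-square goodness-of-fit statistic, can be sketched as follows. The grid-search MLE and the rank range are implementation conveniences assumed here, not details taken from the paper.

```python
from math import log

def fit_zipf(counts):
    """Fit a Zipf exponent to rank-ordered counts by maximum likelihood.

    counts[r-1] is how many observations fall at rank r. The exponent s
    maximizing the discrete-Zipf log-likelihood over ranks 1..N is found
    by a simple grid search; a chi-square statistic against the fitted
    law is returned for a goodness-of-fit test (to be compared with the
    chi-square critical value for N-2 degrees of freedom). Sketch only.
    """
    N, total = len(counts), sum(counts)

    def loglik(s):
        z = sum(r ** -s for r in range(1, N + 1))  # Zipf normalizer
        return sum(c * (-s * log(r) - log(z))
                   for r, c in enumerate(counts, start=1))

    # grid search over s in (0, 4] with step 0.01
    s_hat = max((0.01 * k for k in range(1, 401)), key=loglik)
    z = sum(r ** -s_hat for r in range(1, N + 1))
    expected = [total * r ** -s_hat / z for r in range(1, N + 1)]
    chi2 = sum((c - e) ** 2 / e for c, e in zip(counts, expected))
    return s_hat, chi2
```

A small chi-square statistic (relative to the critical value for the chosen significance level) means the Zipf hypothesis is not rejected, which is the per-course decision the study reports.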
Open Access Article: NC-TODIM-Based MAGDM under a Neutrosophic Cubic Set Environment
Information 2017, 8(4), 149; https://doi.org/10.3390/info8040149
Received: 19 October 2017 / Revised: 11 November 2017 / Accepted: 14 November 2017 / Published: 18 November 2017
Cited by 3 | PDF Full-text (1647 KB) | HTML Full-text | XML Full-text
Abstract
A neutrosophic cubic set is the hybridization of the concepts of a neutrosophic set and an interval neutrosophic set; it can express the hybrid information of both the interval neutrosophic set and the single-valued neutrosophic set simultaneously. Because neutrosophic cubic sets are newly defined, little research on their operations and applications has been reported in the literature. In the present paper, we propose score and accuracy functions for neutrosophic cubic sets and prove their basic properties. We also develop a strategy for ranking neutrosophic cubic numbers based on the score and accuracy functions. We first develop TODIM (Tomada de Decisão Interativa e Multicritério) in the neutrosophic cubic (NC) set environment, which we call NC-TODIM, and establish a new NC-TODIM strategy for solving multi-attribute group decision making (MAGDM) problems in a neutrosophic cubic set environment. We illustrate the proposed NC-TODIM strategy on a MAGDM problem to show its applicability and effectiveness. We also conduct a sensitivity analysis to show the impact on the ranking order of the alternatives of different values of the attenuation factor of losses. Full article
(This article belongs to the Special Issue Neutrosophic Information Theory and Applications)
Open Access Article: Source Code Documentation Generation Using Program Execution
Information 2017, 8(4), 148; https://doi.org/10.3390/info8040148
Received: 30 September 2017 / Revised: 13 November 2017 / Accepted: 14 November 2017 / Published: 17 November 2017
PDF Full-text (291 KB) | HTML Full-text | XML Full-text
Abstract
Automated source code documentation approaches often describe methods in abstract terms, using the words contained in the static source code or code excerpts from repositories. In this paper, we describe DynamiDoc: a simple automated documentation generator based on dynamic analysis. Our representation-based approach traces the program being executed and records string representations of concrete argument values, the return value, and the target object's state before and after each method execution. Then, for each method, it generates documentation sentences with examples, such as “When called on [3, 1.2] with element = 3, the object changed to [1.2]”. Advantages and shortcomings of the approach are listed. We also found that the generated sentences are substantially shorter than the methods they describe. According to our small-scale study, the majority of objects in the generated documentation have their string representations overridden, which further confirms the potential usefulness of our approach. Finally, we propose an alternative, variable-based approach that describes the values of individual member variables rather than the state of the object as a whole. Full article
(This article belongs to the Special Issue Special Issues on Languages Processing)
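The kind of sentence DynamiDoc generates can be reproduced with a small tracing decorator: capture the object's repr() before and after the call, together with the arguments and return value, and fill a sentence template from them. This is a minimal re-creation of the idea for illustration, not the authors' tool.

```python
import functools

def dynamidoc(func):
    """Record DynamiDoc-style example sentences for each call of a method.

    Wraps a method, capturing repr() of the target object before and
    after the call, the arguments, and the return value, then appends a
    generated sentence to func.docs. Sketch of the paper's idea only.
    """
    func.docs = []

    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        before = repr(self)
        result = func(self, *args, **kwargs)
        after = repr(self)
        arg_txt = ", ".join([repr(a) for a in args] +
                            [f"{k} = {v!r}" for k, v in kwargs.items()])
        if before != after:  # mutation: describe the state change
            func.docs.append(f"When called on {before} with {arg_txt}, "
                             f"the object changed to {after}.")
        else:                # no mutation: describe the return value
            func.docs.append(f"When called on {before} with {arg_txt}, "
                             f"it returned {result!r}.")
        return result

    wrapper.docs = func.docs
    return wrapper
```

Applied to a hypothetical `Bag.take` method, a call like `Bag([3, 1.2]).take(element=3)` yields exactly the example sentence quoted in the abstract.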
Open Access Article: Land Cover Classification from Multispectral Data Using Computational Intelligence Tools: A Comparative Study
Information 2017, 8(4), 147; https://doi.org/10.3390/info8040147
Received: 8 October 2017 / Revised: 7 November 2017 / Accepted: 13 November 2017 / Published: 15 November 2017
Cited by 2 | PDF Full-text (7230 KB) | HTML Full-text | XML Full-text
Abstract
This article discusses how computational intelligence techniques are applied to fuse spectral images into a higher-level image of land cover distribution for remote sensing, specifically for satellite image classification. We compare a fuzzy-inference method with two other computational intelligence methods, decision trees and neural networks, using a case study of land cover classification from satellite images. Further, an unsupervised approach based on k-means clustering has also been taken into consideration for comparison. The fuzzy-inference method includes training the classifier with a fuzzy-fusion technique and then performing land cover classification using reinforcement aggregation operators. To assess the robustness of the four methods, a comparative study covering three years of land cover maps for the district of Mandimba, Niassa province, Mozambique, was undertaken. Our results show that the fuzzy-fusion method performs similarly to decision trees, achieving reliable classifications; neural networks suffer from overfitting; and k-means clustering constitutes a promising technique for identifying land cover types in unknown areas. Full article
(This article belongs to the Special Issue Fuzzy Logic for Image Processing)
Open Access Editorial: Editorial of the Special Issue “Intelligent Transportation Systems”
Information 2017, 8(4), 146; https://doi.org/10.3390/info8040146
Received: 8 November 2017 / Revised: 8 November 2017 / Accepted: 8 November 2017 / Published: 12 November 2017
PDF Full-text (151 KB) | HTML Full-text | XML Full-text
Abstract
Transportation systems are very important in modern life; therefore, massive research efforts have been devoted to this field of study in the recent past. Effective vehicular connectivity techniques can significantly enhance efficiency of travel, reduce traffic incidents and improve safety, and alleviate the impact of congestion, constituting the so-called Intelligent Transportation Systems (ITS) experience.[...] Full article
(This article belongs to the Special Issue Intelligent Transportation Systems)
Open Access Article: End-to-End Delay Model for Train Messaging over Public Land Mobile Networks
Information 2017, 8(4), 145; https://doi.org/10.3390/info8040145
Received: 16 October 2017 / Revised: 7 November 2017 / Accepted: 8 November 2017 / Published: 11 November 2017
PDF Full-text (4005 KB) | HTML Full-text | XML Full-text
Abstract
Modern train control systems rely on a dedicated radio network for train-to-ground communications. A number of possible alternatives have been analysed for adopting the European Rail Traffic Management System/European Train Control System (ERTMS/ETCS) on local/regional lines to improve transport capacity. Among them, a communication system based on public networks (cellular and satellite) provides an interesting and effective alternative to proprietary and expensive radio networks. To analyse the performance of this solution, it is necessary to model the end-to-end delay and message loss to fully characterize the message transfer process from train to ground and vice versa. Starting from the results of a railway test campaign over a 300 km railway line, covering a cumulative 12,000 traveled km in 21 days, in this paper we derive a statistical model for the end-to-end delay required for delivering messages. In particular, we propose a two-state model that reproduces the main behavioral characteristics of the end-to-end delay as observed experimentally. The model formulation was derived after an in-depth analysis of the recorded experimental data. When applied to a realistic scenario, the model explicitly accounts for the radio coverage characteristics, the received power level, the handover points along the line, and the serving radio technology. As an example, the proposed model is used to generate the end-to-end delay profile in a realistic scenario. Full article
Open AccessArticle VIKOR Method for Interval Neutrosophic Multiple Attribute Group Decision-Making
Information 2017, 8(4), 144; https://doi.org/10.3390/info8040144
Received: 21 October 2017 / Revised: 7 November 2017 / Accepted: 8 November 2017 / Published: 10 November 2017
Cited by 3 | PDF Full-text (267 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we extend the VIKOR (VIsekriterijumska optimizacija i KOmpromisno Resenje) method to multiple attribute group decision-making (MAGDM) with interval neutrosophic numbers (INNs). Firstly, the basic concepts of INNs are briefly presented. The method first aggregates all individual decision-makers’ assessment information using an interval neutrosophic weighted averaging (INWA) operator, and then employs the extended classical VIKOR method to solve MAGDM problems with INNs. The validity and stability of this method are verified by example analysis and sensitivity analysis, and its superiority is illustrated by a comparison with existing methods. Full article
(This article belongs to the Special Issue Neutrosophic Information Theory and Applications)
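For readers unfamiliar with the classical VIKOR backbone that the paper extends to interval neutrosophic numbers, a minimal crisp-valued sketch follows. The decision matrix, weights, and the assumption that all criteria are benefit criteria are illustrative choices, not taken from the paper.

```python
def vikor(matrix, weights, v=0.5):
    """Classical crisp VIKOR: rows are alternatives, columns are benefit
    criteria, weights sum to 1, and v trades off group utility against
    individual regret. Returns the Q index per alternative; lower is better."""
    n = len(matrix[0])
    f_best = [max(row[j] for row in matrix) for j in range(n)]
    f_worst = [min(row[j] for row in matrix) for j in range(n)]
    S, R = [], []
    for row in matrix:
        terms = [weights[j] * (f_best[j] - row[j]) / (f_best[j] - f_worst[j])
                 for j in range(n)]
        S.append(sum(terms))  # group utility (weighted sum of regrets)
        R.append(max(terms))  # individual regret (worst single criterion)
    s_best, s_worst = min(S), max(S)
    r_best, r_worst = min(R), max(R)
    return [v * (S[i] - s_best) / (s_worst - s_best)
            + (1 - v) * (R[i] - r_best) / (r_worst - r_best)
            for i in range(len(matrix))]

# illustrative 3-alternative, 2-criterion example (not from the paper)
Q = vikor([[7, 9], [8, 6], [5, 5]], [0.6, 0.4])  # alternative 0 ranks best
```

The interval neutrosophic extension replaces the crisp entries with INNs and the arithmetic with INN operators such as INWA, but the S/R/Q ranking structure is the same.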
Open AccessArticle The Impact of Message Replication on the Performance of Opportunistic Networks for Sensed Data Collection
Information 2017, 8(4), 143; https://doi.org/10.3390/info8040143
Received: 3 October 2017 / Revised: 6 November 2017 / Accepted: 6 November 2017 / Published: 9 November 2017
PDF Full-text (6481 KB) | HTML Full-text | XML Full-text
Abstract
Opportunistic networks (OppNets) provide a scalable solution for collecting delay-tolerant data from sensors and delivering it to their respective gateways. Portable handheld user devices contribute significantly to the scalability of OppNets, since their number increases with the user population and they closely follow human movement patterns. Hence, OppNets for sensed data collection are characterised by high node population and degrees of spatial locality inherent to user movement. We study the impact of these characteristics on the performance of existing OppNet message replication techniques. Our findings reveal that the existing replication techniques are not specifically designed to cope with these characteristics. This raises concerns regarding excessive message transmission overhead and throughput degradation due to resource constraints and technological limitations associated with portable handheld user devices. Based on concepts derived from the study, we suggest design guidelines to augment existing message replication techniques. Following these guidelines, we propose a message replication technique, namely Locality Aware Replication (LARep). Simulation results show that LARep achieves better network performance under high node population and degrees of spatial locality as compared with existing techniques. Full article
Open AccessArticle Arabic Handwritten Digit Recognition Based on Restricted Boltzmann Machine and Convolutional Neural Networks
Information 2017, 8(4), 142; https://doi.org/10.3390/info8040142
Received: 14 August 2017 / Revised: 6 November 2017 / Accepted: 8 November 2017 / Published: 9 November 2017
Cited by 1 | PDF Full-text (2062 KB) | HTML Full-text | XML Full-text
Abstract
Handwritten digit recognition is an open problem in computer vision and pattern recognition, and solving it has elicited increasing interest. The main challenge is the design of an efficient method that can recognize handwritten digits submitted by users via digital devices. Numerous studies have been proposed, both in the past and in recent years, to improve handwritten digit recognition in various languages; research on handwritten digit recognition in Arabic, however, is limited. At present, deep learning algorithms are extremely popular in computer vision and are used to address important problems such as image classification, natural language processing, and speech recognition, providing computers with sensory capabilities that approach those of humans. In this study, we propose a new approach for Arabic handwritten digit recognition using the restricted Boltzmann machine (RBM) and convolutional neural network (CNN) deep learning algorithms. The approach works in two phases. First, in the feature extraction phase, we use the RBM, a deep learning technique that can extract highly useful features from raw data and that has been utilized as a feature extractor in several classification problems. Then, the extracted features are fed to an efficient CNN with a deep supervised learning architecture for the training and testing process. In the experiments, we used the CMATERDB 3.3.1 Arabic handwritten digit dataset for training and testing the proposed method. Experimental results show that the proposed method significantly improves the accuracy rate, reaching 98.59%. Finally, a comparison of our results with those of other studies on the CMATERDB 3.3.1 dataset shows that our approach achieves the highest accuracy rate. Full article
Open AccessArticle Rate Optimization of Two-Way Relaying with Wireless Information and Power Transfer
Information 2017, 8(4), 141; https://doi.org/10.3390/info8040141
Received: 19 September 2017 / Revised: 10 October 2017 / Accepted: 1 November 2017 / Published: 8 November 2017
PDF Full-text (998 KB) | HTML Full-text | XML Full-text
Abstract
We consider simultaneous wireless information and power transfer in two-phase decode-and-forward two-way relaying networks, where a relay harvests energy from the signal to be relayed through either power splitting or time splitting. We formulate resource allocation problems that optimize the time-phase and signal splitting ratios to maximize the sum rate of the two communicating devices. The joint optimization problems are shown to be convex for both the power splitting and time splitting approaches, after some transformations where required, so that they can be solved with an existing solver. To lower the computational complexity, we also present suboptimal methods that optimize the splitting ratio for a fixed time-phase, and derive a closed-form solution for the suboptimal method based on power splitting. The results demonstrate that the power splitting approaches outperform their time splitting counterparts, and that the suboptimal power splitting approach provides performance close to the optimal one while reducing the complexity significantly. Full article
(This article belongs to the Special Issue Wireless Energy Harvesting for Future Wireless Communications)
Open AccessArticle An Opportunistic Routing for Data Forwarding Based on Vehicle Mobility Association in Vehicular Ad Hoc Networks
Information 2017, 8(4), 140; https://doi.org/10.3390/info8040140
Received: 22 September 2017 / Revised: 19 October 2017 / Accepted: 23 October 2017 / Published: 7 November 2017
Cited by 3 | PDF Full-text (928 KB) | HTML Full-text | XML Full-text
Abstract
Vehicular ad hoc networks (VANETs) have emerged as a powerful new technology for data transmission between vehicles. Efficient data transmission with low data delay plays an important role in selecting the ideal data forwarding path in VANETs. This paper proposes a new opportunistic routing protocol for data forwarding based on vehicle mobility association (OVMA). With assistance from the vehicle mobility association, data can be forwarded without passing through many extra intermediate nodes. Moreover, each vehicle carries only the replica information that records its associated vehicle information, so the routing decision can adapt to the vehicle densities. Simulation results show that the OVMA protocol can extend the network lifetime, improve the data delivery ratio, and reduce the data delay and routing overhead compared to other well-known routing protocols. Full article
(This article belongs to the Section Information Applications)
Open AccessArticle Structural and Symbolic Information in the Context of the General Theory of Information
Information 2017, 8(4), 139; https://doi.org/10.3390/info8040139
Received: 26 September 2017 / Revised: 26 October 2017 / Accepted: 1 November 2017 / Published: 6 November 2017
Cited by 1 | PDF Full-text (234 KB) | HTML Full-text | XML Full-text
Abstract
The general theory of information, which includes syntactic, semantic, pragmatic, and many other special theories of information, provides theoretical and practical tools for discerning a very large diversity of different kinds, types, and classes of information. Some of these kinds, types, and classes are more important and some are less important. Two basic classes are formed by structural and symbolic information. While structural information is intrinsically embedded in the structure of the corresponding object or domain, symbolic information is represented by symbols, the meaning of which is subject to arbitrary conventions between people. As a result, symbolic information exists only in the context of life, including technical and theoretical constructs created by humans. Structural information is related to any objects, systems, and processes regardless of the existence or presence of life. In this paper, properties of structural and symbolic information are explored in the formal framework of the general theory of information developed by Burgin, because this theory offers more powerful instruments for this inquiry. Structural information is further differentiated into inherent, descriptive, and constructive types. Properties of correctness and uniqueness of these types are investigated. In addition, the predictive power of symbolic information accumulated in the course of natural evolution is considered. The phenomenon of ritualization is described as a general transition process from structural to symbolic information. Full article
Open AccessFeature PaperArticle MR Brain Image Segmentation: A Framework to Compare Different Clustering Techniques
Information 2017, 8(4), 138; https://doi.org/10.3390/info8040138
Received: 8 October 2017 / Revised: 1 November 2017 / Accepted: 1 November 2017 / Published: 4 November 2017
Cited by 2 | PDF Full-text (1124 KB) | HTML Full-text | XML Full-text
Abstract
In Magnetic Resonance (MR) brain image analysis, segmentation is commonly used for detecting, measuring and analyzing the main anatomical structures of the brain and eventually identifying pathological regions. Brain image segmentation is of fundamental importance since it helps clinicians and researchers to concentrate on specific regions of the brain in order to analyze them. However, segmentation of brain images is a difficult task due to the high similarity and correlation of intensities among different regions of the brain image. Among the various methods proposed in the literature, clustering algorithms prove to be successful tools for image segmentation. In this paper, we present a framework for image segmentation devoted to supporting the expert in identifying different brain regions for further analysis. The framework includes different clustering methods to perform segmentation of MR images. Furthermore, it enables easy comparison of different segmentation results by providing a quantitative evaluation using an entropy-based measure as well as other measures commonly used to evaluate segmentation results. To show the potential of the framework, the implemented clustering methods are compared on simulated T1-weighted MR brain images from the Internet Brain Segmentation Repository (IBSR database), which is provided with ground truth segmentation. Full article
(This article belongs to the Special Issue Fuzzy Logic for Image Processing)
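The framework above scores segmentations with an entropy-based measure alongside other measures commonly used for this purpose. As one example of the latter category, the Dice overlap between a predicted binary mask and a ground-truth mask (such as an IBSR reference) can be computed as follows; the flat 0/1 mask encoding is an illustrative assumption, and this is not the paper's own entropy-based measure.

```python
def dice_coefficient(pred, truth):
    """Dice overlap between two flat binary masks (1 = structure,
    0 = background); 1.0 means perfect agreement. Assumes at least
    one positive voxel across the two masks."""
    inter = sum(1 for p, t in zip(pred, truth) if p == t == 1)
    return 2.0 * inter / (sum(pred) + sum(truth))

# toy 2x2 masks flattened to lists; half of the labelled voxels agree
score = dice_coefficient([1, 1, 0, 0], [1, 0, 0, 1])  # -> 0.5
```

For multi-label brain segmentations, the same score is typically computed per anatomical label and then averaged.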
Open AccessFeature PaperArticle A Distributed Ledger for Supply Chain Physical Distribution Visibility
Information 2017, 8(4), 137; https://doi.org/10.3390/info8040137
Received: 9 September 2017 / Revised: 21 October 2017 / Accepted: 30 October 2017 / Published: 2 November 2017
PDF Full-text (956 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Supply chains (SC) span many geographies, modes and industries and involve several phases where data flows in both directions from suppliers, manufacturers, distributors, retailers, to customers. This data flow is necessary to support critical business decisions that may impact product cost and market share. Current SC information systems are unable to provide validated, pseudo real-time shipment tracking during the distribution phase. This information is available from a single source, often the carrier, and is shared with other stakeholders on an as-needed basis. This paper introduces an independent, crowd-validated, online shipment tracking framework that complements current enterprise-based SC management solutions. The proposed framework consists of a set of private distributed ledgers and a single blockchain public ledger. Each private ledger allows the private sharing of custody events among the trading partners in a given shipment. Privacy is necessary, for example, when trading high-end products or chemical and pharmaceutical products. The second type of ledger is a blockchain public ledger. It consists of the hash code of each private event in addition to monitoring events. The latter provide an independently validated immutable record of the pseudo real-time geolocation status of the shipment from a large number of sources using commuter-sourcing. Full article
Open AccessArticle Enhancement of Low Contrast Images Based on Effective Space Combined with Pixel Learning
Information 2017, 8(4), 135; https://doi.org/10.3390/info8040135
Received: 19 September 2017 / Revised: 27 October 2017 / Accepted: 27 October 2017 / Published: 1 November 2017
PDF Full-text (12469 KB) | HTML Full-text | XML Full-text
Abstract
Images captured in bad conditions often suffer from low contrast. In this paper, we propose a simple but efficient linear restoration model to enhance low contrast images. The model’s design is based on the effective space of the 3D surface graph of the image. Effective space is defined as the minimum space containing the 3D surface graph of the image, and the proportion of the pixel value within the effective space is considered to reflect the details of images. The bright channel prior and the dark channel prior are used to estimate the effective space; however, they may cause block artifacts. We designed pixel learning to solve this problem. Pixel learning takes the input image as the training example and the low frequency component of the input as the label, learning pixel by pixel based on a look-up table model. The proposed method is very fast and can restore a high-quality image with fine details. Experimental results on a variety of images captured in bad conditions, such as nonuniform light, night, hazy and underwater scenes, demonstrate the effectiveness and efficiency of the proposed method. Full article
Open AccessArticle Fuzzy Extractor and Elliptic Curve Based Efficient User Authentication Protocol for Wireless Sensor Networks and Internet of Things
Information 2017, 8(4), 136; https://doi.org/10.3390/info8040136
Received: 21 September 2017 / Revised: 17 October 2017 / Accepted: 24 October 2017 / Published: 30 October 2017
Cited by 1 | PDF Full-text (501 KB) | HTML Full-text | XML Full-text
Abstract
To improve the quality of service and reduce the possibility of security attacks, a secure and efficient user authentication mechanism is required for Wireless Sensor Networks (WSNs) and the Internet of Things (IoT). Session key establishment between the sensor node and the user is also required for secure communication. In this paper, we perform a security analysis of A. K. Das’s user authentication scheme (2015), Choi et al.’s scheme (2016), and Park et al.’s scheme (2016). The analysis shows that their schemes are vulnerable to various attacks such as user impersonation, sensor node impersonation, and attacks based on legitimate users. Based on the cryptanalysis of these existing protocols, we propose a secure and efficient authenticated session key establishment protocol which ensures various security features and overcomes the drawbacks of the existing protocols. The formal and informal security analysis indicates that the proposed protocol withstands the various security vulnerabilities involved in WSNs. Automated validation using the AVISPA and Scyther tools confirms the absence of security attacks in our scheme, and logical verification using Burrows-Abadi-Needham (BAN) logic confirms the correctness of the proposed protocol. Finally, a comparative analysis of computational overhead and security features against other existing protocols indicates that the proposed user authentication system is secure and efficient. In the future, we intend to implement the proposed protocol in real-world applications of WSNs and the IoT. Full article
(This article belongs to the Section Information and Communications Technology)
Open AccessReview Feature Encodings and Poolings for Action and Event Recognition: A Comprehensive Survey
Information 2017, 8(4), 134; https://doi.org/10.3390/info8040134
Received: 23 August 2017 / Revised: 10 October 2017 / Accepted: 24 October 2017 / Published: 29 October 2017
PDF Full-text (602 KB) | HTML Full-text | XML Full-text
Abstract
Action and event recognition in multimedia collections is relevant to progress in cross-disciplinary research areas including computer vision, computational optimization, statistical learning, and nonlinear dynamics. Over the past two decades, action and event recognition has evolved from earlier intervening strategies under controlled environments to recent automatic solutions under dynamic environments, resulting in an imperative requirement to effectively organize spatiotemporal deep features. Consequently, resorting to feature encodings and poolings for action and event recognition in complex multimedia collections is an inevitable trend. The purpose of this paper is to offer a comprehensive survey on the most popular feature encoding and pooling approaches in action and event recognition in recent years by summarizing systematically both underlying theoretical principles and original experimental conclusions of those approaches based on an approach-based taxonomy, so as to provide impetus for future relevant studies. Full article
Open AccessArticle Bi-Objective Economic Dispatch of Micro Energy Internet Incorporating Energy Router
Information 2017, 8(4), 133; https://doi.org/10.3390/info8040133
Received: 10 September 2017 / Revised: 9 October 2017 / Accepted: 10 October 2017 / Published: 26 October 2017
PDF Full-text (1119 KB) | HTML Full-text | XML Full-text
Abstract
Integration of different energy networks adds flexibility to system operation. The key component in such a coupled infrastructure is the energy router, which plays an important role in energy transition and storage, smoothing the prediction errors in both renewables and load. The router has multi-carrier energy generation capability, and builds physical linkages among the power network, heat network, and other networks in the micro energy internet. The economic dispatch problem of the micro energy internet is formulated as a bi-objective optimization problem. The golden section search method is adopted to locate a compromise solution in the sense of Nash Bargaining. Case studies on a typical test system verify the effectiveness of the proposed bi-objective dispatch model and solution method. Full article
(This article belongs to the Section Information Applications)
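The abstract names golden section search as the tool for locating the compromise dispatch point. As a sketch of how that one-dimensional routine works, the following maximizes a unimodal function on an interval; the toy objective in the usage line is a placeholder, not the paper's dispatch model.

```python
import math

def golden_section_max(f, lo, hi, tol=1e-6):
    """Golden section search for the maximum of a unimodal f on [lo, hi].
    Each iteration shrinks the bracket by the inverse golden ratio."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    a, b = lo, hi
    c = b - invphi * (b - a)  # interior points with c < d
    d = a + invphi * (b - a)
    while b - a > tol:
        if f(c) > f(d):
            # maximum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            # maximum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

# toy unimodal objective (placeholder for a bargaining-style trade-off curve)
x_star = golden_section_max(lambda x: -(x - 2.0) ** 2, 0.0, 5.0)  # converges near x = 2
```

In the bi-objective setting, the scalar being searched over would parametrize the trade-off between the two dispatch objectives, with the bargaining product as the unimodal function.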