
Table of Contents

Information, Volume 9, Issue 1 (January 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
Displaying articles 1-24

Editorial

Jump to: Research, Review

Open Access Editorial Special Issue on Fuzzy Logic for Image Processing
Information 2018, 9(1), 3; doi:10.3390/info9010003
Received: 24 December 2017 / Revised: 24 December 2017 / Accepted: 24 December 2017 / Published: 27 December 2017
PDF Full-text (152 KB) | HTML Full-text | XML Full-text
Abstract
The increasing availability of huge image collections in different application fields, such as medical diagnosis, remote sensing, transmission and encoding, machine/robot vision, video processing, and microscopic imaging, has pressed the need, in the last few years, for the development of efficient techniques capable of managing and processing large collections of image data [...] Full article
(This article belongs to the Special Issue Fuzzy Logic for Image Processing)
Open Access Editorial Acknowledgement to Reviewers of Information in 2017
Information 2018, 9(1), 14; doi:10.3390/info9010014
Received: 9 January 2018 / Revised: 9 January 2018 / Accepted: 9 January 2018 / Published: 9 January 2018
PDF Full-text (432 KB) | HTML Full-text | XML Full-text
Abstract
Peer review is an essential part of the publication process, ensuring that Information maintains high quality standards for its published papers. In 2017, a total of 153 papers were published in the journal. [...] Full article

Research

Jump to: Editorial, Review

Open Access Article EmoSpell, a Morphological and Emotional Word Analyzer
Information 2018, 9(1), 1; doi:10.3390/info9010001
Received: 29 September 2017 / Revised: 26 November 2017 / Accepted: 7 December 2017 / Published: 3 January 2018
PDF Full-text (1149 KB) | HTML Full-text | XML Full-text
Abstract
The analysis of sentiments, emotions, and opinions in texts is increasingly important in the current digital world. The existing lexicons with emotional annotations for the Portuguese language are oriented to polarities, classifying words as positive, negative, or neutral. To identify the emotional load intended by the author, it is necessary to also categorize the emotions expressed by individual words. EmoSpell is an extension of a morphological analyzer with semantic annotations of the emotional value of words. It uses Jspell as the morphological analyzer and a new dictionary with emotional annotations. This dictionary incorporates the lexical base EMOTAIX.PT, which classifies words based on three different levels of emotions—global, specific, and intermediate. This paper describes the generation of the EmoSpell dictionary using three sources: the Jspell Portuguese dictionary and the lexical bases EMOTAIX.PT and SentiLex-PT. Additionally, this paper details the Web application and Web service that exploit this dictionary. It also presents a validation of the proposed approach using a corpus of student texts with different emotional loads. The validation compares the analyses provided by EmoSpell with the mentioned emotional lexical bases on the ability to recognize emotional words and extract the dominant emotion from a text. Full article
(This article belongs to the Special Issue Special Issues on Languages Processing)
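A minimal sketch of the kind of dictionary-driven emotion tagging the abstract describes. The lexicon entries, labels, and the dominant_emotion helper are hypothetical stand-ins; EmoSpell's actual dictionary is built on Jspell, EMOTAIX.PT and SentiLex-PT and handles full morphological analysis.

```python
from collections import Counter

# Hypothetical miniature emotion lexicon (Portuguese surface forms -> global emotion).
EMOTION_LEXICON = {
    "alegria": "joy",
    "feliz": "joy",
    "medo": "fear",
    "tristeza": "sadness",
}

def dominant_emotion(tokens):
    """Tally the emotions of annotated words and return the most frequent one."""
    hits = Counter(EMOTION_LEXICON[t] for t in tokens if t in EMOTION_LEXICON)
    return hits.most_common(1)[0][0] if hits else "neutral"

print(dominant_emotion("estou muito feliz e cheio de alegria".split()))  # -> joy
```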

Open Access Article Entity Attribute Value Style Modeling Approach for Archetype Based Data
Information 2018, 9(1), 2; doi:10.3390/info9010002
Received: 31 October 2017 / Revised: 14 December 2017 / Accepted: 20 December 2017 / Published: 29 December 2017
PDF Full-text (4049 KB) | HTML Full-text | XML Full-text
Abstract
The Entity Attribute Value (EAV) storage model is extensively used to manage healthcare data in existing systems; however, it lacks search efficiency. This study examines an entity attribute value style modeling approach for a standardized Electronic Health Records (EHRs) database. It retains the qualities of EAV (i.e., handling sparseness and frequent schema evolution) and provides better query performance than EAV. It is termed the Two-Dimensional Entity Attribute Value (2D EAV) model. Support for ad-hoc queries is provided through a user interface for better user interaction. 2D EAV focuses on how to handle template-centric queries as well as other health query scenarios. 2D EAV is analyzed (in terms of minimum non-null density) to make a judgment about the adoption of 2D EAV over the n-ary storage model of an RDBMS. The primary aim of the current research is to handle sparseness, frequent schema evolution, and efficient query support altogether for standardized EHRs. 2D EAV will help data administrators handle standardized heterogeneous data that demands high search efficiency. It will also benefit both skilled and semi-skilled database users (such as doctors, nurses, and patients) by providing a global semantic interoperable mechanism of data retrieval. Full article
(This article belongs to the Special Issue Information Architecture)
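For readers unfamiliar with the baseline the paper improves on, here is a sketch of a classic EAV table in SQLite; the schema and sample attributes are illustrative assumptions, and the paper's 2D EAV restructuring is not reproduced here.

```python
import sqlite3

# Classic EAV layout: one row per (entity, attribute, value) triple, which
# tolerates sparseness and schema evolution but makes querying slower.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE eav (entity_id INTEGER, attribute TEXT, value TEXT)")
conn.executemany(
    "INSERT INTO eav VALUES (?, ?, ?)",
    [(1, "systolic_bp", "120"), (1, "heart_rate", "72"), (2, "heart_rate", "80")],
)

# Reconstructing one record requires pivoting the triples back into columns.
rows = conn.execute(
    "SELECT attribute, value FROM eav WHERE entity_id = ?", (1,)
).fetchall()
print(dict(rows))  # {'systolic_bp': '120', 'heart_rate': '72'}
```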

Open Access Article An RTT-Aware Virtual Machine Placement Method
Information 2018, 9(1), 4; doi:10.3390/info9010004
Received: 29 November 2017 / Revised: 26 December 2017 / Accepted: 26 December 2017 / Published: 29 December 2017
PDF Full-text (3031 KB) | HTML Full-text | XML Full-text
Abstract
Virtualization is a key technology for mobile cloud computing (MCC), and the virtual machine (VM) is a core component of virtualization. A VM provides a relatively independent running environment for different applications. Therefore, the VM placement problem focuses on how to place VMs on optimal physical machines, which ensures efficient use of resources, quality of service, etc. Most previous work focuses on energy consumption, network traffic between VMs, and so on, and rarely considers the delay of end users' requests. In contrast, the latency between requests and VMs is considered in this paper for the scenario of optimal VM placement in MCC. The round-trip time (RTT) is first used as the metric for the latency of requests, with the goal of minimizing the average RTT over all requests. Based on our proposed RTT metric, an RTT-aware VM placement algorithm is then proposed to minimize the average RTT. Furthermore, the case in which one of the core switches does not work is considered. A VM rescheduling algorithm is proposed to keep the average RTT lower and reduce the fluctuation of the average RTT. Finally, in the simulation study, our algorithm shows its advantage over existing methods, including random placement, the traffic-aware VM placement algorithm and the remaining utilization-aware algorithm. Full article
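A greedy toy version of RTT-driven placement, assuming a precomputed RTT matrix between request sources and candidate hosts and enough total capacity; the paper's actual RTT-aware algorithm and its rescheduling step under core-switch failure are considerably more involved.

```python
def place_vms(rtt, capacity):
    """rtt[r][h]: RTT from request source r to host h; capacity[h]: free VM slots.
    Greedily place each request's VM on the free host with the lowest RTT."""
    placement = {}
    for r, row in enumerate(rtt):
        host = min(
            (h for h in range(len(row)) if capacity[h] > 0),
            key=lambda h: row[h],
        )
        placement[r] = host
        capacity[host] -= 1
    return placement

rtt = [[10, 40, 25], [35, 15, 30], [20, 45, 10]]
print(place_vms(rtt, capacity=[1, 1, 1]))  # each request goes to its cheapest free host
```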

Open Access Article A Comparison Study of Kernel Functions in the Support Vector Machine and Its Application for Termite Detection
Information 2018, 9(1), 5; doi:10.3390/info9010005
Received: 10 December 2017 / Revised: 27 December 2017 / Accepted: 28 December 2017 / Published: 2 January 2018
PDF Full-text (3224 KB) | HTML Full-text | XML Full-text
Abstract
Termites are the most destructive pests, and their attacks significantly impact the quality of wooden buildings. Due to their cryptic behavior, it is rarely apparent from visual observation that a termite infestation is active and that wood damage is occurring. Based on the phenomenon of acoustic signals generated by termites when attacking wood, we propose a practical framework to detect termites nondestructively, i.e., by using acoustic signal extraction. This method has the advantage of maintaining the quality of wood products and preventing more severe termite attacks. In this work, we inserted 220 subterranean termites into a piece of pine wood for feeding activity and monitored the resulting acoustic signal. Two acoustic features (i.e., energy and entropy) derived from the time domain were used for this study's analysis. Furthermore, the support vector machine (SVM) algorithm with different kernel functions (i.e., linear, radial basis function, sigmoid and polynomial) was employed to recognize the termites' acoustic signal. In addition, the area under the receiver operating characteristic curve (AUC) was also adopted to analyze and improve the performance results. Based on the numerical analysis, the SVM with the polynomial kernel function achieves the best classification accuracy of 0.9188. Full article
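A sketch of the kernel-comparison workflow using scikit-learn on synthetic data; the termite acoustic features (energy, entropy) and the reported 0.9188 accuracy come from the paper's own dataset, which is not available here.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the two time-domain features (energy, entropy).
X, y = make_classification(n_samples=600, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for kernel in ["linear", "rbf", "sigmoid", "poly"]:
    clf = SVC(kernel=kernel, probability=True, random_state=0).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{kernel:7s} accuracy={acc:.3f} AUC={auc:.3f}")
```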

Open Access Article Gene Selection for Microarray Cancer Data Classification by a Novel Rule-Based Algorithm
Information 2018, 9(1), 6; doi:10.3390/info9010006
Received: 31 October 2017 / Revised: 26 December 2017 / Accepted: 27 December 2017 / Published: 2 January 2018
PDF Full-text (3366 KB) | HTML Full-text | XML Full-text
Abstract
Due to the disproportionate difference between the number of genes and samples, microarray data analysis is considered an extremely difficult task in sample classification. Feature selection mitigates this problem by removing irrelevant and redundant genes from the data. In this paper, we propose a new methodology for feature selection that aims to detect relevant, non-redundant and interacting genes by analysing the feature value space instead of the feature space. Following this methodology, we also propose a new feature selection algorithm, namely Pavicd (Probabilistic Attribute-Value for Class Distinction). Experiments on fourteen microarray cancer datasets reveal that Pavicd obtains the best performance in terms of running time and classification accuracy when using Ripper-k and C4.5 as classifiers. When using an SVM (Support Vector Machine), the Gbc (Genetic Bee Colony) wrapper algorithm obtains the best results. However, Pavicd is significantly faster. Full article
(This article belongs to the Special Issue Feature Selection for High-Dimensional Data)

Open Access Article The Analysis of the Internet Development Based on the Complex Model of the Discursive Space
Information 2018, 9(1), 7; doi:10.3390/info9010007
Received: 28 October 2017 / Revised: 3 December 2017 / Accepted: 4 December 2017 / Published: 3 January 2018
PDF Full-text (1061 KB) | HTML Full-text | XML Full-text
Abstract
This paper aims to present a new way of understanding and elaborating the current state of reality, which has a substantial dependency on technology. An example of such a relatively mature technology is the Internet. The discursive space (DS) is a proposed new cognitive tool which can be used for this purpose. DS is constructed based on the idea of discourse and on the ideas of the configuration space, state space, phase space, space-time, etc. Discourse is understood as the representation/creation of reality, which means that it can also be understood as a carrier of knowledge. The configuration space and its relatives are tools elaborated in the field of physics for describing complex phenomena. DS is a tool for acquiring knowledge analogous to formal information processes, but it is based on another interpretation of the existence of knowledge. This interpretation sees knowledge as a social construction and not as an independent entity that can be described by a rigorous formal procedure. The necessity of such tools comes, inter alia, from management, particularly from humanistic management, where DS can describe the Internet as a dynamic and complex environment for organizations that understand it as a basic unit of the social space of reality. Full article

Open Access Feature Paper Article SeMiner: A Flexible Sequence Miner Method to Forecast Solar Time Series
Information 2018, 9(1), 8; doi:10.3390/info9010008
Received: 12 December 2017 / Revised: 29 December 2017 / Accepted: 2 January 2018 / Published: 4 January 2018
PDF Full-text (1942 KB) | HTML Full-text | XML Full-text
Abstract
X-rays emitted by the Sun can damage the electronic devices of spaceships, satellites, positioning systems and electricity distribution grids. Thus, the forecasting of solar X-rays is needed to warn organizations and mitigate undesirable effects. Traditional mining classification methods categorize observations into labels, and we aim to extend this approach to predict future X-ray levels. Therefore, we developed the “SeMiner” method, which allows the prediction of future events. “SeMiner” processes X-rays into sequences employing a new algorithm called “Series-to-Sequence” (SS), which uses a sliding window approach configured by a specialist. Then, the sequences are submitted to a classifier to generate a model that predicts X-ray levels. An optimized version of “SS” was also developed using parallelization techniques and Graphics Processing Units, in order to speed up the entire forecasting process. The obtained results indicate that “SeMiner” is well-suited to predicting solar X-rays and solar flares within the defined time range. It reached more than 90% accuracy for a 2-day forecast, and True Positive (TPR) and True Negative (TNR) rates above 80% when predicting X-ray levels. It also reached an accuracy of 72.7%, with a TPR of 70.9% and a TNR of 79.7%, when predicting solar flares. Moreover, the optimized version of “SS” proved to be 4.36 times faster than its initial version. Full article
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2017))
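The core “Series-to-Sequence” idea is a sliding-window reframing of a time series into labeled examples; a generic sketch is shown below with an assumed window length and forecast horizon (the real SS algorithm, its parallel/GPU version, and the X-ray level discretization are specific to the paper).

```python
def series_to_sequences(series, window=8, horizon=2):
    """Turn a 1-D series into (past-window, future-label) pairs so that a
    standard classifier can be trained to forecast `horizon` steps ahead."""
    pairs = []
    for i in range(len(series) - window - horizon + 1):
        past = series[i : i + window]
        label = series[i + window + horizon - 1]
        pairs.append((past, label))
    return pairs

# Illustrative discretized X-ray levels (not real GOES data).
xray_levels = ["A", "A", "B", "B", "C", "M", "C", "B", "B", "A", "A", "A"]
for past, label in series_to_sequences(xray_levels, window=4, horizon=2)[:3]:
    print(past, "->", label)
```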

Open Access Article Joint Cell Association and User Scheduling in Carrier Aggregated Heterogeneous Networks
Information 2018, 9(1), 9; doi:10.3390/info9010009
Received: 6 December 2017 / Revised: 31 December 2017 / Accepted: 31 December 2017 / Published: 5 January 2018
PDF Full-text (343 KB) | HTML Full-text | XML Full-text
Abstract
This paper focuses on the network capacity maximization problem through joint cell association and user scheduling with multiple carrier aggregation (CA) in heterogeneous networks (HetNets). For the downlink transmission, the proposed joint maximization problem is reformulated from a single data flow into multiple data flows through carrier-aggregated HetNets, in which users can associate with base stations (BSs) on more than one carrier band. Such a flexible joint maximization problem can be solved by convex optimization solutions with reasonable complexity. Numerical analysis confirms the performance advantages of the proposed multi-flow solution under different carrier aggregation deployments. Full article
(This article belongs to the Section Information and Communications Technology)

Open Access Article Generalized Single-Valued Neutrosophic Hesitant Fuzzy Prioritized Aggregation Operators and Their Applications to Multiple Criteria Decision-Making
Information 2018, 9(1), 10; doi:10.3390/info9010010
Received: 11 December 2017 / Revised: 29 December 2017 / Accepted: 2 January 2018 / Published: 5 January 2018
Cited by 2 | PDF Full-text (301 KB) | HTML Full-text | XML Full-text
Abstract
Single-valued neutrosophic hesitant fuzzy set (SVNHFS) is a combination of single-valued neutrosophic set and hesitant fuzzy set, and its aggregation tools play an important role in the multiple criteria decision-making (MCDM) process. This paper investigates the MCDM problems in which the criteria under SVNHF environment are in different priority levels. First, the generalized single-valued neutrosophic hesitant fuzzy prioritized weighted average operator and generalized single-valued neutrosophic hesitant fuzzy prioritized weighted geometric operator are developed based on the prioritized average operator. Second, some desirable properties and special cases of the proposed operators are discussed in detail. Third, an approach combined with the proposed operators and the score function of single-valued neutrosophic hesitant fuzzy element is constructed to solve MCDM problems. Finally, an example of investment selection is provided to illustrate the validity and rationality of the proposed method. Full article
(This article belongs to the Special Issue Neutrosophic Information Theory and Applications)
Open Access Article Image Super-Resolution Algorithm Based on an Improved Sparse Autoencoder
Information 2018, 9(1), 11; doi:10.3390/info9010011
Received: 4 December 2017 / Revised: 28 December 2017 / Accepted: 2 January 2018 / Published: 5 January 2018
PDF Full-text (3983 KB) | HTML Full-text | XML Full-text
Abstract
Due to the limitations of the resolution of the imaging system and the influence of scene changes and other factors, sometimes only low-resolution images can be acquired, which cannot satisfy the practical application’s requirements. To improve the quality of low-resolution images, a novel super-resolution algorithm based on an improved sparse autoencoder is proposed. Firstly, in the training set preprocessing stage, the high- and low-resolution image training sets are constructed, respectively, by using high-frequency information of the training samples as the characterization, and then the zero-phase component analysis whitening technique is utilized to decorrelate the formed joint training set to reduce its redundancy. Secondly, a constructed sparse regularization term is added to the cost function of the traditional sparse autoencoder to further strengthen the sparseness constraint on the hidden layer. Finally, in the dictionary learning stage, the improved sparse autoencoder is adopted to achieve unsupervised dictionary learning to improve the accuracy and stability of the dictionary. Experimental results validate that the proposed algorithm outperforms the existing algorithms both in terms of the subjective visual perception and the objective evaluation indices, including the peak signal-to-noise ratio and the structural similarity measure. Full article
(This article belongs to the Section Information Processes)
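For context, the conventional sparse autoencoder objective that the paper's extra sparsity-regularization term builds on has the usual textbook form (shown here as background; the additional constructed term introduced by the authors is not reproduced):

```latex
J(W,b) = \frac{1}{2m}\sum_{i=1}^{m}\bigl\lVert \hat{x}^{(i)} - x^{(i)} \bigr\rVert^{2}
       + \frac{\lambda}{2}\lVert W \rVert^{2}
       + \beta \sum_{j=1}^{s}\mathrm{KL}\bigl(\rho \,\big\|\, \hat{\rho}_{j}\bigr),
\qquad
\mathrm{KL}(\rho \,\|\, \hat{\rho}_{j}) = \rho\log\frac{\rho}{\hat{\rho}_{j}}
       + (1-\rho)\log\frac{1-\rho}{1-\hat{\rho}_{j}},
```

where m is the number of training patches, s the number of hidden units, ρ the target average activation, ρ̂_j the observed mean activation of hidden unit j, and λ, β the weights of the weight-decay and sparsity penalties.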

Open Access Article Automata Approach to XML Data Indexing
Information 2018, 9(1), 12; doi:10.3390/info9010012
Received: 1 December 2017 / Revised: 29 December 2017 / Accepted: 3 January 2018 / Published: 6 January 2018
PDF Full-text (1534 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
The internal structure of XML documents can be viewed as a tree. Trees are among the fundamental and well-studied data structures in computer science. They express a hierarchical structure and are widely used in many applications. This paper focuses on the problem of processing tree data structures; particularly, it studies the XML index problem. Although there exist many state-of-the-art methods, the XML index problem remains an active research area. However, existing methods usually lack clear references to a systematic approach based on the standard theory of formal languages and automata. Therefore, we present new methods that solve the XML index problem using automata theory. These methods are simple and allow one to efficiently process a small subset of XPath. Thus, given an XML data structure, our methods can be used efficiently as auxiliary data structures that enable answering a particular set of queries, e.g., XPath queries using any combination of the child and descendant-or-self axes. Given an XML tree model with n nodes, the searching phase uses the index, reads an input query of size m, and finds the answer in time O(m), independent of the size of the original XML document. Full article
(This article belongs to the Special Issue Special Issues on Languages Processing)

Open Access Article Lightweight S-Box Architecture for Secure Internet of Things
Information 2018, 9(1), 13; doi:10.3390/info9010013
Received: 12 December 2017 / Revised: 22 December 2017 / Accepted: 5 January 2018 / Published: 8 January 2018
PDF Full-text (2712 KB) | HTML Full-text | XML Full-text
Abstract
Lightweight cryptographic solutions are required to guarantee the security of the pervasive Internet of Things (IoT). Cryptographic primitives mandate a non-linear operation. The design of a lightweight, secure, non-linear 4 × 4 substitution box (S-box) suited to Internet of Things (IoT) applications is proposed in this work. The structure of the 4 × 4 S-box is devised in the finite fields GF(2^4) and GF((2^2)^2). The finite field S-box is realized by multiplicative inversion followed by an affine transformation. The multiplicative inverse architecture employs the Euclidean algorithm for inversion in the composite field GF((2^2)^2). The affine transformation is carried out in the field GF(2^4). The isomorphic mapping between the fields GF(2^4) and GF((2^2)^2) is based on the primitive element in the higher-order field GF(2^4). The recommended finite field S-box architecture is combinational and enables sub-pipelining. Linear and differential cryptanalysis validates that the proposed S-box is within the maximal security bound. It is observed that there is an 86.5% lower gate count for the realization of the subfield operations in the composite field GF((2^2)^2) compared to the GF(2^4) field. In the PRESENT lightweight cipher structure with the basic loop architecture, the proposed S-box demonstrates a 5% reduction in gate-equivalent area over the look-up-table-based S-box in TSMC 180 nm technology. Full article
(This article belongs to the Special Issue Security in the Internet of Things)
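A brute-force software sketch of multiplicative inversion in GF(2^4), using the common irreducible polynomial x^4 + x + 1 as an assumption; the paper instead performs inversion in the composite field GF((2^2)^2) with a Euclidean-algorithm inverter, and its affine transformation constants are not given in the abstract.

```python
POLY = 0b10011  # x^4 + x + 1, an assumed irreducible polynomial for GF(2^4)

def gf16_mul(a, b):
    """Carry-less multiplication in GF(2^4) with reduction modulo POLY."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= POLY
        b >>= 1
    return r

def gf16_inv(a):
    """Multiplicative inverse by exhaustive search (fine for a 16-element field)."""
    if a == 0:
        return 0  # S-box designs conventionally map 0 to 0 before the affine step
    return next(b for b in range(1, 16) if gf16_mul(a, b) == 1)

sbox_core = [gf16_inv(x) for x in range(16)]  # an affine transformation would follow
print(sbox_core)
```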

Open Access Article Reliability Analysis of the High-speed Train Bearing Based on Wiener Process
Information 2018, 9(1), 15; doi:10.3390/info9010015
Received: 23 October 2017 / Revised: 2 January 2018 / Accepted: 5 January 2018 / Published: 12 January 2018
PDF Full-text (1544 KB) | HTML Full-text | XML Full-text
Abstract
Because of measurement uncertainty in the bearing degradation process, it is difficult to carry out reliability analysis. A random performance degradation model is used to analyze the reliability life of high-speed train bearings, which exhibit a slow degradation process and a relatively stable degradation path. The unknown coefficients are viewed as random variables in the model. Based on the analysis of bearing test data, the reliability analysis of the bearing life is finally completed. The results show that the method can assess the reliability of bearing life with zero failures by taking full advantage of the performance degradation data in a small sample. Full article
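As background on the standard machinery behind such models: for a Wiener degradation process X(t) = μt + σB(t) with failure defined as first passage of a threshold ω, the lifetime follows an inverse Gaussian distribution, so reliability at time t can be evaluated in closed form. The sketch below uses that textbook result with purely illustrative parameter values; the paper's estimated drift, diffusion, and threshold for the train bearing are not given in the abstract.

```python
from math import exp, sqrt
from scipy.stats import norm

def wiener_reliability(t, mu, sigma, omega):
    """P(first passage of threshold omega occurs after t) for X(t)=mu*t+sigma*B(t), mu>0."""
    return (norm.cdf((omega - mu * t) / (sigma * sqrt(t)))
            - exp(2 * mu * omega / sigma**2)
            * norm.cdf(-(omega + mu * t) / (sigma * sqrt(t))))

# Illustrative parameters only: drift 0.02 per hour, diffusion 0.1, failure threshold 5.
for t in (50, 150, 250):
    print(t, round(wiener_reliability(t, mu=0.02, sigma=0.1, omega=5.0), 4))
```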

Open Access Article Improving Particle Swarm Optimization Based on Neighborhood and Historical Memory for Training Multi-Layer Perceptron
Information 2018, 9(1), 16; doi:10.3390/info9010016
Received: 11 November 2017 / Revised: 21 December 2017 / Accepted: 10 January 2018 / Published: 12 January 2018
PDF Full-text (3553 KB) | HTML Full-text | XML Full-text
Abstract
Many optimization problems can be found in scientific and engineering fields. It is a challenge for researchers to design efficient algorithms to solve these optimization problems. The particle swarm optimization (PSO) algorithm, which is inspired by the social behavior of bird flocks, is a global stochastic method. However, a monotonic and static learning model, which is applied to all particles, limits the exploration ability of PSO. To overcome these shortcomings, we propose an improved particle swarm optimization algorithm based on neighborhood and historical memory (PSONHM). In the proposed algorithm, every particle takes into account the experience of its neighbors and its competitors when updating its position. The crossover operation is employed to enhance the diversity of the population. Furthermore, a historical memory Mw is used to generate a new inertia weight with a parameter adaptation mechanism. To verify the effectiveness of the proposed algorithm, experiments are conducted with the CEC2014 test problems in 30 dimensions. Finally, two classification problems are employed to investigate the efficiency of PSONHM in training Multi-Layer Perceptrons (MLPs). The experimental results indicate that the proposed PSONHM can effectively solve global optimization problems. Full article
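The canonical global-best PSO update that PSONHM builds on is sketched below; the neighborhood/competitor terms, the crossover operation, and the historical memory Mw for inertia-weight adaptation are the paper's contributions and are not reproduced here.

```python
import numpy as np

def pso(f, dim=10, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f with the basic global-best PSO velocity/position update."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

print(pso(lambda z: np.sum(z**2))[1])  # sphere function: best value should approach 0
```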

Open Access Article CSS Preprocessing: Tools and Automation Techniques
Information 2018, 9(1), 17; doi:10.3390/info9010017
Received: 12 November 2017 / Revised: 5 January 2018 / Accepted: 10 January 2018 / Published: 12 January 2018
PDF Full-text (299 KB) | HTML Full-text | XML Full-text
Abstract
Cascading Style Sheets (CSS) is a W3C specification for a style sheet language used for describing the presentation of a document written in a markup language; more precisely, for styling Web documents. However, in the last few years, the landscape of CSS development has changed dramatically with the appearance of several languages and tools aiming to help developers build clean, modular and performance-aware CSS. These new approaches give developers mechanisms to preprocess CSS rules through the use of programming constructs, defined as CSS preprocessors, with the ultimate goal of bringing those missing constructs to the CSS realm and fostering structured programming of stylesheets. At the same time, a new set of tools has appeared, defined as postprocessors, for extension and automation purposes, covering a broad set of features ranging from identifying unused and duplicate code to applying vendor prefixes. With all these tools and techniques in hand, developers need a consistent workflow to foster modular CSS coding. This paper aims to present an introductory survey of CSS processors. The survey gathers information on a specific set of processors, categorizes them and compares their features with regard to a set of predefined criteria such as maturity, coverage and performance. Finally, we propose a basic set of best practices in order to set up a simple and pragmatic styling code workflow. Full article
(This article belongs to the Special Issue Special Issues on Languages Processing)

Open Access Article Histopathological Breast-Image Classification Using Local and Frequency Domains by Convolutional Neural Network
Information 2018, 9(1), 19; doi:10.3390/info9010019
Received: 18 December 2017 / Revised: 7 January 2018 / Accepted: 12 January 2018 / Published: 16 January 2018
PDF Full-text (3023 KB) | HTML Full-text | XML Full-text
Abstract
Identification of the malignancy of tissues from histopathological images has always been an issue of concern to doctors and radiologists. This task is time-consuming, tedious and, moreover, very challenging. Success in finding malignancy from histopathological images primarily depends on long-term experience, though sometimes experts disagree on their decisions. However, Computer Aided Diagnosis (CAD) techniques help the radiologist give a second opinion that can increase the reliability of the radiologist's decision. Among the different image analysis techniques, classification of images has always been a challenging task. Due to the intense complexity of biomedical images, it is always very challenging to provide a reliable decision about an image. The state-of-the-art Convolutional Neural Network (CNN) technique has had great success in natural image classification. Utilizing advanced engineering techniques along with the CNN, in this paper, we have classified a set of histopathological Breast-Cancer (BC) images utilizing a state-of-the-art CNN model containing a residual block. Conventional CNN operation takes raw images as input and extracts the global features; however, object-oriented local features also contain significant information: for example, the Local Binary Pattern (LBP) represents effective textural information, the histogram represents the pixel intensity distribution, the Contourlet Transform (CT) gives detailed information about the smoothness of edges, and the Discrete Fourier Transform (DFT) derives frequency-domain information from the image. Utilizing these advantages, along with our proposed novel CNN model, we have examined the performance of the novel CNN model as a histopathological image classifier. To do so, we have introduced five cases: (a) Convolutional Neural Network Raw Image (CNN-I); (b) Convolutional Neural Network CT Histogram (CNN-CH); (c) Convolutional Neural Network CT LBP (CNN-CL); (d) Convolutional Neural Network Discrete Fourier Transform (CNN-DF); (e) Convolutional Neural Network Discrete Cosine Transform (CNN-DC). We have performed our experiments on the BreakHis image dataset. The best performance is achieved with the CNN-CH model on the 200× dataset, which provides Accuracy, Sensitivity, False Positive Rate, False Negative Rate, Recall Value, Precision and F-measure of 92.19%, 94.94%, 5.07%, 1.70%, 98.20%, 98.00% and 98.00%, respectively. Full article
(This article belongs to the Special Issue Information-Centered Healthcare)
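The “residual block” mentioned in the abstract follows the general pattern below; this is a generic PyTorch sketch with assumed channel counts, not the paper's full architecture or its CT/LBP/DFT/DCT input pipelines.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Generic residual block: two conv layers plus an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # shortcut connection

x = torch.randn(1, 32, 64, 64)     # e.g., a 64x64 feature map with 32 channels (assumed)
print(ResidualBlock(32)(x).shape)  # torch.Size([1, 32, 64, 64])
```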

Open Access Article Usability as the Key Factor to the Design of a Web Server for the CReF Protein Structure Predictor: The wCReF
Information 2018, 9(1), 20; doi:10.3390/info9010020
Received: 20 December 2017 / Revised: 11 January 2018 / Accepted: 13 January 2018 / Published: 17 January 2018
PDF Full-text (4737 KB) | HTML Full-text | XML Full-text
Abstract
Protein structure prediction servers use various computational methods to predict the three-dimensional structure of proteins from their amino acid sequence. Predicted models are used to infer protein function and guide experimental efforts. This can contribute to solving the problem of predicting tertiary protein structures, one of the main unsolved problems in bioinformatics. The challenge is to understand the relationship between the amino acid sequence of a protein and its three-dimensional structure, which is related to the function of these macromolecules. This article is an extended version of the article wCReF: The Web Server for the Central Residue Fragment-based Method (CReF) Protein Structure Predictor, published in the 14th International Conference on Information Technology: New Generations. In the first version, we presented the wCReF, a protein structure prediction server for the central residue fragment-based method. The wCReF interface was developed with a focus on usability and user interaction. With this tool, users can enter the amino acid sequence of their target protein and obtain its approximate 3D structure without the need to install all the multitude of necessary tools. In this extended version, we present the design process of the prediction server in detail, which includes: (A) identification of user needs: aiming at understanding the features of a protein structure prediction server, the end user profiles and the commonly-performed tasks; (B) server usability inspection: in order to define wCReF’s requirements and features, we have used heuristic evaluation guided by experts in both the human-computer interaction and bioinformatics domain areas, applied to the protein structure prediction servers I-TASSER, QUARK and Robetta; as a result, changes were found in all heuristics resulting in 89 usability problems; (C) software requirements document and prototype: assessment results guiding the key features that wCReF must have compiled in a software requirements document; from this step, prototyping was carried out; (D) wCReF usability analysis: a glimpse at the detection of new usability problems with end users by adapting the Ssemugabi satisfaction questionnaire; users’ evaluation had 80% positive feedback; (E) finally, some specific guidelines for interface design are presented, which may contribute to the design of interactive computational resources for the field of bioinformatics. In addition to the results of the original article, we present the methodology used in wCReF’s design and evaluation process (sample, procedures, evaluation tools) and the results obtained. Full article
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2017))

Open Access Article Algorithms for Optimization of Processor and Memory Affinity for Remote Core Locking Synchronization in Multithreaded Applications
Information 2018, 9(1), 21; doi:10.3390/info9010021
Received: 11 December 2017 / Revised: 10 January 2018 / Accepted: 16 January 2018 / Published: 18 January 2018
PDF Full-text (8593 KB) | HTML Full-text | XML Full-text
Abstract
This paper proposes algorithms for optimization of the Remote Core Locking (RCL) synchronization method in multithreaded programs. We propose an algorithm for the initialization of RCL-locks and an algorithm for thread affinity optimization. The algorithms consider the structures of hierarchical computer systems and non-uniform memory access (NUMA) in order to minimize the execution time of multithreaded programs with RCL. The experimental results on multi-core computer systems show the reduction of execution time for programs with RCL. Full article

Open Access Article Performance Study of Adaptive Video Streaming in an Interference Scenario of Femto-Macro Cell Networks
Information 2018, 9(1), 22; doi:10.3390/info9010022
Received: 27 November 2017 / Revised: 14 January 2018 / Accepted: 16 January 2018 / Published: 18 January 2018
PDF Full-text (3813 KB) | HTML Full-text | XML Full-text
Abstract
The demand for video traffic is increasing over mobile networks, which are taking on a new shape due to their heterogeneity. However, the wireless link capacity cannot cope with the traffic demand. This is due to the interference problem, which can be considered the most important challenge in heterogeneous networks. Consequently, it results in poor video streaming quality, such as bad delivery quality, service interruptions, etc. In this paper, we propose a solution for interference mitigation in the context of heterogeneous networks through a power control mechanism, while guaranteeing the Quality of Service of the video streaming. We derive a model for adapting the video bit rate to match the channel's achievable bit rate. Our results demonstrate a high satisfaction for video streaming in terms of delay and throughput. Full article
(This article belongs to the Special Issue Selected Papers from WMNC 2017 and JITEL 2017)

Open Access Article Inspired from Ants Colony: Smart Routing Algorithm of Wireless Sensor Network
Information 2018, 9(1), 23; doi:10.3390/info9010023
Received: 14 December 2017 / Revised: 11 January 2018 / Accepted: 17 January 2018 / Published: 22 January 2018
PDF Full-text (3698 KB) | HTML Full-text | XML Full-text
Abstract
In brief, Wireless Sensor Networks (WSNs) are sets of limited-power nodes used for gathering data from a determined area. Increasing the lifetime is the main challenge in optimizing WSN routing protocols, since the sensors' energy is limited in most cases. In this respect, this article introduces a novel smart routing algorithm for wireless sensor networks consisting of randomly dispersed stationary nodes; the approach is inspired by ant colonies. The proposed algorithm takes into consideration the distance between two nodes, the length of the chosen path and the nodes' residual energy so as to update the probability of choosing the next node among the neighbouring nodes. In contrast to several routing algorithms, the nodes aggregate the data of their predecessors and send it to their successors, and the source node changes in almost every iteration. Consequently, the energy consumption is balanced between the nodes, and hence the network lifetime is increased. Detailed descriptions and a set of Matlab simulations measuring the network lifetime and the energy consumed by the nodes of the proposed approach are presented. The simulation results demonstrate the effectiveness of the proposed smart routing algorithm (SRA). Full article
(This article belongs to the Special Issue Machine to Machine Communications and Internet of Things (IoT))
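The next-hop choice described above follows the general ant-colony transition rule; the sketch below uses the classic pheromone/heuristic form with a residual-energy factor added, where the exponents and the energy term are illustrative assumptions rather than the paper's exact formula.

```python
import random

def next_hop(pheromone, distance, energy, alpha=1.0, beta=2.0, gamma=1.0):
    """Pick the next node with probability proportional to
    pheromone^alpha * (1/distance)^beta * energy^gamma over the neighbours."""
    weights = [
        (pheromone[j] ** alpha) * ((1.0 / distance[j]) ** beta) * (energy[j] ** gamma)
        for j in range(len(distance))
    ]
    total = sum(weights)
    return random.choices(range(len(distance)), weights=[w / total for w in weights])[0]

# Three candidate neighbours: closer, better-charged nodes are favoured.
print(next_hop(pheromone=[1.0, 1.2, 0.8], distance=[5.0, 2.0, 8.0], energy=[0.9, 0.7, 1.0]))
```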

Open Access Article How Uncertain Information on Service Capacity Influences the Intermodal Routing Decision: A Fuzzy Programming Perspective
Information 2018, 9(1), 24; doi:10.3390/info9010024
Received: 18 December 2017 / Revised: 19 January 2018 / Accepted: 23 January 2018 / Published: 24 January 2018
Cited by 1 | PDF Full-text (4099 KB) | HTML Full-text | XML Full-text
Abstract
Capacity uncertainty is a common issue in the transportation planning field. However, few studies discuss the intermodal routing problem with service capacity uncertainty. Based on our previous study on the intermodal routing under deterministic capacity consideration, we systematically explore how service capacity uncertainty influences the intermodal routing decision. First of all, we adopt trapezoidal fuzzy numbers to describe the uncertain information of the service capacity, and further transform the deterministic capacity constraint into a fuzzy chance constraint based on fuzzy credibility measure. We then integrate such fuzzy chance constraint into the mixed-integer linear programming (MILP) model proposed in our previous study to develop a fuzzy chance-constrained programming model. To enable the improved model to be effectively programmed in the standard mathematical programming software and solved by exact solution algorithms, a crisp equivalent linear reformulation of the fuzzy chance constraint is generated. Finally, we modify the empirical case presented in our previous study by replacing the deterministic service capacities with trapezoidal fuzzy ones. Using the modified empirical case, we utilize sensitivity analysis and fuzzy simulation to analyze the influence of service capacity uncertainty on the intermodal routing decision, and summarize some interesting insights that are helpful for decision makers. Full article
(This article belongs to the Section Information Processes)
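As background on the chance constraint, the credibility that a trapezoidal fuzzy capacity ξ = (r1, r2, r3, r4) is at least some load x can be computed from the standard credibility measure, Cr = (Possibility + Necessity) / 2. The sketch below follows that textbook definition with made-up numbers; the paper's crisp linear reformulation of the constraint is not reproduced.

```python
def credibility_geq(x, r1, r2, r3, r4):
    """Cr{xi >= x} for a trapezoidal fuzzy variable xi = (r1, r2, r3, r4)."""
    if x <= r1:
        return 1.0
    if x <= r2:
        return (2 * r2 - r1 - x) / (2 * (r2 - r1))
    if x <= r3:
        return 0.5
    if x <= r4:
        return (r4 - x) / (2 * (r4 - r3))
    return 0.0

# Illustrative fuzzy capacity (40, 50, 60, 70): a load of 45 is supported with
# credibility 0.75, while a load of 65 only with credibility 0.25.
print(credibility_geq(45, 40, 50, 60, 70), credibility_geq(65, 40, 50, 60, 70))
```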

Review

Jump to: Editorial, Research

Open Access Review Smart Card Data Mining of Public Transport Destination: A Literature Review
Information 2018, 9(1), 18; doi:10.3390/info9010018
Received: 30 November 2017 / Revised: 5 January 2018 / Accepted: 10 January 2018 / Published: 13 January 2018
PDF Full-text (1009 KB) | HTML Full-text | XML Full-text
Abstract
Smart card data is increasingly used to investigate passenger behavior and the demand characteristics of public transport. Destination estimation for public transport is one of the major concerns in the use of smart card data, since most automatic fare collection (AFC) systems record only boarding information and not passenger alighting information. In recent years, numerous studies concerning destination estimation have been carried out. This study provides a comprehensive review of the practice of using smart card data for destination estimation. The results show that the land use factor is not discussed in more than three quarters of the papers and sensitivity analysis is not applied in two thirds of the papers. In addition, the results are not validated in half of the relevant studies. In the future, more research should be done to improve the current models, such as considering additional factors, performing sensitivity analysis of parameters, and validating the results with multi-source data and new methods. Full article
