
Table of Contents

Information, Volume 10, Issue 4 (April 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Displaying articles 1-34
Open Access Article MDS Self-Dual Codes and Antiorthogonal Matrices over Galois Rings
Information 2019, 10(4), 153; https://doi.org/10.3390/info10040153
Received: 21 March 2019 / Revised: 15 April 2019 / Accepted: 24 April 2019 / Published: 25 April 2019
PDF Full-text (273 KB) | HTML Full-text | XML Full-text
Abstract
In this study, we explore maximum distance separable (MDS) self-dual codes over Galois rings GR(p^m, r) with p ≡ 1 (mod 4) and odd r. Using the building-up construction, we construct MDS self-dual codes of length four and eight over GR(p^m, 3) with (p = 3 and m = 2, 3, 4, 5, 6), (p = 7 and m = 2, 3), (p = 11 and m = 2), (p = 19 and m = 2), (p = 23 and m = 2), and (p = 31 and m = 2). In the building-up construction, it is important to determine the existence of a square matrix U such that UU^T = −I, which is called an antiorthogonal matrix. We prove that there is no 2 × 2 antiorthogonal matrix over GR(2^m, r) with m ≥ 2 and odd r. Full article
(This article belongs to the Section Information Theory and Methodology)
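A quick way to get a feel for the antiorthogonal-matrix condition is a brute-force search over Z_{p^m} = GR(p^m, 1) (the r = 1 case; full Galois rings with r > 1 would need polynomial-basis arithmetic). The sketch below, with moduli chosen for illustration, checks whether a 2 × 2 matrix U with UU^T = −I exists:

```python
from itertools import product

def exists_antiorthogonal_2x2(modulus):
    """Brute-force: is there a 2x2 matrix U over Z_modulus with U U^T = -I?"""
    target = (-1) % modulus
    for a, b, c, d in product(range(modulus), repeat=4):
        # U = [[a, b], [c, d]]; the entries of U U^T must match -I
        if ((a * a + b * b) % modulus == target and
            (c * c + d * d) % modulus == target and
            (a * c + b * d) % modulus == 0):
            return True
    return False

print(exists_antiorthogonal_2x2(9))  # True: e.g. U = [[1, 4], [5, 1]] mod 9
print(exists_antiorthogonal_2x2(4))  # False
```

Consistent with the paper's nonexistence result for GR(2^m, r) with odd r, no such matrix turns up modulo 4 = 2^2, while one exists modulo 9 = 3^2.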
Open Access Article Wand-Like Interaction with a Hand-Held Tablet Device—A Study on Selection and Pose Manipulation Techniques
Information 2019, 10(4), 152; https://doi.org/10.3390/info10040152
Received: 28 February 2019 / Revised: 16 April 2019 / Accepted: 18 April 2019 / Published: 24 April 2019
Viewed by 93 | PDF Full-text (7712 KB) | HTML Full-text | XML Full-text
Abstract
Current hand-held smart devices are supplied with powerful processors, high-resolution screens, and sharp cameras that make them suitable for Augmented Reality (AR) applications. Such applications commonly use interaction techniques adapted for touch, such as touch selection and multi-touch pose manipulation, mapping 2D gestures to 3D actions. To enable direct 3D interaction for hand-held AR, an alternative is to use changes of the device pose for 6-degrees-of-freedom interaction. In this article, we explore selection and pose manipulation techniques that aim to minimize the amount of touch. For this, we study the characteristics of both non-touch selection and non-touch pose manipulation techniques. We present two studies that, on the one hand, compare selection techniques with the common touch selection and, on the other, investigate the effect of user gaze control on the non-touch pose manipulation techniques. Full article
(This article belongs to the Special Issue Human-Centered 3D Interaction and User Interface)
Open Access Article A High Throughput Hardware Architecture for Parallel Recursive Systematic Convolutional Encoders
Information 2019, 10(4), 151; https://doi.org/10.3390/info10040151
Received: 30 March 2019 / Revised: 21 April 2019 / Accepted: 22 April 2019 / Published: 24 April 2019
Viewed by 107 | PDF Full-text (1006 KB) | HTML Full-text | XML Full-text
Abstract
In recent years, recursive systematic convolutional (RSC) encoders have found application in modern telecommunication systems to reduce the bit error rate (BER). In view of the necessity of increasing the throughput of such applications, several approaches using hardware implementations of RSC encoders have been explored. In this paper, we propose a hardware intellectual property (IP) core for high-throughput RSC encoders. The IP core exploits a methodology based on the ABCD matrix model, which makes it possible to increase the number of input bits processed in parallel. Through an analysis of the proposed network topology, and by exploiting data from the implementation on a Zynq 7000 xc7z010clg400-1 field-programmable gate array (FPGA), we estimate how the input data rate and the resource occupation depend on the degree of parallelism. This analysis, together with the BER curves, provides a description of the principal merit parameters of an RSC encoder. Full article
(This article belongs to the Special Issue ICSTCC 2018: Advances in Control and Computers)
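As background for the parallel architecture, a minimal serial RSC encoder can be sketched in a few lines. The rate-1/2 generator pair (1, 5/7 in octal notation) below is a common textbook choice, not necessarily the one used in the paper, and the sketch omits the ABCD state-space parallelization the paper builds on:

```python
def rsc_encode(bits, g_fb=0b111, g_ff=0b101):
    """Rate-1/2 recursive systematic convolutional encoder.

    Polynomial bit i is the coefficient of D^i: g_fb = 1 + D + D^2 (octal 7,
    the feedback polynomial), g_ff = 1 + D^2 (octal 5, feedforward).
    Returns (systematic, parity) bit lists.
    """
    mem = len(bin(g_fb)) - 3        # memory = degree of the feedback polynomial
    regs = [0] * mem                # regs[i] holds the feedback bit delayed i+1 steps
    systematic, parity = [], []
    for u in bits:
        fb = u
        for i in range(mem):        # recursive feedback sum
            if (g_fb >> (i + 1)) & 1:
                fb ^= regs[i]
        p = fb if (g_ff & 1) else 0
        for i in range(mem):        # feedforward parity taps
            if (g_ff >> (i + 1)) & 1:
                p ^= regs[i]
        regs = [fb] + regs[:-1]     # shift the register
        systematic.append(u)
        parity.append(p)
    return systematic, parity

sys_out, par_out = rsc_encode([1, 0, 0, 0])
print(sys_out, par_out)  # [1, 0, 0, 0] [1, 1, 1, 0]
```

The systematic branch reproduces the input unchanged, which is the defining property of an RSC code; the recursion in the feedback sum is what makes the encoder recursive rather than a plain convolutional one.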
Open Access Review Text Classification Algorithms: A Survey
Information 2019, 10(4), 150; https://doi.org/10.3390/info10040150
Received: 22 March 2019 / Revised: 17 April 2019 / Accepted: 17 April 2019 / Published: 23 April 2019
Viewed by 405 | PDF Full-text (7541 KB) | HTML Full-text | XML Full-text
Abstract
In recent years, there has been exponential growth in the number of complex documents and texts that require a deeper understanding of machine learning methods to be classified accurately in many applications. Many machine learning approaches have achieved impressive results in natural language processing. The success of these learning algorithms relies on their capacity to understand complex models and non-linear relationships within data. However, finding suitable structures, architectures, and techniques for text classification remains a challenge for researchers. In this paper, a brief overview of text classification algorithms is presented. This overview covers different text feature extraction methods, dimensionality reduction methods, existing algorithms and techniques, and evaluation methods. Finally, the limitations of each technique and their application to real-world problems are discussed. Full article
(This article belongs to the Special Issue Machine Learning on Scientific Data and Information)
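As a minimal, self-contained illustration of the pipeline such surveys cover (feature extraction followed by classification), here is a TF-IDF nearest-centroid classifier; the toy corpus and labels are invented for the example:

```python
import math
from collections import Counter

def tfidf(docs):
    """docs: list of token lists -> (list of {term: weight}, idf table)."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}  # +1 keeps shared terms nonzero
    vecs = [{t: c / len(d) * idf[t] for t, c in Counter(d).items()} for d in docs]
    return vecs, idf

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

train = [("sports", "the team won the game".split()),
         ("sports", "a great game for the team".split()),
         ("tech",   "the code compiles and runs fast".split()),
         ("tech",   "fast code with few bugs".split())]
vecs, idf = tfidf([d for _, d in train])

def centroid(label):
    sel = [v for (lab, _), v in zip(train, vecs) if lab == label]
    terms = set(t for v in sel for t in v)
    return {t: sum(v.get(t, 0.0) for v in sel) / len(sel) for t in terms}

cents = {lab: centroid(lab) for lab in {"sports", "tech"}}

def classify(tokens):
    c = Counter(tokens)
    q = {t: c[t] / len(tokens) * idf[t] for t in c if t in idf}
    return max(cents, key=lambda lab: cosine(q, cents[lab]))

print(classify("the team played a fine game".split()))  # sports
```

Real systems would swap in the richer feature extractors, dimensionality reduction, and classifiers the survey catalogs, but the shape of the pipeline is the same.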
Open Access Article A Collaborative Pilot Platform for Data Annotation and Enrichment in Viticulture
Information 2019, 10(4), 149; https://doi.org/10.3390/info10040149
Received: 1 March 2019 / Revised: 15 April 2019 / Accepted: 18 April 2019 / Published: 22 April 2019
Viewed by 150 | PDF Full-text (13070 KB) | HTML Full-text | XML Full-text
Abstract
It took some time indeed, but the research evolution and transformations that occurred in the smart agriculture field over recent years tend to make it the main topic of interest in the so-called Internet of Things (IoT) domain. Undoubtedly, our era is characterized by the mass production of huge amounts of data, information and content deriving from many different sources, mostly IoT devices and sensors, but also from environmentalists, agronomists, winemakers, or plain farmers and interested stakeholders themselves. Being an emerging field, only a small part of this rich content has been aggregated so far in digital platforms that serve as cross-domain hubs. The latter typically offer limited usability and accessibility of the actual content due to problems with insufficient data and metadata availability, as well as their quality. Following our recent involvement within a precision viticulture environment, and in an effort to make the notion of smart agriculture in the winery domain more accessible to and reusable by the general public, we introduce herein the model of an aggregation platform that provides enhanced services and enables human-computer collaboration for agricultural data annotation and enrichment. In principle, the proposed architecture goes beyond existing digital content aggregation platforms by advancing digital data through the combination of artificial intelligence automation and creative user engagement, thus facilitating its accessibility, visibility, and re-use. In particular, by using image and free text analysis methodologies for automatic metadata enrichment, in accordance with human expertise for enrichment, it offers a cornerstone for future researchers focusing on improving the quality of digital agricultural information analysis and its presentation, thus establishing new ways for its efficient exploitation at a larger scale, with benefits for both the agricultural and the consumer domains. Full article
(This article belongs to the Section Information Applications)
Open Access Article Survey and Classification of Automotive Security Attacks
Information 2019, 10(4), 148; https://doi.org/10.3390/info10040148
Received: 1 April 2019 / Accepted: 16 April 2019 / Published: 19 April 2019
Viewed by 184 | PDF Full-text (872 KB) | HTML Full-text | XML Full-text
Abstract
Due to current development trends in the automotive industry towards more strongly connected and autonomous driving, the attack surface of vehicles is growing, which increases the risk of security attacks. This has been confirmed by several research projects in which vehicles were attacked in order to trigger various functions. In some cases these functions were critical to operational safety. To make automotive systems more secure, concepts must be developed that take existing attacks into account. Several taxonomies have been proposed to analyze and classify security attacks. However, in this paper we show that the existing taxonomies were not designed for application in the automotive development process and therefore do not provide a sufficient degree of detail for supporting development phases such as threat analysis or security testing. In order to be able to use the information that security attacks can provide for the development of security concepts and for testing automotive systems, we propose a comprehensive taxonomy with degrees of detail that addresses these tasks. In particular, our proposed taxonomy is designed in such a way that each step in the vehicle development process can leverage it. Full article
(This article belongs to the Section Information and Communications Technology)
Open Access Article An Underground Radio Wave Propagation Prediction Model for Digital Agriculture
Information 2019, 10(4), 147; https://doi.org/10.3390/info10040147
Received: 20 March 2019 / Revised: 10 April 2019 / Accepted: 16 April 2019 / Published: 18 April 2019
Viewed by 240 | PDF Full-text (520 KB) | HTML Full-text | XML Full-text
Abstract
Underground sensing and propagation of Signals in the Soil (SitS) medium is an electromagnetic issue. The path loss prediction with higher accuracy is an open research subject in digital agriculture monitoring applications for sensing and communications. The statistical data are predominantly derived from site-specific empirical measurements, which is considered an impediment to universal application. Nevertheless, in the existing literature, statistical approaches have been applied to the SitS channel modeling, where impulse response analysis and the Friis open space transmission formula are employed as the channel modeling tool in different soil types under varying soil moisture conditions at diverse communication distances and burial depths. In this article, an electromagnetic field analysis is presented as an enhanced monitoring approach for subsurface radio wave propagation and underground sensing applications in the field of digital agriculture. The signal strength results are shown for different distances and depths in the subsurface medium. The analysis shows that the lateral wave is the dominant wave in subsurface communications. Moreover, the shallow depths are more suitable for soil moisture sensing and long-range underground communications. The developed paradigm leads to advanced system design for real-time soil monitoring applications. Full article
(This article belongs to the Section Information and Communications Technology)
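The lossy-medium propagation constants behind such path-loss analyses can be sketched directly. The attenuation and phase constants below are the standard lossy-dielectric formulas, and the modified-Friis underground path-loss form (with its 6.4 dB constant) follows the wireless-underground-sensor-network literature; the soil permittivity values are illustrative assumptions, not the paper's measurements:

```python
import math

def propagation_constants(freq_hz, eps_real, eps_imag):
    """Attenuation alpha (Np/m) and phase constant beta (rad/m) in a lossy dielectric."""
    mu0 = 4e-7 * math.pi        # permeability of free space
    eps0 = 8.854e-12            # permittivity of free space
    w = 2 * math.pi * freq_hz
    loss_tan = eps_imag / eps_real
    common = mu0 * eps_real * eps0 / 2
    alpha = w * math.sqrt(common * (math.sqrt(1 + loss_tan ** 2) - 1))
    beta = w * math.sqrt(common * (math.sqrt(1 + loss_tan ** 2) + 1))
    return alpha, beta

def underground_path_loss_db(d_m, alpha, beta):
    """Modified-Friis underground path loss (dB), as used in the WUSN literature."""
    return 6.4 + 20 * math.log10(d_m) + 20 * math.log10(beta) + 8.69 * alpha * d_m

# Illustrative moist soil at 433 MHz: eps' = 9, eps'' = 1.5 (assumed values)
a, b = propagation_constants(433e6, 9.0, 1.5)
for d in (1.0, 3.0, 5.0):
    print(d, round(underground_path_loss_db(d, a, b), 1))
```

The exponential attenuation term 8.69·α·d is what makes burial depth and soil moisture dominate the link budget, in line with the abstract's observation that shallow depths favor long-range underground links.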
Open Access Article MFA-OSELM Algorithm for WiFi-Based Indoor Positioning System
Information 2019, 10(4), 146; https://doi.org/10.3390/info10040146
Received: 27 February 2019 / Revised: 14 April 2019 / Accepted: 16 April 2019 / Published: 18 April 2019
Viewed by 188 | PDF Full-text (5896 KB) | HTML Full-text | XML Full-text
Abstract
Indoor localization is a dynamic and exciting research area. WiFi has exhibited a tremendous capability for indoor localization since it is extensively used and easily accessible. Facilitating the use of WiFi for this purpose requires fingerprint formation and the implementation of a learning algorithm with the aim of using the fingerprint to determine locations. The most difficult aspect of techniques based on fingerprints is the effect of dynamic environmental changes on fingerprint authentication. With the aim of dealing with this problem, many experts have adopted transfer-learning methods, even though in WiFi indoor localization the dynamic quality of the change in the fingerprint has some cyclic factors that necessitate the use of previous knowledge in various situations. Thus, this paper presents the maximum feature adaptive online sequential extreme learning machine (MFA-OSELM) technique, which uses previous knowledge to handle the cyclic dynamic factors that are brought about by the issue of mobility, which is present in indoor environments. This research extends the earlier study of the feature adaptive online sequential extreme learning machine (FA-OSELM). The results of this research demonstrate that MFA-OSELM is superior to FA-OSELM given its capacity to preserve previous data when a person goes back to locations that he/she had visited earlier. Moreover, there is always a positive accuracy change when using MFA-OSELM, with the best change achieved being 27% (ranging from 8% to 27% and from 6% to 18% for the TampereU and UJIIndoorLoc datasets, respectively), which proves the efficiency of MFA-OSELM in restoring previous knowledge. Full article
(This article belongs to the Section Information and Communications Technology)
Open Access Article A Template Generation and Improvement Approach for Finger-Vein Recognition
Information 2019, 10(4), 145; https://doi.org/10.3390/info10040145
Received: 23 February 2019 / Revised: 12 April 2019 / Accepted: 15 April 2019 / Published: 18 April 2019
Viewed by 161 | PDF Full-text (1266 KB) | HTML Full-text | XML Full-text
Abstract
Finger-vein biometrics have been extensively investigated for person verification. One of the open issues in finger-vein verification is the lack of robustness against variations of vein patterns due to changes in physiological and imaging conditions during the acquisition process, which results in large intra-class variations among the finger-vein images captured from the same finger and may degrade the system performance. Despite recent advances in biometric template generation and improvement, current solutions mainly focus on extrinsic biometrics (e.g., fingerprints, face, signature) instead of intrinsic biometrics (e.g., vein). This paper proposes a weighted least squares regression-based model to generate and improve the enrollment template for finger-vein verification. Driven by the primary target of biometric template generation and improvement, i.e., verification error minimization, we assume that a good template has the smallest intra-class distance with respect to the images from the same class in a verification system. Based on this assumption, finger-vein template generation is converted into an optimization problem. To improve the performance, the weights associated with similarity are computed for template generation. Then, the enrollment template is generated by solving the optimization problem. Subsequently, a template improvement model is proposed to gradually update vein features in the template. To the best of our knowledge, this is the first work on template generation and improvement for finger-vein biometrics. The experimental results on two public finger-vein databases show that the proposed schemes minimize the intra-class variations among samples and significantly improve finger-vein recognition accuracy. Full article
(This article belongs to the Section Information Applications)
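For squared Euclidean distance, the stated objective (minimizing the weighted intra-class distance) has a closed-form solution: the template is the weighted mean of the enrollment samples. The sketch below uses similarity-derived weights on invented feature vectors, as a stand-in for the paper's actual regression model:

```python
def weighted_template(samples, weights):
    """argmin_t sum_i w_i * ||t - x_i||^2  ==  the weighted mean of the samples."""
    total = sum(weights)
    dim = len(samples[0])
    return [sum(w * x[j] for w, x in zip(weights, samples)) / total
            for j in range(dim)]

def similarity_weights(samples):
    """Weight each sample by the inverse of its average distance to the others."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    raw = [1.0 / (1e-9 + sum(dist(x, y) for y in samples) / (len(samples) - 1))
           for x in samples]
    s = sum(raw)
    return [r / s for r in raw]

# Invented enrollment features; the last sample is a capture-condition outlier
enroll = [[1.0, 2.0], [1.2, 1.8], [0.9, 2.1], [5.0, 5.0]]
w = similarity_weights(enroll)
print(weighted_template(enroll, w))
```

Because the outlier receives a low similarity weight, the resulting template sits near the consistent cluster rather than at the plain mean, which is the intuition behind weighting by intra-class similarity.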
Open Access Article Double Deep Autoencoder for Heterogeneous Distributed Clustering
Information 2019, 10(4), 144; https://doi.org/10.3390/info10040144
Received: 4 March 2019 / Revised: 15 April 2019 / Accepted: 15 April 2019 / Published: 17 April 2019
Viewed by 202 | PDF Full-text (1512 KB) | HTML Full-text | XML Full-text
Abstract
Given the issues relating to big data and privacy-preserving challenges, distributed data mining (DDM) has received much attention recently. Here, we focus on the clustering problem of distributed environments. Several distributed clustering algorithms have been proposed to solve this problem; however, previous studies have mainly considered homogeneous data. In this paper, we develop a double deep autoencoder structure for clustering in distributed and heterogeneous datasets. Three datasets are used to demonstrate the proposed algorithms and show their usefulness according to the consistent accuracy index. Full article
(This article belongs to the Section Artificial Intelligence)
Open Access Article User Education in Automated Driving: Owner's Manual and Interactive Tutorial Support Mental Model Formation and Human-Automation Interaction
Information 2019, 10(4), 143; https://doi.org/10.3390/info10040143
Received: 22 March 2019 / Revised: 7 April 2019 / Accepted: 15 April 2019 / Published: 17 April 2019
Viewed by 243 | PDF Full-text (2422 KB) | HTML Full-text | XML Full-text
Abstract
Automated driving systems (ADS) and a combination of these with advanced driver assistance systems (ADAS) will soon be available to a large consumer population. Apart from testing automated driving features and human–machine interfaces (HMI), the development and evaluation of training for interacting with driving automation has been largely neglected. The present work outlines the conceptual development of two possible approaches of user education which are the owner’s manual and an interactive tutorial. These approaches are investigated by comparing them to a baseline consisting of generic information about the system function. Using a between-subjects design, N = 24 participants complete one training prior to interacting with the ADS HMI in a driving simulator. Results show that both the owner’s manual and an interactive tutorial led to an increased understanding of driving automation systems as well as an increased interaction performance. This work contributes to method development for the evaluation of ADS by proposing two alternative approaches of user education and their implications for both application in realistic settings and HMI testing. Full article
(This article belongs to the Special Issue Automotive User Interfaces and Interactions in Automated Driving)
Open Access Article A Synergetic Theory of Information
Information 2019, 10(4), 142; https://doi.org/10.3390/info10040142
Received: 25 March 2019 / Revised: 30 March 2019 / Accepted: 11 April 2019 / Published: 16 April 2019
Viewed by 255 | PDF Full-text (1925 KB) | HTML Full-text | XML Full-text
Abstract
A new approach is presented to defining the amount of information, in which information is understood as the data about a finite set as a whole, whereas the average length of an integrative code of elements serves as a measure of information. In the framework of this approach, the formula for the syntropy of a reflection was obtained for the first time, that is, the information which two intersecting finite sets reflect (reproduce) about each other. Features of a reflection of discrete systems through a set of their parts are considered, and it is shown that reproducible information about the system (the additive syntropy of reflection) and non-reproducible information (the entropy of reflection) are, respectively, measures of structural order and chaos. On this basis, a general classification of discrete systems is given by the ratio of order to chaos. Three information laws have been established: the law of conservation of the sum of chaos and order; the information law of reflection; and the law of conservation and transformation of information. An assessment of the structural organization and the level of development of discrete systems is presented. It is shown that various measures of information are structural characteristics of integrative codes of elements of discrete systems. A conclusion is made that, from the information-genetic position, the synergetic approach to the definition of the quantity of information is primary in relation to the approaches of Hartley and Shannon. Full article
(This article belongs to the Section Information Theory and Methodology)
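For reference, the two classical measures the abstract compares against can be computed in a few lines; this illustrates the Hartley and Shannon measures only, not the paper's syntropy formulas:

```python
import math

def hartley(n):
    """Hartley measure: log2 of the number of elements of a finite set."""
    return math.log2(n)

def shannon(probs):
    """Shannon entropy of a probability distribution over the set's parts."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

n = 8
uniform = [1 / n] * n
skewed = [0.5, 0.2, 0.1, 0.1, 0.05, 0.05, 0.0, 0.0]
print(hartley(n))        # 3.0
print(shannon(uniform))  # 3.0 -- equals the Hartley measure for a uniform split
print(round(shannon(skewed), 3))
```

Shannon entropy attains the Hartley value only for the uniform distribution and drops as structure (order) appears, which is the contrast the paper's order/chaos decomposition builds on.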
Open Access Review A Review of Polyglot Persistence in the Big Data World
Information 2019, 10(4), 141; https://doi.org/10.3390/info10040141
Received: 10 March 2019 / Revised: 31 March 2019 / Accepted: 4 April 2019 / Published: 16 April 2019
Viewed by 210 | PDF Full-text (2770 KB) | HTML Full-text | XML Full-text
Abstract
The inevitability of the relationship between big data and distributed systems is indicated by the fact that data characteristics cannot be easily handled by a standalone centric approach. Among the different concepts of distributed systems, the CAP theorem (Consistency, Availability, and Partition tolerance) points out the prominent use of the eventual consistency property in distributed systems. This has prompted the need for other, different types of databases beyond SQL (Structured Query Language) that have properties of scalability and availability. NoSQL (Not-Only SQL) databases, mostly with BASE properties (Basically Available, Soft state, and Eventual consistency), are gaining ground in the big data era, while SQL databases are left trying to keep up with this paradigm shift. However, none of these databases are perfect, as there is no model that fits all requirements of data-intensive systems. Polyglot persistence, i.e., using different databases as appropriate for the different components within a single system, is becoming prevalent in data-intensive big data systems, as they are distributed and parallel by nature. This paper reflects the characteristics of these databases from a conceptual point of view and describes a potential solution for a distributed system—the adoption of polyglot persistence in data-intensive systems in the big data era. Full article
(This article belongs to the Section Review)
Open Access Article Quadratic Frequency Modulation Signals Parameter Estimation Based on Product High Order Ambiguity Function-Modified Integrated Cubic Phase Function
Information 2019, 10(4), 140; https://doi.org/10.3390/info10040140
Received: 20 February 2019 / Revised: 9 April 2019 / Accepted: 12 April 2019 / Published: 16 April 2019
Viewed by 192 | PDF Full-text (301 KB) | HTML Full-text | XML Full-text
Abstract
In inverse synthetic aperture radar (ISAR) imaging systems for targets with complex motion, such as ships fluctuating with oceanic waves and highly maneuvering airplanes, multi-component quadratic frequency modulation (QFM) signals are a more suitable model for azimuth echo signals. The quadratic chirp rate (QCR) and chirp rate (CR) cause ISAR imaging defocus. Thus, it is important to estimate the QCR and CR of multi-component QFM signals in an ISAR imaging system. The conventional QFM signal parameter estimation algorithms suffer from the cross-term problem. To solve this problem, this paper proposes the product high order ambiguity function-modified integrated cubic phase function (PHAF-MICPF). The PHAF-MICPF employs a phase differentiation operation with multi-scale factors and the modified coherently integrated cubic phase function (MICPF) to transform the multi-component QFM signals into time-quadratic chirp rate (T-QCR) domains. The cross-term suppression ability of the PHAF-MICPF is improved by multiplying different T-QCR domains that are related to different scale factors. Moreover, the multiplication operation improves the anti-noise performance and solves the identifiability problem. Compared with the high order ambiguity function-integrated cubic phase function (HAF-ICPF), the simulation results verify that the PHAF-MICPF acquires better cross-term suppression ability and better anti-noise performance, and solves the identifiability problem. Full article
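The core phase-differentiation idea can be illustrated on a single noise-free QFM component; this omits the multi-scale products and cross-term suppression that PHAF-MICPF adds, and all signal parameters below are invented. Successive differences of the sampled phase reduce its cubic polynomial one degree at a time, so the third difference isolates the QCR and the second difference the CR:

```python
import cmath
import math

fs, n_samp = 1000.0, 200
dt = 1.0 / fs
f0, cr, qcr = 10.0, 50.0, 100.0  # start freq (Hz), chirp rate (Hz/s), QCR (Hz/s^2)

def phase(t):
    """Cubic phase of a QFM signal: 2*pi*(f0*t + cr*t^2/2 + qcr*t^3/6)."""
    return 2 * math.pi * (f0 * t + cr * t ** 2 / 2 + qcr * t ** 3 / 6)

s = [cmath.exp(1j * phase(n * dt)) for n in range(n_samp)]

# First phase difference via conjugate products; safe here since |delta phi| < pi
d1 = [cmath.phase(s[n + 1] * s[n].conjugate()) for n in range(n_samp - 1)]
d2 = [d1[n + 1] - d1[n] for n in range(len(d1) - 1)]
d3 = [d2[n + 1] - d2[n] for n in range(len(d2) - 1)]

d3_mean = sum(d3) / len(d3)
qcr_est = d3_mean / (2 * math.pi * dt ** 3)            # third difference -> QCR
cr_est = (d2[0] - d3_mean) / (2 * math.pi * dt ** 2)   # second difference at n=0 -> CR
print(round(cr_est, 2), round(qcr_est, 2))  # 50.0 100.0
```

For a polynomial phase the differences are exact, so both estimates recover the true parameters up to floating-point error; the paper's contribution lies in making this robust to multiple components, cross-terms, and noise.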
Open Access Article Learning Subword Embedding to Improve Uyghur Named-Entity Recognition
Information 2019, 10(4), 139; https://doi.org/10.3390/info10040139
Received: 27 March 2019 / Revised: 9 April 2019 / Accepted: 11 April 2019 / Published: 15 April 2019
Viewed by 216 | PDF Full-text (875 KB) | HTML Full-text | XML Full-text
Abstract
Uyghur is a morphologically rich and typically agglutinative language, and morphological segmentation affects the performance of Uyghur named-entity recognition (NER). Common Uyghur NER systems use the word sequence as input and rely heavily on feature engineering. However, semantic information cannot be fully learned and will easily suffer from data sparsity arising from morphological processes when only the word sequence is considered. To solve this problem, we provide a neural network architecture employing subword embedding with character embedding based on a bidirectional long short-term memory network with a conditional random field layer. Our experiments show that subword embedding can effectively enhance the performance of Uyghur NER, and the proposed method outperforms the word sequence-based model. Full article
(This article belongs to the Section Artificial Intelligence)
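Subword features of the fastText character n-gram kind can be sketched as follows; the hashing trick below stands in for learned embedding lookup, and the dimensions are illustrative, not the paper's configuration:

```python
def char_ngrams(word, n=3):
    """fastText-style subwords: pad with boundary markers, slide an n-window."""
    padded = "<" + word + ">"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def subword_vector(word, dim=16, n=3):
    """Sum of hashed subword features -- a stand-in for summing learned embeddings."""
    vec = [0.0] * dim
    for g in char_ngrams(word, n):
        vec[hash(g) % dim] += 1.0
    return vec

print(char_ngrams("ner"))  # ['<ne', 'ner', 'er>']
```

Because morphological variants of a word share most of their character n-grams, their subword representations stay close, which is what mitigates the data sparsity that whole-word inputs suffer from in agglutinative languages.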
Open Access Article Dynamic Evolution Model of a Collaborative Innovation Network from the Resource Perspective and an Application Considering Different Government Behaviors
Information 2019, 10(4), 138; https://doi.org/10.3390/info10040138
Received: 22 February 2019 / Revised: 6 April 2019 / Accepted: 11 April 2019 / Published: 12 April 2019
Viewed by 264 | PDF Full-text (2912 KB) | HTML Full-text | XML Full-text
Abstract
The evolution of a collaborative innovation network depends on the interrelationships among the innovation subjects. Every single small change affects the network topology, which leads to different evolution results. A logical relationship exists between network evolution and innovative behaviors. An accurate understanding of the characteristics of the network structure can help the innovative subjects to adopt appropriate innovative behaviors. This paper summarizes three characteristics of collaborative innovation networks: knowledge transfer, policy environment, and periodic cooperation. It then establishes a dynamic evolution model for a resource-priority connection mechanism based on innovation resource theory. The network subjects do not randomly test all potential partners, but show a strong preference for those with richer innovation resources. The evolution process of a collaborative innovation network is simulated with three different government behaviors as experimental objects. The evolution results show that the government should adopt the policy of supporting the enterprises that recently entered the network, which can maintain the innovation vitality of the network and benefit the innovation output. The results of this study also provide a reference for decision-making by the government and enterprises. Full article
(This article belongs to the Special Issue Computational Social Science)
Open AccessArticle Data Consistency Theory and Case Study for Scientific Big Data
Information 2019, 10(4), 137; https://doi.org/10.3390/info10040137
Received: 21 March 2019 / Revised: 3 April 2019 / Accepted: 8 April 2019 / Published: 12 April 2019
Viewed by 262 | PDF Full-text (623 KB) | HTML Full-text | XML Full-text
Abstract
Big data techniques are a series of novel technologies for dealing with large amounts of data from various sources. Unfortunately, it is inevitable that data from different sources conflict with each other in format, semantics, and value. To solve the problem of conflicts, this paper proposes a data consistency theory for scientific big data, including its basic concepts, properties, and a quantitative evaluation method. Data consistency can be divided into grades of complete consistency, strong consistency, weak consistency, and conditional consistency, according to the consistency degree and application demand. A case study is carried out on material creep testing data. The analysis results show that the theory can solve the problem of conflicts in scientific big data. Full article
(This article belongs to the Special Issue Big Data Analytics and Computational Intelligence)
Open AccessArticle Improved Massive MIMO RZF Precoding Algorithm Based on Truncated Kapteyn Series Expansion
Information 2019, 10(4), 136; https://doi.org/10.3390/info10040136
Received: 10 March 2019 / Revised: 28 March 2019 / Accepted: 4 April 2019 / Published: 11 April 2019
Viewed by 223 | PDF Full-text (322 KB) | HTML Full-text | XML Full-text
Abstract
In order to reduce the computational complexity of the matrix inverse in the regularized zero-forcing (RZF) precoding algorithm, this paper approximates the inverse matrix with a truncated Kapteyn series expansion, yielding a corresponding low-complexity RZF precoding algorithm. In addition, the expansion coefficients of the truncated Kapteyn series are optimized, further improving the convergence speed of the precoding algorithm while keeping the same computational complexity as traditional RZF precoding. Moreover, the computational complexity and the downlink performance, in terms of the average achievable rate, of the proposed RZF precoding algorithm and of other RZF precoding algorithms based on typical truncated series expansions are analyzed, and further evaluated by numerical simulations in a large-scale single-cell multiple-input-multiple-output (MIMO) system. Simulation results show that the proposed improved RZF precoding algorithm based on the truncated Kapteyn series expansion outperforms the compared algorithms while keeping computational complexity low. Full article
(This article belongs to the Section Information and Communications Technology)
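The core idea, replacing the exact matrix inverse in RZF precoding with a truncated series, can be sketched as follows. This is a minimal illustration using a plain power (Neumann-type) series with a simple convergence scale, not the paper's Kapteyn expansion with optimized coefficients:

```python
import numpy as np

def rzf_exact(H, alpha):
    """Exact RZF precoder P = H^H (H H^H + alpha I)^{-1} (up to power normalization)."""
    K = H.shape[0]
    return H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(K))

def rzf_series(H, alpha, L=8):
    """RZF precoder with the inverse replaced by a truncated matrix power series.

    Illustrates the general principle only: A^{-1} = (1/theta) * sum_{l>=0} (I - A/theta)^l,
    truncated after L terms. theta is a simple convergence-ensuring scale
    (an assumption of this sketch; the paper optimizes the expansion coefficients).
    """
    K = H.shape[0]
    A = H @ H.conj().T + alpha * np.eye(K)
    theta = np.linalg.norm(A, 2)          # spectral norm keeps the series convergent
    X = np.eye(K) - A / theta
    S = np.eye(K, dtype=complex)          # running partial sum of the series
    term = np.eye(K, dtype=complex)
    for _ in range(L):
        term = term @ X
        S = S + term
    return H.conj().T @ (S / theta)
```

The truncated version replaces one K x K inversion with L matrix products, and the approximation error shrinks as more terms are kept.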
Open AccessArticle End to End Delay and Energy Consumption in a Two Tier Cluster Hierarchical Wireless Sensor Networks
Information 2019, 10(4), 135; https://doi.org/10.3390/info10040135
Received: 22 January 2019 / Revised: 7 March 2019 / Accepted: 1 April 2019 / Published: 10 April 2019
Viewed by 269 | PDF Full-text (454 KB) | HTML Full-text | XML Full-text
Abstract
In this work, a circular Wireless Sensor Network (WSN) with a planar structure, a uniform distribution of sensors, and a two-level hierarchical topology is considered. At the lower level, a cluster configuration is adopted in which the sensed information is transferred from sensor nodes to a cluster head (CH) using a random access protocol (RAP). At the CH level, CHs transfer information hop-by-hop, ring-by-ring, toward the sink located at the center of the sensed area, using TDMA as the MAC protocol. A Markovian model for evaluating the end-to-end (E2E) transfer delay is formulated. In addition to other results, such as the well-known energy hole problem, the model reveals that for a given radial distance between a CH and the sink, the transfer delay depends on the angular orientation between them. For instance, when two rings of CHs are deployed in the WSN area, the E2E delay of data packets generated at ring 2 on the "west" side of the sink is 20% higher than the corresponding E2E delay of data packets generated at ring 2 on the "east" side of the sink. This asymmetry can be alleviated by rotating the allocation of time slots to CHs in the TDMA communication from time to time. The energy consumption is also evaluated, and the numerical results show that for a WSN with a small coverage area, say a radius of 100 m, the energy saving is more significant when a small number of rings are deployed, perhaps none (a single cluster in which the sink acts as the CH). Conversely, topologies with a large number of rings, say 4 or 5, offer better energy performance when the WSN covers a large area, say radial distances greater than 400 m. Full article
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2018))
Open AccessArticle Predict Electric Power Demand with Extended Goal Graph and Heterogeneous Mixture Modeling
Information 2019, 10(4), 134; https://doi.org/10.3390/info10040134
Received: 29 January 2019 / Revised: 4 April 2019 / Accepted: 8 April 2019 / Published: 10 April 2019
Viewed by 266 | PDF Full-text (3619 KB) | HTML Full-text | XML Full-text
Abstract
In this study, methods for predicting energy demand from hourly consumption data are established, with the aim of realizing an energy management system for buildings. The methods consist of an energy prediction algorithm that automatically separates the dataset into partitions (gates) and creates a linear regression model (local expert) for each partition, based on heterogeneous mixture modeling, and an extended goal graph that extracts candidate variables both for data partitioning and for linear regression. These methods were implemented as tools and applied to create an energy prediction model from two years of hourly consumption data for a building. We validated the methods by comparing their accuracy with that of different machine learning algorithms applied to the same datasets. Full article
(This article belongs to the Special Issue MoDAT: Designing the Market of Data)
Open AccessArticle Assessing Lisbon Trees’ Carbon Storage Quantity, Density, and Value Using Open Data and Allometric Equations
Information 2019, 10(4), 133; https://doi.org/10.3390/info10040133
Received: 22 February 2019 / Revised: 2 April 2019 / Accepted: 8 April 2019 / Published: 10 April 2019
Viewed by 224 | PDF Full-text (3288 KB) | HTML Full-text | XML Full-text
Abstract
The urban population has grown exponentially in recent years, leading to an increase in CO2 emissions and consequently contributing on a large scale to climate change. Urban trees are fundamental to mitigating CO2 emissions, as they incorporate carbon into their biomass, so it becomes necessary to understand and measure urban tree carbon storage. This paper studies the potential of open data to measure the quantity, density, and value of the carbon stored by the seven most represented urban tree species in the city of Lisbon. To compute carbon storage, these species were selected from a database acquired from an open data portal of the city of Lisbon. Through allometric equations, it was possible to compute the trees' biomass and calculate carbon storage quantity, density, and value. The results showed that Celtis australis is the species that contributes the most to carbon storage. The central parishes of Lisbon present higher carbon storage density than the border parishes, despite the former presenting low-to-medium values of carbon storage quantity and value. Trees located in streets present higher carbon storage values than trees located in schools and green areas. Finally, the potential use of this information to build a decision-support dashboard for planning green infrastructure was demonstrated. Full article
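The per-tree computation described in the abstract can be sketched as follows; the allometric coefficients and carbon fraction below are illustrative placeholders, not the species-specific values used in the paper:

```python
def tree_carbon_kg(dbh_cm, a=0.1, b=2.4, carbon_fraction=0.5):
    """Carbon stored by one tree, estimated from trunk diameter at breast height (DBH, cm).

    Generic allometric form: biomass = a * DBH**b. The coefficients a and b are
    species-specific; the defaults here are illustrative assumptions only.
    """
    biomass_kg = a * dbh_cm ** b          # estimated above-ground dry biomass
    return biomass_kg * carbon_fraction   # roughly half of dry biomass is carbon
```

Summing this value over all inventoried trees of a species gives the storage quantity, dividing by parish area gives the density, and multiplying by a carbon price gives the monetary value.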
Open AccessArticle Game Analysis of Access Control Based on User Behavior Trust
Information 2019, 10(4), 132; https://doi.org/10.3390/info10040132
Received: 13 February 2019 / Revised: 2 April 2019 / Accepted: 3 April 2019 / Published: 9 April 2019
Viewed by 254 | PDF Full-text (1402 KB) | HTML Full-text | XML Full-text
Abstract
Due to the dynamics and uncertainty of the current network environment, access control is one of the most important factors in guaranteeing network information security, and how to construct a scientific and accurate access control model is a current research focus. In actual access control mechanisms, users with high trust values obtain better benefits, but their losses are also greater once cheating access is adopted. A general access control game model that reflects both trust and risk is established in this paper. First, we construct an access control game model with user behavior trust between the user and the service provider, in which the benefits and losses are quantified by using adaptive regulatory factors and the user's trust level, enhancing the rationality of policy-making. Meanwhile, we present two kinds of solutions to the prisoner's dilemma in the traditional access control game model without user behavior trust. Then, due to the vulnerability of trust, the user's trust value is updated according to the interaction in the previous stage, ensuring that the update satisfies the "slow rising-fast falling" principle. Theoretical analysis and a simulation experiment both show that this model performs better than a traditional game model and can guarantee scientific decision-making in the access control mechanism. Full article
(This article belongs to the Special Issue The Security and Digital Forensics of Cloud Computing)
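The "slow rising-fast falling" trust update described above can be sketched as follows; the reward and penalty parameters are illustrative assumptions, not the paper's adaptive regulatory factors:

```python
def update_trust(trust, honest, reward=0.05, penalty=0.4):
    """One-step trust update following the 'slow rising-fast falling' principle:
    honest interactions raise trust by a small, saturating increment, while a
    single cheating access cuts it sharply.
    """
    if honest:
        return min(1.0, trust + reward * (1.0 - trust))  # slow, bounded rise
    return max(0.0, trust * (1.0 - penalty))             # fast multiplicative fall
```

Because the penalty is multiplicative while the reward shrinks as trust approaches 1, a user needs many honest interactions to recover the trust lost by one cheating access.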
Open AccessArticle Es-Tacotron2: Multi-Task Tacotron 2 with Pre-Trained Estimated Network for Reducing the Over-Smoothness Problem
Information 2019, 10(4), 131; https://doi.org/10.3390/info10040131
Received: 27 January 2019 / Revised: 6 April 2019 / Accepted: 8 April 2019 / Published: 9 April 2019
Viewed by 263 | PDF Full-text (12431 KB) | HTML Full-text | XML Full-text
Abstract
Text-to-speech synthesis is a computational technique for producing synthetic, human-like speech. In recent years, speech synthesis techniques have developed and have been employed in many applications, such as automatic translation applications and car navigation systems. End-to-end text-to-speech synthesis has gained considerable research interest because, compared to traditional models, the end-to-end model is easier to design and more robust. Tacotron 2 is an integrated state-of-the-art end-to-end speech synthesis system that can directly predict close-to-natural human speech from raw text. However, there remains a gap between synthesized speech and natural speech. Suffering from an over-smoothness problem, Tacotron 2 produces "averaged" speech, making the synthesized speech sound unnatural and inflexible. In this work, we first propose an estimated network (Es-Network), which captures general features from a raw mel spectrogram in an unsupervised manner. Then, we design Es-Tacotron2 by employing the Es-Network to calculate the estimated mel spectrogram residual and setting it as an additional prediction task of Tacotron 2, allowing the model to focus more on predicting the individual features of the mel spectrogram. Experiments show that, compared to the original Tacotron 2 model, Es-Tacotron2 can produce more variable decoder output and synthesize more natural and expressive speech. Full article
(This article belongs to the Section Artificial Intelligence)
Open AccessArticle Combined Recommendation Algorithm Based on Improved Similarity and Forgetting Curve
Information 2019, 10(4), 130; https://doi.org/10.3390/info10040130
Received: 17 December 2018 / Revised: 8 March 2019 / Accepted: 3 April 2019 / Published: 8 April 2019
Viewed by 283 | PDF Full-text (7116 KB) | HTML Full-text | XML Full-text
Abstract
Recommendation algorithms in e-commerce systems face the problems of high sparsity in users' rating data and shifts in user interest, which greatly affect recommendation performance. Hence, a combined recommendation algorithm based on improved similarity and a forgetting curve is proposed. Firstly, the Pearson similarity is improved by a weighting factor to enhance its quality on highly sparse data. Secondly, the Ebbinghaus forgetting curve is introduced to track a user's interest shift: user ratings are weighted according to the residual memory of the forgetting function, so that users' interest changes over time are tracked through their ratings, which increases both the accuracy of the recommendation algorithm and users' satisfaction. The two algorithms are then combined. Finally, the MovieLens dataset is employed to evaluate the different algorithms; the results show that the proposed algorithm decreases the mean absolute error (MAE) by 12.2% and the average coverage by 1.41%, and increases the average precision by 10.52%. Full article
(This article belongs to the Special Issue Modern Recommender Systems: Approaches, Challenges and Applications)
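The two ingredients of the abstract above, a sparsity-aware similarity weighting and an Ebbinghaus-style forgetting weight on ratings, can be sketched as follows; the memory strength and co-rating threshold are assumed parameters, and the paper's exact formulas may differ:

```python
import math

def residual_memory(days_since, strength=30.0):
    """Ebbinghaus-style retention R(t) = exp(-t / S); the memory strength S
    (in days) is an assumed parameter, not a value fitted in the paper."""
    return math.exp(-days_since / strength)

def weighted_rating(rating, days_since, strength=30.0):
    """Discount old ratings so that recent interests dominate the similarity."""
    return rating * residual_memory(days_since, strength)

def shrunk_pearson(pearson, n_common, threshold=50):
    """Shrink Pearson similarity when two users share few co-rated items, a
    common remedy for sparse rating data (the paper's weighting factor may differ)."""
    return pearson * min(n_common, threshold) / threshold
```

A rating made today keeps its full weight, while one made a "memory strength" ago is discounted to about 37% of its value; similarities computed from only a handful of co-rated items are shrunk toward zero.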
Open AccessArticle Deep Image Similarity Measurement Based on the Improved Triplet Network with Spatial Pyramid Pooling
Information 2019, 10(4), 129; https://doi.org/10.3390/info10040129
Received: 15 March 2019 / Revised: 2 April 2019 / Accepted: 3 April 2019 / Published: 8 April 2019
Viewed by 293 | PDF Full-text (4780 KB) | HTML Full-text | XML Full-text
Abstract
Image similarity measurement is a fundamental problem in the field of computer vision. It is widely used in image classification, object detection, image retrieval, and other fields, mostly through Siamese or triplet networks. These networks consist of two or three identical branches of a convolutional neural network (CNN) that share their weights to obtain high-level image feature representations, so that similar images are mapped close to each other in the feature space and dissimilar image pairs are mapped far from each other. In particular, the triplet network is known as the state-of-the-art method for image similarity measurement. However, a basic CNN can only handle fixed-size images; if a fixed-size image is obtained by cutting or scaling, image information is lost and recognition accuracy is reduced. To solve this problem, this paper proposes the triplet spatial pyramid pooling network (TSPP-Net), which combines the triplet convolutional neural network with spatial pyramid pooling. Additionally, we propose an improved triplet loss function, so that the network model can realize two distance-learning steps from only three input samples at a time. Theoretical analysis and experiments prove that the TSPP-Net model and the improved triplet loss function can improve the generalization ability and the accuracy of the image similarity measurement algorithm. Full article
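The "two distance-learning steps from three samples" idea can be sketched as follows; this is an illustrative per-triplet loss with Euclidean distances enforcing two margin constraints, and the paper's exact formulation may differ:

```python
import numpy as np

def improved_triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet loss enforcing two distance relations per triplet:
    d(a, p) < d(a, n) and d(a, p) < d(p, n), so one triplet of embeddings
    yields two learning signals instead of one.
    """
    d_ap = np.linalg.norm(anchor - positive)    # anchor-positive distance
    d_an = np.linalg.norm(anchor - negative)    # anchor-negative distance
    d_pn = np.linalg.norm(positive - negative)  # positive-negative distance
    return max(0.0, d_ap - d_an + margin) + max(0.0, d_ap - d_pn + margin)
```

The loss is zero once the negative sample is at least a margin farther from both the anchor and the positive than they are from each other.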
Open AccessArticle A Privacy-Preserving Protocol for Utility-Based Routing in DTNs
Information 2019, 10(4), 128; https://doi.org/10.3390/info10040128
Received: 12 February 2019 / Revised: 19 March 2019 / Accepted: 31 March 2019 / Published: 8 April 2019
Viewed by 254 | PDF Full-text (2762 KB) | HTML Full-text | XML Full-text
Abstract
In the utility-based routing protocols of delay-tolerant networks (DTNs), nodes calculate a routing utility value from encounter time, frequency, and so on, and then forward messages according to this utility. The private information of encounter time and frequency is leaked when nodes communicate with their real IDs. Node ID anonymity can protect this private information, but it also prevents nodes from collecting the encounter information needed to calculate the real utility value. To solve this problem, this paper proposes a privacy-preserving protocol for utility-based routing (PPUR) in DTNs. When nodes encounter each other in PPUR, they anonymously generate and collect encounter record information using pseudo-IDs. The nodes then forward this information to a trusted authority (TA), which calculates the routing utility value and returns it to the nodes, so that nodes can protect their private information and obtain the real utility value at the same time. PPUR also protects the confidentiality and integrity of messages through hashing and digital signatures. The experimental results show that PPUR can not only protect nodes' private information, but also effectively forward messages using real utility values. Full article
(This article belongs to the Section Information and Communications Technology)
Open AccessArticle Learning Improved Semantic Representations with Tree-Structured LSTM for Hashtag Recommendation: An Experimental Study
Information 2019, 10(4), 127; https://doi.org/10.3390/info10040127
Received: 20 January 2019 / Revised: 24 March 2019 / Accepted: 3 April 2019 / Published: 6 April 2019
Viewed by 359 | PDF Full-text (1073 KB) | HTML Full-text | XML Full-text
Abstract
A hashtag is a type of metadata tag used on social networks, such as Twitter and other microblogging services. Hashtags indicate the core idea of a microblog post and can help people to search for specific themes or content. However, not everyone tags their posts themselves. Therefore, the task of hashtag recommendation has received significant attention in recent years. To solve the task, a key problem is how to effectively represent the text of a microblog post in a way that its representation can be utilized for hashtag recommendation. We study two major kinds of text representation methods for hashtag recommendation: shallow textual features and deep textual features learned by deep neural models. Most existing work tries to use deep neural networks to learn microblog post representations based on the semantic combination of words. In this paper, we propose to adopt Tree-LSTM to improve the representation by combining the syntactic structure and the semantic information of words. We conduct extensive experiments on two real-world datasets. The experimental results show that deep neural models generally perform better than traditional methods. Specifically, Tree-LSTM achieves significantly better results on hashtag recommendation than standard LSTM, with a 30% increase in F1-score, which indicates that it is promising to utilize syntactic structure in the task of hashtag recommendation. Full article
Open AccessArticle The Design and Application of Game Rewards in Youth Addiction Care
Information 2019, 10(4), 126; https://doi.org/10.3390/info10040126
Received: 17 January 2019 / Revised: 22 March 2019 / Accepted: 2 April 2019 / Published: 6 April 2019
Viewed by 326 | PDF Full-text (695 KB) | HTML Full-text | XML Full-text
Abstract
Different types of rewards are applied in persuasive games to encourage play persistence of their users and to facilitate the achievement of desired real-world goals, such as behavioral change. Persuasive games have successfully been applied in mental healthcare and may hold potential for different types of patients. However, we question to what extent game-based rewards are suitable in a persuasive game design for a substance dependence therapy context, as people with substance-related disorders show decreased sensitivity to natural rewards, which may result in different responses to commonly applied game rewards compared to people without substance use disorders. In a within-subject experiment with 20 substance-dependent and 25 non-dependent participants, we examined whether play persistence and reward evaluation differed between the two groups. Results showed that, in contrast to our expectations, substance-dependent participants were more motivated by the reward types than non-dependent participants. Participants evaluated monetary rewards more positively than playing for virtual points or social rewards. We conclude this paper with design implications for game-based rewards in persuasive games for mental healthcare. Full article
(This article belongs to the Special Issue Serious Games and Applications for Health (SeGAH 2018))
Open AccessArticle An Improved Threshold-Sensitive Stable Election Routing Energy Protocol for Heterogeneous Wireless Sensor Networks
Information 2019, 10(4), 125; https://doi.org/10.3390/info10040125
Received: 16 February 2019 / Revised: 22 March 2019 / Accepted: 1 April 2019 / Published: 5 April 2019
Viewed by 362 | PDF Full-text (1204 KB) | HTML Full-text | XML Full-text
Abstract
In the Threshold-Sensitive Stable Election Protocol, sensors are randomly deployed in the region without considering the balanced energy consumption of nodes. If a node that has been selected as a cluster head is located far away from the base station, its early death will reduce the efficiency of the network. This paper proposes an improved energy-efficient routing protocol named the Improved Threshold-Sensitive Stable Election Protocol (ITSEP) for heterogeneous wireless sensor networks. Firstly, we use a node state transformation mechanism to control the number of cluster heads in high-density node areas. Secondly, the proposed protocol improves the threshold formula by considering the distance from the node to the base station, the number of neighbor nodes, the node's residual energy, and the average distance between nodes. In addition, an optimal route with minimum energy consumption is selected for cluster heads during data transmission. Simulation results show that this algorithm achieves a longer network lifetime than the stable election protocol, the modified stable election protocol, and the threshold-sensitive stable election protocol for heterogeneous wireless sensor networks. Full article
(This article belongs to the Section Information and Communications Technology)
Open AccessArticle A High-Resolution Joint Angle-Doppler Estimation Sub-Nyquist Radar Approach Based on Matrix Completion
Information 2019, 10(4), 124; https://doi.org/10.3390/info10040124
Received: 8 January 2019 / Revised: 29 March 2019 / Accepted: 2 April 2019 / Published: 4 April 2019
Viewed by 301 | PDF Full-text (2162 KB) | HTML Full-text | XML Full-text
Abstract
In order to reduce power consumption and save storage capacity, we propose a high-resolution sub-Nyquist radar approach based on matrix completion (MC), termed single-channel sub-Nyquist-MC radar. While providing high-resolution joint angle-Doppler estimation, the proposed approach minimizes the number of samples in all three dimensions: the range dimension, the pulse (temporal) dimension, and the spatial dimension. In the range dimension, we use a single-channel analog-to-information converter (AIC) to reduce the number of range samples to one; in the spatial and temporal dimensions, we employ a bank of random switch units to regulate the AICs, which greatly reduces the number of spatial-temporal samples. Under the proposed sampling scheme, the samples forwarded to the digital processing center by the M receive nodes over N pulses form only a subset of the full matrix of size M by N. Under certain conditions, and with knowledge of the sampling scheme, the full matrix can be perfectly recovered using MC techniques. Based on the recovered full matrix, this paper addresses the problem of high-resolution joint angle-Doppler estimation by employing compressed sensing (CS) techniques. The properties and performance of the proposed approach are demonstrated via simulations. Full article
(This article belongs to the Section Information Processes)
Information EISSN 2078-2489 Published by MDPI AG, Basel, Switzerland RSS E-Mail Table of Contents Alert