
Table of Contents

Information, Volume 9, Issue 9 (September 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Displaying articles 1-34
Open Access Article Visual Saliency Model-Based Image Watermarking with Laplacian Distribution
Information 2018, 9(9), 239; https://doi.org/10.3390/info9090239
Received: 23 August 2018 / Revised: 10 September 2018 / Accepted: 15 September 2018 / Published: 19 September 2018
PDF Full-text (1507 KB) | HTML Full-text | XML Full-text
Abstract
To improve the invisibility and robustness of multiplicative watermarking algorithms, an adaptive image watermarking algorithm is proposed based on a visual saliency model and the Laplacian distribution in the wavelet domain. The algorithm designs an adaptive multiplicative watermark strength factor by utilizing the energy aggregation of the high-frequency wavelet sub-band, texture masking and visual saliency characteristics. Then, image blocks with high energy are selected as the watermark embedding space to ensure the imperceptibility of the watermark. For watermark detection, the Laplacian distribution model is used to model the wavelet coefficients, and a blind watermark detection approach is derived from the maximum likelihood scheme. Finally, the performance of the proposed algorithm is analyzed through simulation and compared with existing methods. Experimental results show that the proposed algorithm is robust against additive white Gaussian noise, JPEG compression, median filtering, scaling, rotation and other attacks.
(This article belongs to the Section Information Processes)
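The multiplicative embedding that this class of watermarking schemes builds on can be sketched in a few lines (a minimal illustration only: the paper's actual detector is a maximum-likelihood test under a Laplacian model, whereas the sketch below uses a plain correlation score, and all names and parameters are hypothetical):

```python
def embed(coeffs, bits, alpha=0.1):
    """Multiplicative embedding: scale each wavelet coefficient by
    (1 + alpha * w), where w is the +/-1 watermark chip and alpha is
    the strength factor."""
    return [c * (1 + alpha * w) for c, w in zip(coeffs, bits)]

def detect(coeffs, bits, threshold=0.0):
    """Blind correlation detector: correlate coefficient magnitudes with
    the candidate watermark; a positive score suggests the mark is present."""
    score = sum(abs(c) * w for c, w in zip(coeffs, bits)) / len(coeffs)
    return score > threshold
```

In the paper, the strength factor is chosen adaptively per block from energy, texture masking and saliency rather than being a fixed constant as here.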

Open Access Article Imbalanced Learning Based on Data-Partition and SMOTE
Information 2018, 9(9), 238; https://doi.org/10.3390/info9090238
Received: 5 August 2018 / Revised: 13 September 2018 / Accepted: 17 September 2018 / Published: 19 September 2018
PDF Full-text (905 KB) | HTML Full-text | XML Full-text
Abstract
Classification of data with an imbalanced class distribution is a significant challenge for most conventional classification learning methods, which assume a relatively balanced class distribution. This paper proposes a novel classification method based on data partitioning and SMOTE for imbalanced learning. The proposed method differs from conventional ones in both the learning and prediction stages. In the learning stage, the proposed method uses the following three steps to learn a class-imbalance-oriented model: (1) partitioning the majority class into several clusters using data partition methods such as K-Means; (2) constructing a novel training set using SMOTE on each data set obtained by merging each cluster with the minority class; and (3) learning a classification model on each training set using conventional classification learning methods such as decision trees, SVMs and neural networks. In this way, a classifier repository consisting of several classification models is constructed. In the prediction stage, for a given example to be classified, the proposed method uses the partition model constructed in the learning stage to select a model from the classifier repository to predict the example. Comprehensive experiments on KEEL data sets show that the proposed method outperforms some existing methods on the evaluation measures of recall, g-mean, f-measure and AUC.
(This article belongs to the Section Artificial Intelligence)
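Step (2) of the learning stage, SMOTE oversampling, can be sketched as follows (a toy pure-Python rendering of the standard SMOTE interpolation rule; in practice a library implementation such as imbalanced-learn would be used, and the helper names here are hypothetical):

```python
import random

def smote(minority, n_new, k=2, seed=0):
    """Minimal SMOTE: each synthetic sample interpolates between a
    minority-class point and one of its k nearest minority neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest minority neighbours of x (index 0 is x itself)
        neighbours = sorted(
            minority,
            key=lambda y: sum((a - b) ** 2 for a, b in zip(x, y)),
        )[1:k + 1]
        y = rng.choice(neighbours)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + t * (b - a) for a, b in zip(x, y)))
    return synthetic
```

Because every synthetic point lies on a segment between two minority points, oversampling stays inside the minority region instead of duplicating samples.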

Open Access Article On Homomorphism Theorem for Perfect Neutrosophic Extended Triplet Groups
Information 2018, 9(9), 237; https://doi.org/10.3390/info9090237
Received: 7 September 2018 / Revised: 11 September 2018 / Accepted: 13 September 2018 / Published: 18 September 2018
PDF Full-text (230 KB) | HTML Full-text | XML Full-text
Abstract
Some homomorphism theorems of the neutrosophic extended triplet group (NETG) are proved in the paper [Fundamental homomorphism theorems for neutrosophic extended triplet groups, Symmetry 2018, 10(8), 321; doi:10.3390/sym10080321]. These results are revised in this paper. First, several counterexamples are given to show that some results in the above paper are not true. Second, two new notions, the normal NT-subgroup and the complete normal NT-subgroup in neutrosophic extended triplet groups, are introduced, and their properties are investigated. Third, the new concept of a perfect neutrosophic extended triplet group is proposed, and the basic homomorphism theorem for perfect neutrosophic extended triplet groups is established.
(This article belongs to the Section Artificial Intelligence)
Open Access Article Fault-Tolerant Anomaly Detection Method in Wireless Sensor Networks
Information 2018, 9(9), 236; https://doi.org/10.3390/info9090236
Received: 27 August 2018 / Accepted: 12 September 2018 / Published: 18 September 2018
PDF Full-text (4478 KB) | HTML Full-text | XML Full-text
Abstract
A key issue in wireless sensor network applications is how to accurately detect anomalies in an unstable environment and determine whether an event has occurred. This instability includes harsh environmental conditions, insufficient node energy, hardware and software breakdowns, etc. In this paper, a fault-tolerant anomaly detection method (FTAD) is proposed based on the spatial-temporal correlation of sensor networks. The method divides the sensor network into a fault neighborhood, an event-and-fault mixed neighborhood, an event boundary neighborhood and other regions for separate anomaly detection, thereby achieving fault tolerance. Experimental results show that even when 45% of sensor nodes are failing, the hit rate of event detection remains at about 97% and the false negative rate of events is above 92%.
(This article belongs to the Section Information and Communications Technology)
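The spatial-correlation idea behind such fault-tolerant detection, distinguishing an isolated faulty alarm from an event backed by neighbouring nodes, can be illustrated with a simple voting rule (an assumption-laden sketch for intuition only, not the paper's FTAD method; the 0.5 support threshold is hypothetical):

```python
def classify_reading(own_alarm, neighbour_alarms):
    """Spatial-correlation heuristic: an alarm backed by most neighbours
    is treated as a real event; an isolated alarm is attributed to a
    node fault; no alarm means a normal reading."""
    if not own_alarm:
        return "normal"
    support = sum(neighbour_alarms) / len(neighbour_alarms)
    return "event" if support >= 0.5 else "fault"
```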

Open Access Article A Large Effective Touchscreen Using a Head-Mounted Projector
Information 2018, 9(9), 235; https://doi.org/10.3390/info9090235
Received: 17 August 2018 / Revised: 4 September 2018 / Accepted: 8 September 2018 / Published: 18 September 2018
PDF Full-text (5633 KB) | HTML Full-text | XML Full-text
Abstract
In our previous work, we proposed a user interface in which a user wears a projector and a depth camera on his or her head and performs touch operations on an image projected on a flat surface. With the head-mounted projector, images are always projected in front of the user in the direction of the user's gaze. The projected image is updated according to the user's head pose so that the superimposed image stays fixed on the surface, which realizes a large effective screen size. In this paper, we conducted an experiment to evaluate the accuracy of registration by measuring the positional and rotational errors between the real world and the superimposed image using our experimental system. The mean absolute translation errors were about 10 mm when the user's head was stationary, and the delay was estimated to be about 0.2 s. We also discuss the limitations of our prototype and indicate directions for future development.
(This article belongs to the Special Issue Wearable Augmented and Mixed Reality Applications)

Open Access Article A New Nearest Centroid Neighbor Classifier Based on K Local Means Using Harmonic Mean Distance
Information 2018, 9(9), 234; https://doi.org/10.3390/info9090234
Received: 24 August 2018 / Revised: 5 September 2018 / Accepted: 13 September 2018 / Published: 14 September 2018
PDF Full-text (955 KB) | HTML Full-text | XML Full-text
Abstract
The k-nearest neighbour classifier is a very effective and simple non-parametric technique in pattern classification; however, it considers only the distance closeness of the k neighbors, not their geometrical placement. Also, its classification performance is highly influenced by the neighborhood size k and existing outliers. In this paper, we propose a new local-mean-based k-harmonic nearest centroid neighbor (LMKHNCN) classifier that considers both distance-based proximity and the spatial distribution of the k neighbors. In our method, the k nearest centroid neighbors in each class are first found and used to compute k different local mean vectors, whose harmonic mean distance to the query sample is then calculated. Lastly, the query sample is assigned to the class with the minimum harmonic mean distance. Experimental results on twenty-six real-world datasets show that the proposed LMKHNCN classifier achieves lower error rates, particularly in small-sample-size situations, and is less sensitive to the parameter k than the four related KNN-based classifiers.
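The classification rule can be sketched as follows (a simplified rendering that substitutes plain nearest neighbours for the paper's nearest *centroid* neighbours, which are chosen by a centroid criterion; all names are hypothetical):

```python
import math

def lmkhncn(query, data, labels, k=3):
    """Assign `query` to the class whose k local mean vectors lie at the
    smallest harmonic mean distance from it."""
    best_cls, best_hmd = None, math.inf
    for cls in sorted(set(labels)):
        pts = [x for x, y in zip(data, labels) if y == cls]
        pts.sort(key=lambda p: math.dist(p, query))
        dists = []
        for j in range(1, min(k, len(pts)) + 1):
            # local mean vector of the j nearest points of this class
            mean = [sum(c) / j for c in zip(*pts[:j])]
            dists.append(max(math.dist(mean, query), 1e-12))
        hmd = len(dists) / sum(1.0 / d for d in dists)  # harmonic mean
        if hmd < best_hmd:
            best_cls, best_hmd = cls, hmd
    return best_cls
```

The harmonic mean is dominated by the smallest distances, which is what makes the rule less sensitive to the choice of k and to outlying neighbours.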

Open Access Article MODC: A Pareto-Optimal Optimization Approach for Network Traffic Classification Based on the Divide and Conquer Strategy
Information 2018, 9(9), 233; https://doi.org/10.3390/info9090233
Received: 13 August 2018 / Revised: 10 September 2018 / Accepted: 11 September 2018 / Published: 13 September 2018
PDF Full-text (726 KB) | HTML Full-text | XML Full-text
Abstract
Network traffic classification aims to identify the traffic categories or applications of network packets or flows. The area continues to gain attention from researchers because understanding the composition of network traffic, which changes over time, is necessary to ensure network Quality of Service (QoS). Among the different methods of network traffic classification, the payload-based one (DPI) is the most accurate but presents some drawbacks: the inability to classify encrypted data, concerns regarding users' privacy, high computational costs, and ambiguity when multiple signatures match. For these reasons, machine learning methods have been proposed to overcome these issues. This work proposes a Multi-Objective Divide and Conquer (MODC) model for network traffic classification that combines supervised and unsupervised machine learning algorithms into a hybrid model based on the divide-and-conquer strategy. Additionally, the model is flexible, since it allows network administrators to choose among a set of Pareto-optimal solutions, obtained by a multi-objective optimization process, by prioritizing flow or byte accuracy. Our method achieved 94.14% average flow accuracy on the analyzed dataset, outperforming the six DPI-based tools investigated, including two commercial ones, as well as other machine-learning-based methods.
(This article belongs to the Section Information and Communications Technology)

Open Access Article An Integrated Graph Model for Document Summarization
Information 2018, 9(9), 232; https://doi.org/10.3390/info9090232
Received: 9 July 2018 / Revised: 6 September 2018 / Accepted: 6 September 2018 / Published: 13 September 2018
PDF Full-text (632 KB) | HTML Full-text | XML Full-text
Abstract
Extractive summarization aims to produce a concise version of a document by extracting information-rich sentences from the original text. The graph-based model is an effective and efficient approach to ranking sentences, since it is simple and easy to use. However, its performance depends heavily on good text representation. In this paper, an integrated graph model (iGraph) for extractive text summarization is proposed. An enhanced embedding model is used to detect the inherent semantic properties at the word, bigram and trigram levels. Words with part-of-speech (POS) tags, bigrams and trigrams are extracted to train the embedding models. Based on the enhanced embedding vectors, similarity values between sentences are calculated from three perspectives. The sentences in the document are treated as vertexes and the similarities between them as edges, yielding three different semantic graphs for every document, with the same nodes but different edges. These three graphs are integrated into one enriched semantic graph in a naive Bayesian fashion. TextRank, a graph-based ranking algorithm, is then applied to rank the sentences, and the top-scored sentences are selected for the summary according to the compression rate. Evaluated on the DUC 2002 and DUC 2004 datasets, our proposed method shows competitive performance compared to state-of-the-art methods.
(This article belongs to the Section Artificial Intelligence)
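The final ranking step, TextRank over a sentence-similarity graph, can be sketched as a weighted PageRank-style iteration (a minimal version assuming a non-negative similarity matrix; the embedding and graph-integration steps of iGraph are omitted):

```python
def textrank(sim, d=0.85, iters=50):
    """Weighted TextRank: iterate sentence scores over a non-negative
    similarity matrix, where sim[j][i] is the edge weight from j to i."""
    n = len(sim)
    scores = [1.0 / n] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            rank = 0.0
            for j in range(n):
                if j == i or sim[j][i] == 0:
                    continue
                # each neighbour j spreads its score over its out-weights
                out_j = sum(sim[j][m] for m in range(n) if m != j)
                rank += sim[j][i] / out_j * scores[j]
            new.append((1 - d) / n + d * rank)
        scores = new
    return scores
```

With a symmetric similarity matrix the iteration conserves the total score, so the result can be read directly as a ranking distribution over sentences.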

Open Access Article CryptoKnight: Generating and Modelling Compiled Cryptographic Primitives
Information 2018, 9(9), 231; https://doi.org/10.3390/info9090231
Received: 12 July 2018 / Revised: 3 September 2018 / Accepted: 6 September 2018 / Published: 10 September 2018
PDF Full-text (411 KB) | HTML Full-text | XML Full-text
Abstract
Cryptovirological augmentations present an immediate, incomparable threat. Over the last decade, the substantial proliferation of crypto-ransomware has had widespread consequences for consumers and organisations alike. Established preventive measures perform well; however, the problem has not ceased. Reverse engineering potentially malicious software is a cumbersome task due to platform eccentricities and obfuscated transmutation mechanisms, hence requiring smarter, more efficient detection strategies. This manuscript presents a novel approach to the classification of cryptographic primitives in compiled binary executables using deep learning. The model blueprint, a Dynamic Convolutional Neural Network (DCNN), is fittingly configured to learn from variable-length control flow diagnostics output from a dynamic trace. To rival the size and variability of equivalent datasets, and to adequately train our model without risking adverse exposure, a methodology for the procedural generation of synthetic cryptographic binaries is defined, using core primitives from OpenSSL with multivariate obfuscation, to draw a vastly scalable distribution. The library, CryptoKnight, rendered an algorithmic pool of AES, RC4, Blowfish, MD5 and RSA to synthesise combinable variants, which automatically fed into its core model. Converging at 96% accuracy, CryptoKnight successfully classified the sample pool with minimal loss and correctly identified the algorithm in a real-world crypto-ransomware application.

Open Access Feature Paper Article Vital, Sophia, and Co.—The Quest for the Legal Personhood of Robots
Information 2018, 9(9), 230; https://doi.org/10.3390/info9090230
Received: 26 July 2018 / Revised: 5 September 2018 / Accepted: 7 September 2018 / Published: 10 September 2018
PDF Full-text (206 KB) | HTML Full-text | XML Full-text
Abstract
The paper examines today's debate on the legal status of AI robots, and how often scholars and policy makers confuse the legal agenthood of these artificial agents with the status of legal personhood. Taking into account current trends in the field, the paper suggests a twofold stance. First, policy makers should seriously mull over the possibility of establishing novel forms of accountability and liability for the activities of AI robots in contracts and business law, e.g., new forms of legal agenthood in cases of complex distributed responsibility. Second, any hypothesis of granting AI robots full legal personhood has to be discarded in the foreseeable future. However, how should we deal with Sophia, which in October 2017 became the first AI application to receive citizenship of any country, namely, Saudi Arabia? Admittedly, granting someone, or something, legal personhood is, as it always has been, a highly sensitive political issue that does not simply hinge on rational choices and empirical evidence. Discretion, arbitrariness, and even bizarre decisions play a role in this context. However, the normative reasons why legal systems grant human and artificial entities, such as corporations, their status help us take sides in today's quest for the legal personhood of AI robots. Is citizen Sophia really conscious, or capable of suffering the slings and arrows of outrageous scholars?
(This article belongs to the Special Issue ROBOETHICS)
Open Access Article Perceptual Hashing Based Forensics Scheme for the Integrity Authentication of High Resolution Remote Sensing Image
Information 2018, 9(9), 229; https://doi.org/10.3390/info9090229
Received: 14 August 2018 / Revised: 2 September 2018 / Accepted: 5 September 2018 / Published: 7 September 2018
PDF Full-text (4639 KB) | HTML Full-text | XML Full-text
Abstract
High resolution remote sensing (HRRS) images are widely used in many sensitive fields, and their security should be thoroughly protected. Integrity authentication is one of their major security problems, and traditional techniques cannot fully meet the requirements. In this paper, a perceptual-hashing-based forensics scheme is proposed for the integrity authentication of an HRRS image. The proposed scheme first partitions the HRRS image into grids and adaptively pretreats the grid cells according to their entropy. Second, the multi-scale edge features of the grid cells are extracted by edge chains based on an adaptive strategy. Third, principal component analysis (PCA) is applied to the extracted edge features to obtain robust features, which are then normalized and encrypted with a secret key set by the user to produce the perceptual hash sequence. Integrity authentication is achieved by comparing the recomputed perceptual hash sequence with the original one. Experimental results show that the proposed scheme is robust to normal content-preserving manipulations, is sensitive enough to detect subtle local and illegal tampering of the HRRS image, and is able to locate the tampered area.
(This article belongs to the Special Issue The Security and Digital Forensics of Cloud Computing)
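The authentication step, comparing the recomputed perceptual hash with the stored one, reduces to a distance test over the hash bits (a minimal sketch; the bit tolerance below is a hypothetical parameter, not a value from the paper, and per-cell comparison is what enables tamper localization):

```python
def hamming(h1, h2):
    """Number of differing bits between two equal-length hash sequences."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

def authenticate(stored_hash, recomputed_hash, tolerance=4):
    """Integrity check: the image passes if the two perceptual hash
    sequences differ in at most `tolerance` bits, which absorbs
    content-preserving manipulations while flagging tampering."""
    return hamming(stored_hash, recomputed_hash) <= tolerance
```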

Open Access Article A Web Page Clustering Method Based on Formal Concept Analysis
Information 2018, 9(9), 228; https://doi.org/10.3390/info9090228
Received: 21 August 2018 / Revised: 31 August 2018 / Accepted: 2 September 2018 / Published: 6 September 2018
PDF Full-text (1993 KB) | HTML Full-text | XML Full-text
Abstract
Web page clustering is an important technology for sorting network resources. Through extraction and clustering based on the similarity of Web pages, a large amount of information on Web pages can be organized effectively. In this paper, after describing the extraction of Web feature words, calculation methods for weighting feature words are studied in depth. Taking Web pages as objects and Web feature words as attributes, a formal context is constructed for formal concept analysis. An algorithm for constructing a concept lattice based on cross data links is proposed and successfully applied. This method can cluster Web pages using the concept lattice hierarchy. Experimental results indicate that the proposed algorithm outperforms previous competitors with regard to time consumption and clustering effect.
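With pages as objects and feature words as attributes, the formal concepts that make up such a lattice can be enumerated by brute force on a toy context (an illustration of the underlying FCA definition only; the paper's cross-data-link construction algorithm is far more efficient):

```python
from itertools import combinations

def concepts(context):
    """Enumerate the formal concepts (extent, intent) of a formal context
    given as {object: set_of_attributes}, by brute force over attribute
    subsets -- fine for toy contexts, not for real page collections."""
    attrs = sorted(set().union(*context.values()))
    found = set()
    for r in range(len(attrs) + 1):
        for subset in combinations(attrs, r):
            # extent: all objects having every attribute in the subset
            extent = {o for o, a in context.items() if set(subset) <= a}
            if extent:
                # intent: attributes shared by every object in the extent
                intent = set.intersection(*(context[o] for o in extent))
            else:
                intent = set(attrs)
            found.add((frozenset(extent), frozenset(intent)))
    return found
```

Ordering the resulting concepts by extent inclusion yields the lattice hierarchy that the pages are clustered by.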

Open Access Article Feature Selection and Recognition Methods for Discovering Physiological and Bioinformatics RESTful Services
Information 2018, 9(9), 227; https://doi.org/10.3390/info9090227
Received: 26 July 2018 / Revised: 24 August 2018 / Accepted: 28 August 2018 / Published: 6 September 2018
PDF Full-text (2568 KB) | HTML Full-text | XML Full-text
Abstract
Many physiology and bioinformatics research institutions and websites have opened their own data analysis services and other related Web services. It is therefore very important to be able to quickly and effectively select and extract features from Web service pages in order to learn about and use these services, which facilitates the automatic discovery and recognition of Representational State Transfer (RESTful) services. However, this task is still challenging. Based on the description feature patterns of RESTful services, the authors propose a Feature Pattern Search and Replace (FPSR) method. First, a regular expression is applied to perform a matching lookup; then, a custom string is substituted for the matched feature pattern, so that the pattern is not split apart and its feature information is not lost during word segmentation. In visualizations, FPSR produced clearer and more distinct boundaries with fewer overlaps than the test without FPSR, thereby enabling a higher accuracy rate. FPSR thus allowed the authors to extract RESTful service page feature information and achieve better classification results.
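The search-and-replace core of FPSR can be illustrated with Python's `re` module (the endpoint pattern and placeholder token below are hypothetical stand-ins, not the patterns used by the authors):

```python
import re

# Hypothetical description pattern for a RESTful endpoint such as
# "GET /genes/{id}"; the real patterns in the paper differ.
ENDPOINT = re.compile(r"\b(GET|POST|PUT|DELETE)\s+/\S+")

def fpsr(text, placeholder="RESTFEATURE"):
    """FPSR core step: replace each matched feature pattern with a single
    custom token so later word segmentation cannot split the pattern."""
    return ENDPOINT.sub(placeholder, text)
```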

Open Access Article Hesitant Probabilistic Fuzzy Information Aggregation Using Einstein Operations
Information 2018, 9(9), 226; https://doi.org/10.3390/info9090226
Received: 8 August 2018 / Revised: 1 September 2018 / Accepted: 3 September 2018 / Published: 4 September 2018
PDF Full-text (331 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, hesitant probabilistic fuzzy multiple attribute group decision making is studied. First, some Einstein operations on hesitant probabilistic fuzzy elements, such as the Einstein sum, Einstein product and Einstein scalar multiplication, are presented and their properties discussed. Then, several hesitant probabilistic fuzzy Einstein aggregation operators, including the hesitant probabilistic fuzzy Einstein weighted averaging operator and the hesitant probabilistic fuzzy Einstein weighted geometric operator, are introduced. Moreover, some desirable properties and special cases are investigated. It is shown that some existing hesitant fuzzy aggregation operators and hesitant probabilistic fuzzy aggregation operators are special cases of the proposed operators. Further, based on the proposed operators, a new approach to hesitant probabilistic fuzzy multiple attribute decision making is developed. Finally, a practical example is provided to illustrate the developed approach.
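On plain membership degrees, the Einstein sum and product are the standard Einstein t-conorm and t-norm; a sketch of how they might lift to hesitant probabilistic fuzzy elements follows (combining probabilities by independent multiplication is an assumption of this sketch, not necessarily the paper's rule):

```python
def einstein_sum(a, b):
    """Einstein sum (t-conorm) of two membership degrees in [0, 1]."""
    return (a + b) / (1 + a * b)

def einstein_product(a, b):
    """Einstein product (t-norm) of two membership degrees in [0, 1]."""
    return (a * b) / (1 + (1 - a) * (1 - b))

def hpfe_einstein_sum(h1, h2):
    """Einstein sum of two hesitant probabilistic fuzzy elements, each a
    list of (membership, probability) pairs; probabilities of independent
    combinations are multiplied (an assumption of this sketch)."""
    return [(einstein_sum(g1, g2), p1 * p2) for g1, p1 in h1 for g2, p2 in h2]
```

Note that the Einstein sum keeps 1 as an absorbing element, as a t-conorm must: combining any degree with full membership yields full membership.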
Open Access Editorial Editorial for the Special Issue on ‘Agent-Based Artificial Markets’
Information 2018, 9(9), 225; https://doi.org/10.3390/info9090225
Received: 3 September 2018 / Revised: 3 September 2018 / Accepted: 3 September 2018 / Published: 3 September 2018
PDF Full-text (137 KB) | HTML Full-text | XML Full-text
(This article belongs to the Special Issue Agent-Based Artificial Markets)
Open Access Article Reliable Delay Based Algorithm to Boost PUF Security Against Modeling Attacks
Information 2018, 9(9), 224; https://doi.org/10.3390/info9090224
Received: 6 July 2018 / Revised: 15 August 2018 / Accepted: 18 August 2018 / Published: 3 September 2018
PDF Full-text (839 KB) | HTML Full-text | XML Full-text
Abstract
Silicon Physical Unclonable Functions (sPUFs) are one of the security primitives and state-of-the-art topics in hardware-oriented security and trust research. This paper presents an efficient and dynamic ring oscillator PUF (d-ROPUF) technique to improve sPUF security against modeling attacks. In addition to enhancing the entropy of the weak ROPUF design, experimental results show that the proposed d-ROPUF technique allows the generation of a larger and updatable challenge-response pair (CRP) space compared with the simple ROPUF. Additionally, an innovative hardware-oriented security algorithm, the Optimal Time Delay Algorithm (OTDA), is proposed. It is demonstrated that the OTDA significantly improves PUF reliability under varying operating conditions. Further, it is shown that the OTDA efficiently enhances the d-ROPUF's capability to generate a considerably large set of reliable secret keys to protect the PUF structure from new cyber-attacks, including machine learning and modeling attacks.

Open Access Article The CLoTH Simulator for HTLC Payment Networks with Introductory Lightning Network Performance Results
Information 2018, 9(9), 223; https://doi.org/10.3390/info9090223
Received: 31 July 2018 / Revised: 25 August 2018 / Accepted: 30 August 2018 / Published: 3 September 2018
PDF Full-text (1746 KB) | HTML Full-text | XML Full-text
Abstract
The Lightning Network (LN) is one of the most promising off-chain scaling solutions for Bitcoin, as it enables off-chain payments that are not subject to the well-known blockchain scalability limit. In this work, we introduce CLoTH, a simulator for HTLC payment networks (of which LN is the best working example). It simulates input-defined payments on an input-defined HTLC network and produces performance measures in terms of payment-related statistics (such as the time to complete payments and the probability of payment failure). CLoTH helps to predict issues and obstacles that might emerge in the development stages of an HTLC payment network and to estimate the effects of an optimisation action before deploying it. We conducted simulations on a recent snapshot of the HTLC payment network of LN. These simulations allowed us to identify network and payment configurations for which a payment is more likely to fail than to succeed, and we propose viable solutions to avoid such configurations.
(This article belongs to the Special Issue BlockChain and Smart Contracts)

Open Access Article Semantic Clustering of Functional Requirements Using Agglomerative Hierarchical Clustering
Information 2018, 9(9), 222; https://doi.org/10.3390/info9090222
Received: 31 July 2018 / Revised: 25 August 2018 / Accepted: 29 August 2018 / Published: 3 September 2018
PDF Full-text (667 KB) | HTML Full-text | XML Full-text
Abstract
Software applications have become a fundamental part of the daily work of modern society, as they meet the different needs of users in different domains. Such needs are known as software requirements (SRs), which are separated into functional (software services) and non-functional (quality attributes) requirements. The first step of every software development project is SR elicitation. This step is a challenging task for developers, as they need to understand and analyze SRs manually. For example, the collected functional SRs need to be categorized into different clusters to break down the project into a set of sub-projects with related SRs and devote each sub-project to a separate development team. However, clustering of functional SRs has never been considered in the literature. Therefore, in this paper, we propose an approach to automatically cluster functional requirements based on a semantic measure. An empirical evaluation is conducted using four open-access software projects to evaluate our proposal. The experimental results demonstrate that the proposed approach identifies semantic clusters according to well-known measures used in the subject.
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2018))
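As a rough illustration of the ingredients involved (not the authors' implementation), the sketch below clusters a few toy functional requirements with average-linkage agglomerative clustering, using a plain bag-of-words cosine similarity as a stand-in for the paper's semantic measure. The requirement texts, the similarity function and the two-cluster cut are all illustrative assumptions:

```python
from collections import Counter
from math import sqrt

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def cosine_sim(a, b):
    # bag-of-words cosine similarity; a real system would use a semantic model
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    num = sum(va[w] * vb[w] for w in set(va) & set(vb))
    den = sqrt(sum(v * v for v in va.values())) * sqrt(sum(v * v for v in vb.values()))
    return num / den if den else 0.0

reqs = [  # toy functional requirements
    "user login requires password authentication",
    "password authentication for user login",
    "generate monthly sales report",
    "export monthly sales report as pdf",
]
n = len(reqs)
dist = np.zeros((n, n))  # pairwise semantic distance = 1 - similarity
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = 1.0 - cosine_sim(reqs[i], reqs[j])

# average-linkage agglomerative clustering, cut into two clusters
Z = linkage(squareform(dist), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
```

Each resulting cluster of related requirements could then be handed to a separate development team, as the paper suggests.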
Open AccessCommentary Love, Emotion and the Singularity
Information 2018, 9(9), 221; https://doi.org/10.3390/info9090221
Received: 31 July 2018 / Revised: 23 August 2018 / Accepted: 31 August 2018 / Published: 3 September 2018
PDF Full-text (222 KB) | HTML Full-text | XML Full-text
Abstract
Proponents of the singularity hypothesis have argued that there will come a point at which machines will overtake us not only in intelligence but that machines will also have emotional capabilities. However, human cognition is not something that takes place only in the brain; one cannot conceive of human cognition without embodiment. This essay considers the emotional nature of cognition by exploring the most human of emotions—romantic love. By examining the idea of love from an evolutionary and a physiological perspective, the author suggests that in order to account for the full range of human cognition, one must also account for the emotional aspects of cognition. The paper concludes that if there is to be a singularity that transcends human cognition, it must be embodied. As such, the singularity could not be completely non-organic; it must take place in the form of a cyborg, wedding the digital to the biological. Full article
(This article belongs to the Special Issue AI AND THE SINGULARITY: A FALLACY OR A GREAT OPPORTUNITY?)
Open AccessArticle Low-Complexity Blind Selected Mapping Scheme for Peak-to-Average Power Ratio Reduction in Orthogonal Frequency-Division Multiplexing Systems
Information 2018, 9(9), 220; https://doi.org/10.3390/info9090220
Received: 7 August 2018 / Revised: 22 August 2018 / Accepted: 29 August 2018 / Published: 31 August 2018
Cited by 1 | PDF Full-text (1408 KB) | HTML Full-text | XML Full-text
Abstract
Orthogonal frequency-division multiplexing (OFDM) is an attractive multicarrier technique owing to its simple equalization and high data throughput. However, the transmitted OFDM signal has a very high peak-to-average power ratio (PAPR), which severely degrades the performance of practical OFDM systems and reduces the efficiency of high-power amplifiers (HPAs). The selected mapping (SLM) scheme is an effective PAPR reduction method for OFDM signals. However, this approach usually requires side information (SI) transmission, which complicates hardware implementation and reduces the data transmission rate. In this paper, a novel blind SLM method with low complexity, based on designing phase rotation vectors in the time domain, is proposed to reduce the PAPR of OFDM signals. At the transmitter, the proposed method designs the phase rotation vectors in the time domain so that they can be treated as an equivalent wireless channel, removing the need for SI transmission. At the receiver, the effect of the phase rotation vectors is removed by conventional channel estimation, and data demodulation is easily performed by frequency-domain equalization. Simulation results show that the proposed scheme achieves low-complexity PAPR reduction and robust bit error rate (BER) performance compared with other low-complexity SLM schemes. Full article
(This article belongs to the Section Information and Communications Technology)
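For context, the baseline that blind SLM improves on can be sketched in a few lines: generate several phase-rotated candidates of one OFDM symbol in the frequency domain and transmit the one with the lowest PAPR (conventional SLM therefore needs side information to tell the receiver which rotation was used, which is exactly what the proposed scheme avoids). The subcarrier count, candidate count and QPSK mapping below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, U = 64, 8  # subcarriers and number of candidate phase vectors (assumed)

def papr_db(x):
    """Peak-to-average power ratio of a time-domain signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# one OFDM symbol of random QPSK data on N subcarriers
X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)

# conventional SLM: U random phase rotations, keep the lowest-PAPR candidate
candidates = [np.fft.ifft(X * np.exp(1j * rng.uniform(0, 2 * np.pi, N)))
              for _ in range(U)]
best = min(candidates, key=papr_db)
```

With more candidates U the expected PAPR of the selected symbol drops, at the cost of more IFFTs per symbol.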
Open AccessArticle Predictors of Chinese Users’ Location Disclosure Behavior: An Empirical Study on WeChat
Information 2018, 9(9), 219; https://doi.org/10.3390/info9090219
Received: 12 August 2018 / Accepted: 28 August 2018 / Published: 30 August 2018
PDF Full-text (1117 KB) | HTML Full-text | XML Full-text
Abstract
Location disclosure behavior on social network sites (SNS) has developed rapidly. However, the influencing factors have not been adequately studied. Based on social cognitive theory and the concept of face, this study developed a research model to explain the factors with uniquely Chinese characteristics that predict WeChat users’ location disclosure. Using survey data collected from WeChat users in China (N = 545), the model is tested by a structural equation modeling (SEM). The results show that a desire to gain face, a fear of losing face, social norms, trust in SNS members and trust in an SNS provider positively influence WeChat users’ intention to disclose location information. Moreover, trust in SNS members can also boost trust in an SNS provider. Finally, both theoretical contributions and practical implications are discussed. Full article
(This article belongs to the Special Issue Information Management in Information Age)
Open AccessArticle Community Detection Based on Differential Evolution Using Modularity Density
Information 2018, 9(9), 218; https://doi.org/10.3390/info9090218
Received: 25 June 2018 / Revised: 20 August 2018 / Accepted: 23 August 2018 / Published: 30 August 2018
PDF Full-text (2573 KB) | HTML Full-text | XML Full-text
Abstract
Many community detection methods have been proposed in the network science field. However, most contemporary methods rely solely on modularity to detect communities, which may fail to represent the real community structure of networks owing to modularity's resolution limit problem. To resolve this problem, we put forward a new community detection approach based on a differential evolution algorithm (CDDEA), taking modularity density as the optimized function. In the CDDEA, a new tuning parameter is used to recognize different communities. The experimental results on synthetic and real-world networks show that the proposed algorithm provides an effective method for discovering community structure in complex networks. Full article
(This article belongs to the Section Information Systems)
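A minimal sketch of the optimized objective, assuming the common definition of modularity density (internal minus external degree over community size, summed over communities); the two-triangle toy graph is an illustrative assumption:

```python
# modularity density: for each community C, add (2*Lin - Lout) / |C|,
# where Lin is the number of internal edges and Lout the number of
# edges leaving C; adjacency lists count each internal edge twice.
def modularity_density(adj, communities):
    total = 0.0
    for com in communities:
        com = set(com)
        internal = sum(1 for u in com for v in adj[u] if v in com)  # = 2*Lin
        external = sum(1 for u in com for v in adj[u] if v not in com)
        total += (internal - external) / len(com)
    return total

# two triangles joined by the edge (2, 3)
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}

good = modularity_density(adj, [{0, 1, 2}, {3, 4, 5}])  # natural split
bad = modularity_density(adj, [{0, 1}, {2, 3, 4, 5}])   # node 2 misplaced
```

A search procedure such as differential evolution would evolve candidate partitions to maximize this score; here the natural split scores higher than the misplaced one.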
Open AccessArticle Skeleton to Abstraction: An Attentive Information Extraction Schema for Enhancing the Saliency of Text Summarization
Information 2018, 9(9), 217; https://doi.org/10.3390/info9090217
Received: 14 July 2018 / Revised: 19 August 2018 / Accepted: 28 August 2018 / Published: 29 August 2018
PDF Full-text (3233 KB) | HTML Full-text | XML Full-text
Abstract
Current popular abstractive summarization is based on an attentional encoder-decoder framework in which the decoder generates a summary conditioned on the full text. As a result, the decoder is often interfered with by irrelevant information, causing the generated summaries to suffer from low saliency. Moreover, observing how people write summaries, we find that they work from the necessary information rather than the full text. Thus, in order to enhance the saliency of abstractive summarization, we propose an attentive information extraction model. It consists of a multi-layer perceptron (MLP) gated unit that pays more attention to the important information of the source text, and a similarity module that encourages high similarity between the reference summary and the important information. Before the summary decoder, the MLP and the similarity module work together to extract the important information, i.e., the skeleton of the source text, for the decoder. This effectively reduces the interference of irrelevant information and therefore improves the saliency of the summary. Our proposed model was tested on the CNN/Daily Mail and DUC-2004 datasets, achieving a 42.01 ROUGE-1 F-score and a 33.94 ROUGE-1 recall, respectively, outperforming the state-of-the-art abstractive models on the same datasets. In addition, subjective human evaluation confirmed that the saliency of the generated summaries was further enhanced. Full article
(This article belongs to the Section Artificial Intelligence)
Open AccessArticle Composite Numbers That Give Valid RSA Key Pairs for Any Coprime p
Information 2018, 9(9), 216; https://doi.org/10.3390/info9090216
Received: 13 August 2018 / Revised: 23 August 2018 / Accepted: 25 August 2018 / Published: 28 August 2018
PDF Full-text (756 KB) | HTML Full-text | XML Full-text
Abstract
RSA key pairs are normally generated from two large primes p and q. We consider what happens if they are generated from two integers s and r, where r is prime but, unbeknownst to the user, s is not. Under most circumstances, the correctness of encryption and decryption depends on the choice of the public and private exponents e and d. In some cases, specific (s, r) pairs can be found for which encryption and decryption will be correct for any (e, d) exponent pair. Certain s exist, however, for which encryption and decryption are correct for any odd prime r coprime to s. We give necessary and sufficient conditions for s with this property. Full article
(This article belongs to the Section Information Theory and Methodology)
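As a concrete (illustrative, not the paper's general characterization) instance of the last phenomenon, a Carmichael number s, i.e., a squarefree composite with p - 1 dividing s - 1 for every prime p dividing s, yields correct RSA encryption and decryption for any odd prime r coprime to s. The sketch below checks this exhaustively for s = 561, the smallest Carmichael number:

```python
# naive RSA keygen where one "prime" is secretly composite
s, r = 561, 23           # 561 = 3 * 11 * 17 is a Carmichael number; r is prime
n = s * r
phi = (s - 1) * (r - 1)  # what the user *believes* is Euler's totient of n

e = 3                    # public exponent, coprime to phi
d = pow(e, -1, phi)      # private exponent (modular inverse, Python 3.8+)

# encryption followed by decryption recovers every message mod n,
# even though s is not prime
ok = all(pow(pow(m, e, n), d, n) == m for m in range(n))
```

This works because e*d - 1 is divisible by p - 1 for every prime p dividing n, so Fermat's little theorem applies factor by factor and the results combine by the Chinese remainder theorem.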
Open AccessArticle Metric Learning with Dynamically Generated Pairwise Constraints for Ear Recognition
Information 2018, 9(9), 215; https://doi.org/10.3390/info9090215
Received: 8 August 2018 / Revised: 23 August 2018 / Accepted: 23 August 2018 / Published: 27 August 2018
PDF Full-text (1021 KB) | HTML Full-text | XML Full-text
Abstract
The ear recognition task is that of predicting whether two ear images belong to the same person or not. Recently, most ear recognition methods have been based on deep learning features, which achieve good accuracy but require substantial resources in the training phase and suffer from time-consuming computational complexity. On the other hand, descriptor features and metric learning play a vital role and provide excellent performance in many computer vision applications, such as face recognition and image classification. Therefore, in this paper, we adopt descriptor features and present a novel metric learning method that enables efficient real-time matching for ear recognition systems. The method is formulated as a pairwise constrained optimization problem. In each training cycle, it selects the nearest similar and dissimilar neighbors of each sample to construct the pairwise constraints and then solves the optimization problem by iterated Bregman projections. Experiments are conducted on the Annotated Web Ears (AWE), West Pomeranian University of Technology (WPUT), University of Science and Technology Beijing II (USTB II), and Mathematical Analysis of Images (AMI) databases. The results show that the proposed approach achieves promising recognition rates in ear recognition, and its training process is much more efficient than that of competing metric learning methods. Full article
Open AccessArticle Individual Security and Network Design with Malicious Nodes
Information 2018, 9(9), 214; https://doi.org/10.3390/info9090214
Received: 15 August 2018 / Revised: 21 August 2018 / Accepted: 23 August 2018 / Published: 25 August 2018
PDF Full-text (361 KB) | HTML Full-text | XML Full-text
Abstract
Networks are beneficial to those being connected but can also be used as carriers of contagious hostile attacks. These attacks are often facilitated by exploiting corrupt network users. To protect against the attacks, users can resort to costly defense. The decentralized nature of such protection is known to be inefficient, but the inefficiencies can be mitigated by a careful network design. Is network design still effective when not all users can be trusted? We propose a model of network design and defense with byzantine nodes to address this question. We study the optimal defended networks in the case of centralized defense and, for the case of decentralized defense, we show that the inefficiencies due to decentralization can be mitigated arbitrarily well when the number of nodes in the network is sufficiently large, despite the presence of the byzantine nodes. Full article
(This article belongs to the Section Artificial Intelligence)
Open AccessReview A Systematic Review of Finger Vein Recognition Techniques
Information 2018, 9(9), 213; https://doi.org/10.3390/info9090213
Received: 26 July 2018 / Revised: 14 August 2018 / Accepted: 14 August 2018 / Published: 24 August 2018
PDF Full-text (1293 KB) | HTML Full-text | XML Full-text
Abstract
Biometric identification is the study of physiological and behavioral attributes of an individual to overcome security problems. Finger vein recognition is a biometric technique that analyzes the finger vein patterns of persons for proper authentication. This paper presents a detailed review of finger vein recognition algorithms, covering the image acquisition, preprocessing, feature extraction and matching methods used to extract and analyze vein patterns. In addition, we list some novel findings from a critical comparative analysis of the highlighted techniques. The comparative studies indicate that the accuracy of current finger vein identification methods is satisfactory. Full article
(This article belongs to the Section Artificial Intelligence)
Open AccessArticle The (T, L)-Path Model and Algorithms for Information Dissemination in Dynamic Networks
Information 2018, 9(9), 212; https://doi.org/10.3390/info9090212
Received: 5 July 2018 / Revised: 16 August 2018 / Accepted: 19 August 2018 / Published: 24 August 2018
PDF Full-text (968 KB) | HTML Full-text | XML Full-text
Abstract
A dynamic network is an abstraction of distributed systems with frequent network topology changes. With such dynamic network models, fundamental distributed computing problems can be formally studied with rigorous correctness. Although quite a number of models have been proposed and studied for dynamic networks, the existing models are usually defined from the point of view of connectivity properties. In this paper, we instead examine the dynamicity of the network topology according to the procedure of changes, i.e., how the topology or links change. Following this approach, we propose the notion of the "instant path" and define two dynamic network models based on it. On top of these two models, we design distributed algorithms for information dissemination, one of the fundamental distributed computing problems. The correctness of our algorithms is formally proved, and their performance in time cost and communication cost is analyzed. Compared with existing connectivity-based dynamic network models and algorithms, our procedure-based ones are considerably easier to instantiate in the practical design and deployment of dynamic networks. Full article
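To make the setting concrete, here is a minimal round-based flooding sketch on a dynamic network whose edge set changes every round; the edge schedule is an illustrative assumption, and this is not the paper's (T, L)-path algorithm:

```python
# round-based flooding: in each round only that round's edges exist,
# and every informed node forwards the message across its live edges
def flood(rounds, source=0):
    informed = {source}
    for edges in rounds:
        new = set()
        for u, v in edges:
            if u in informed:
                new.add(v)
            if v in informed:
                new.add(u)
        informed |= new
    return informed

rounds = [
    [(0, 1), (2, 3)],  # round 1: node 1 learns the message
    [(1, 2)],          # round 2: node 2 learns it via node 1
    [(2, 3), (0, 4)],  # round 3: nodes 3 and 4 learn it
]
reached = flood(rounds)
```

Note that dissemination succeeds here even though no single round's topology is connected, which is exactly the kind of behavior dynamic network models must capture.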
Open AccessArticle Imprecise Bayesian Networks as Causal Models
Information 2018, 9(9), 211; https://doi.org/10.3390/info9090211
Received: 12 July 2018 / Revised: 15 August 2018 / Accepted: 20 August 2018 / Published: 23 August 2018
PDF Full-text (271 KB) | HTML Full-text | XML Full-text
Abstract
This article considers the extent to which Bayesian networks with imprecise probabilities, which are used in statistics and computer science for predictive purposes, can be used to represent causal structure. It is argued that the adequacy conditions for causal representation in the precise context—the Causal Markov Condition and Minimality—do not readily translate into the imprecise context. Crucial to this argument is the fact that the independence relation between random variables can be understood in several different ways when the joint probability distribution over those variables is imprecise, none of which provides a compelling basis for the causal interpretation of imprecise Bayes nets. I conclude that there are serious limits to the use of imprecise Bayesian networks to represent causal structure. Full article
(This article belongs to the Special Issue Probabilistic Causal Modelling in Intelligent Systems)
Open AccessArticle Time-Varying Communication Channel High Altitude Platform Station Link Budget and Channel Modeling
Information 2018, 9(9), 210; https://doi.org/10.3390/info9090210
Received: 14 August 2018 / Revised: 18 August 2018 / Accepted: 20 August 2018 / Published: 22 August 2018
PDF Full-text (3190 KB) | HTML Full-text | XML Full-text
Abstract
Because of the high BER (Bit Error Rate), time delay and low channel transmission efficiency of HAPS (High Altitude Platform Station) links in near space, the link budget of HAPS and a channel model are proposed in this paper. According to the channel characteristics, the channel model is set up and combined with different CNR (Carrier-to-Noise Ratio) values, elevation angles and coding schemes for the wireless communication link, using Hamming codes, PSK (Phase Shift Keying) and Golay codes respectively; the link quality and BER are then analyzed. The simulation results show that the established link budget and channel models are consistent with the theoretical analysis. The BER of the HAPS communication link increases as the elevation angle decreases, and the coded channel performs better than the un-coded one. As the ratio of energy per bit to thermal noise power spectral density increases, the BER of the HAPS communication link decreases. Full article
(This article belongs to the Section Information and Communications Technology)
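To illustrate the kind of computation a link budget involves, here is a minimal free-space sketch relating elevation angle, slant range and received power. The platform altitude, carrier frequency, EIRP, receive gain and the flat-earth slant-range approximation are all illustrative assumptions, not the paper's model:

```python
import math

C = 3e8  # speed of light, m/s

def slant_range_m(altitude_m, elev_deg):
    # flat-earth approximation: the path lengthens as elevation drops
    return altitude_m / math.sin(math.radians(elev_deg))

def fspl_db(distance_m, freq_hz):
    # free-space path loss: 20*log10(4*pi*d*f/c)
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

def rx_power_dbm(eirp_dbm, rx_gain_dbi, distance_m, freq_hz):
    return eirp_dbm + rx_gain_dbi - fspl_db(distance_m, freq_hz)

# assumed HAPS: 20 km altitude, 2 GHz carrier, 43 dBm EIRP, 10 dBi receive gain
for elev in (90, 45, 15):
    d = slant_range_m(20_000, elev)
    print(f"elevation {elev:2d} deg: rx power {rx_power_dbm(43, 10, d, 2e9):.1f} dBm")
```

Lower elevation angles lengthen the slant path and raise the path loss, which is consistent with the abstract's observation that the BER grows as the elevation decreases.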