
Table of Contents

Information, Volume 9, Issue 4 (April 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
Displaying articles 1-33
Open Access Article: Genetic Algorithm with an Improved Initial Population Technique for Automatic Clustering of Low-Dimensional Data
Information 2018, 9(4), 101; https://doi.org/10.3390/info9040101
Received: 1 March 2018 / Revised: 17 April 2018 / Accepted: 19 April 2018 / Published: 21 April 2018
Cited by 1 | PDF Full-text (1844 KB) | HTML Full-text | XML Full-text
Abstract
K-means clustering is an important and popular technique in data mining. Unfortunately, for a given dataset, it is very difficult for a user to estimate the proper number of clusters in advance without prior knowledge, and K-means also tends to become trapped in local optima when the initial seeds are chosen randomly. Genetic algorithms (GAs) are often used to determine the number of clusters automatically and to capture an optimal solution as the initial seeds for K-means clustering, or as the K-means clustering result itself. However, they typically choose the genes of chromosomes randomly, which leads to poor clustering results, whereas a well-chosen initial population can improve the final clustering results. Hence, some GA-based techniques carefully select a high-quality initial population, but at high complexity. This paper proposes an adaptive GA (AGA) with an improved initial population for K-means clustering (SeedClust). In SeedClust, an improved density estimation method and an improved K-means++ are used to capture higher-quality initial seeds and generate the initial population with low complexity, and adaptive crossover and mutation probabilities are designed to counter premature convergence and maintain population diversity, respectively, so that the proper number of clusters is determined automatically and an improved initial solution is captured. Finally, the best chromosomes (centers) are obtained and fed into K-means as initial seeds to generate even higher-quality clustering results by allowing the initial seeds to readjust as needed. Experimental results based on low-dimensional taxi GPS (Global Position System) data sets demonstrate the higher performance and effectiveness of SeedClust. Full article
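SeedClust starts from K-means++-style seeding. As a point of reference, the standard K-means++ procedure (not the authors' improved density-based variant, which is defined in the paper itself) can be sketched in a few lines; the 2-D point set below is purely illustrative:

```python
import random

def kmeans_pp_seeds(points, k, rng=random.Random(42)):
    """Standard K-means++ seeding: each new seed is drawn with
    probability proportional to its squared distance from the
    nearest seed chosen so far."""
    seeds = [rng.choice(points)]
    while len(seeds) < k:
        # squared distance of every point to its nearest current seed
        d2 = [min((p[0] - s[0]) ** 2 + (p[1] - s[1]) ** 2 for s in seeds)
              for p in points]
        r = rng.uniform(0, sum(d2))
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                seeds.append(p)
                break
        else:
            seeds.append(points[-1])  # numerical edge case
    return seeds

points = [(0.0, 0.0), (0.1, 0.2), (10.0, 10.0), (10.1, 9.9), (5.0, 5.0)]
seeds = kmeans_pp_seeds(points, 2)
```

The chosen seeds would then be handed to K-means (or, in SeedClust, encoded as GA chromosomes) as the initial centers.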

Open Access Article: Analysis of Document Pre-Processing Effects in Text and Opinion Mining
Information 2018, 9(4), 100; https://doi.org/10.3390/info9040100
Received: 23 February 2018 / Revised: 10 April 2018 / Accepted: 17 April 2018 / Published: 20 April 2018
Cited by 1 | PDF Full-text (1412 KB) | HTML Full-text | XML Full-text
Abstract
Typically, textual information is available as unstructured data, which requires processing so that data mining algorithms can handle it; this processing is known as the pre-processing step in the overall text mining process. This paper aims at analyzing the strong impact that the pre-processing step has on most mining tasks. Therefore, we propose a methodology to vary distinct combinations of pre-processing steps and to analyze which pre-processing combination allows high precision. To compare different combinations of pre-processing methods, experiments were performed on combinations of stemming, term weighting, term elimination based on a low-frequency cut, and stop-word elimination. These combinations were applied in text and opinion mining tasks, from which correct classification rates were computed to highlight the strong impact of the pre-processing combinations. Additionally, we provide graphical representations of each pre-processing combination to illustrate how visual approaches can reveal the processing effects on document similarities and group formation (i.e., cohesion and separation). Full article
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2017))
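To make the idea of "varying combinations of pre-processing steps" concrete, here is a minimal sketch in which each step can be toggled on or off. The tiny stop-word list and the crude suffix-stripping stand-in for a real stemmer are assumptions for illustration, not the study's actual components:

```python
from collections import Counter

STOP_WORDS = {"the", "a", "of", "is", "and", "in"}

def preprocess(tokens, stop=True, stem=True, min_freq=1):
    """Apply a chosen combination of pre-processing steps."""
    if stop:
        tokens = [t for t in tokens if t not in STOP_WORDS]
    if stem:
        # crude suffix stripping stands in for a real stemmer
        tokens = [t[:-3] if t.endswith("ing") else t.rstrip("s") for t in tokens]
    freq = Counter(tokens)
    # low-frequency cut: drop terms occurring fewer than min_freq times
    return [t for t in tokens if freq[t] >= min_freq]

doc = "the mining of opinions is a task and mining helps".split()
out = preprocess(doc, stop=True, stem=True, min_freq=2)
```

Varying the flags yields the different pre-processing combinations whose classification rates the paper compares.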

Open Access Essay: The Singularity Isn’t Simple! (However We Look at It) A Random Walk between Science Fiction and Science Fact
Information 2018, 9(4), 99; https://doi.org/10.3390/info9040099
Received: 11 April 2018 / Revised: 17 April 2018 / Accepted: 18 April 2018 / Published: 19 April 2018
Cited by 1 | PDF Full-text (542 KB) | HTML Full-text | XML Full-text
Abstract
It seems to be accepted that intelligence—artificial or otherwise—and ‘the singularity’ are inseparable concepts: ‘The singularity’ will apparently arise from AI reaching a supposedly particular, but actually poorly-defined, level of sophistication; and an empowered combination of hardware and software will take it from there (and take over from us). However, such wisdom and debate are simplistic in a number of ways: firstly, this is a poor definition of the singularity; secondly, it muddles various notions of intelligence; thirdly, competing arguments are rarely based on shared axioms, so are frequently pointless; fourthly, our models for trying to discuss these concepts at all are often inconsistent; and finally, our attempts at describing any ‘post-singularity’ world are almost always limited by anthropomorphism. In all of these respects, professional ‘futurists’ often appear as confused as storytellers who, through freer licence, may conceivably have the clearer view: perhaps then, that becomes a reasonable place to start. There is no attempt in this paper to propose, or evaluate, any research hypothesis; rather simply to challenge conventions. Using examples from science fiction to illustrate various assumptions behind the AI/singularity debate, this essay seeks to encourage discussion on a number of possible futures based on different underlying metaphysical philosophies. Although properly grounded in science, it eventually looks beyond the technology for answers and, ultimately, beyond the Earth itself. Full article
(This article belongs to the Special Issue AI AND THE SINGULARITY: A FALLACY OR A GREAT OPPORTUNITY?)

Open Access Article: Chinese Knowledge Base Question Answering by Attention-Based Multi-Granularity Model
Information 2018, 9(4), 98; https://doi.org/10.3390/info9040098
Received: 22 January 2018 / Revised: 15 March 2018 / Accepted: 16 April 2018 / Published: 19 April 2018
PDF Full-text (4361 KB) | HTML Full-text | XML Full-text
Abstract
Chinese knowledge base question answering (KBQA) is designed to answer questions using the facts contained in a knowledge base. This task can be divided into two subtasks: topic entity extraction and relation selection. During the topic entity extraction stage, an entity extraction model is built to locate topic entities in questions, and a Levenshtein-ratio entity linker is proposed to conduct effective entity linking. All subject-predicate-object (SPO) triples relevant to the topic entity are retrieved from the knowledge base as candidates. For relation selection, an attention-based multi-granularity interaction model (ABMGIM) is proposed. Two main contributions are as follows. First, a multi-granularity approach for text embedding is proposed: a nested character-level and word-level approach concatenates the pre-trained embedding of a character with the corresponding word-level embedding. Second, we apply a hierarchical matching model for question representation in the relation selection task, and attention mechanisms are introduced for a fine-grained alignment between characters. Experimental results show that our model achieves competitive performance on the public dataset, which demonstrates its effectiveness. Full article
(This article belongs to the Section Artificial Intelligence)
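The Levenshtein-ratio linking step can be illustrated with a small self-contained sketch; the candidate entity strings below are invented for the example, and the paper's linker may normalize or weight mentions differently:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def lev_ratio(a, b):
    """Similarity in [0, 1]: 1 means identical strings."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

def link_entity(mention, kb_entities):
    """Pick the knowledge-base entity with the highest ratio."""
    return max(kb_entities, key=lambda e: lev_ratio(mention, e))

best = link_entity("beijing universty",
                   ["beijing university", "nanjing university", "beijing zoo"])
```

A misspelled mention still links to the intended entity because the ratio tolerates a small number of edits.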

Open Access Article: Scene Semantic Recognition Based on Probability Topic Model
Information 2018, 9(4), 97; https://doi.org/10.3390/info9040097
Received: 4 February 2018 / Revised: 15 April 2018 / Accepted: 15 April 2018 / Published: 19 April 2018
PDF Full-text (3596 KB) | HTML Full-text | XML Full-text
Abstract
In recent years, scene semantic recognition has become one of the most exciting and fastest-growing research topics, and many scene semantic analysis methods have been proposed for better scene content interpretation. By using latent Dirichlet allocation (LDA) to deduce effective topic features, the accuracy of image semantic recognition has been significantly improved. In addition, the method of extracting deep features by layer-by-layer iterative computation using convolutional neural networks (CNNs) has achieved great success in image recognition. This paper proposes a method called DF-LDA, a hybrid supervised–unsupervised method that combines CNNs with LDA to extract image topics. The method uses CNNs to explore visual features that are more suitable for scene images, and groups the features of salient semantics into visual topics through topic models. In contrast to the use of LDA as a tool for simply extracting image semantics, our approach achieves better performance on three datasets that contain various scene categories. Full article

Open Access Article: Hierarchical Guidance Strategy and Exemplar-Based Image Inpainting
Information 2018, 9(4), 96; https://doi.org/10.3390/info9040096
Received: 18 March 2018 / Revised: 9 April 2018 / Accepted: 15 April 2018 / Published: 18 April 2018
PDF Full-text (16789 KB) | HTML Full-text | XML Full-text
Abstract
To solve the issue that it is difficult to maintain the consistency of linear structures when filling large regions with the exemplar-based technique, a hierarchical guidance strategy combined with exemplar-based image inpainting is proposed. The inpainting process is as follows: (i) multi-layer resolution images are first acquired through pyramid decomposition of the target image; (ii) inpainting begins at the top layer, where the top-layer inpainted image is generated by the exemplar-based technique; (iii) the next layer is combined with the up-sampled output of the top-layer inpainted image, and its target regions are filled using this information as guidance data; (iv) this process is repeated until all layers have been inpainted. Our results were compared to those obtained by existing techniques, and our proposed technique maintained the consistency of linear structures in a visually plausible way. Objectively, we chose SSIM (structural similarity index measurement) and PSNR (peak signal-to-noise ratio) as the measurement indices. Since the SSIM values compare favourably with those of other techniques, they clearly demonstrate that our approach is better able to maintain the consistency of linear structures. The core of our algorithm is to fill large regions, whether in synthesized images or real-scene photographs. It is easy to apply in practice, with the goal of producing a plausible inpainted image. Full article
(This article belongs to the Section Information Processes)

Open Access Article: CSI Frequency Domain Fingerprint-Based Passive Indoor Human Detection
Information 2018, 9(4), 95; https://doi.org/10.3390/info9040095
Received: 4 April 2018 / Revised: 12 April 2018 / Accepted: 15 April 2018 / Published: 17 April 2018
PDF Full-text (582 KB) | HTML Full-text | XML Full-text
Abstract
Passive indoor personnel detection is currently a hot topic. Existing methods are greatly influenced by environmental changes, and there are problems with the accuracy and robustness of detection. Passive personnel detection based on Wi-Fi not only solves these problems, but also has the advantages of being low cost and easy to implement, and can be readily applied to elderly care and safety monitoring. In this paper, we propose a passive indoor personnel detection method based on Wi-Fi, which we call FDF-PIHD (Frequency Domain Fingerprint-based Passive Indoor Human Detection). With this method, fine-grained physical-layer Channel State Information (CSI) is extracted to generate feature fingerprints, so that the state of the scene can be determined by matching online fingerprints against offline fingerprints. To improve accuracy, we combine the detection results of three receiving antennas to obtain the final result. The experimental results show that the detection rates of our proposed scheme all reach above 90%, whether the scene is human-free or contains a stationary or moving human. In addition, the method can not only detect whether there is a target indoors, but also determine the target's current state. Full article
(This article belongs to the Section Information Applications)
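A minimal sketch of the fingerprint-matching step, assuming each state's offline fingerprint is a numeric feature vector and each antenna contributes one online vector; the three per-antenna decisions are then fused by majority vote as the abstract describes (the feature values here are made up):

```python
import math
from collections import Counter

def nearest_state(fingerprint, db):
    """Match one online fingerprint against the offline fingerprint
    database by Euclidean distance; return the closest state label."""
    return min(db, key=lambda state: math.dist(fingerprint, db[state]))

def detect(per_antenna_fps, db):
    """Fuse the three antennas' individual decisions by majority vote."""
    votes = [nearest_state(fp, db) for fp in per_antenna_fps]
    return Counter(votes).most_common(1)[0][0]

# Offline fingerprints for two scene states (illustrative values).
db = {"empty": [1.0, 1.0, 1.0], "moving": [5.0, 4.0, 6.0]}

# Three noisy online fingerprints, one per receiving antenna.
result = detect([[4.8, 4.1, 5.9], [1.2, 0.9, 1.1], [5.2, 3.8, 6.3]], db)
```

Even though one antenna votes "empty", the two-of-three majority recovers the correct state.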

Open Access Article: An Ensemble of Condition Based Classifiers for Device Independent Detailed Human Activity Recognition Using Smartphones
Information 2018, 9(4), 94; https://doi.org/10.3390/info9040094
Received: 25 January 2018 / Revised: 11 April 2018 / Accepted: 12 April 2018 / Published: 16 April 2018
PDF Full-text (2896 KB) | HTML Full-text | XML Full-text
Abstract
Human activity recognition is increasingly used for medical, surveillance and entertainment applications. For better monitoring, these applications require identification of detailed activities such as sitting on a chair/floor, brisk/slow walking, running, etc. This paper proposes a ubiquitous solution to detailed activity recognition through the use of smartphone sensors. Using smartphones for activity recognition poses challenges such as device independence and varied usage behavior in terms of where the smartphone is kept. Only a few works address one or more of these challenges. Consequently, in this paper, we present a detailed activity recognition framework for identifying both static and dynamic activities that addresses the above-mentioned challenges. The framework supports cases where (i) the dataset contains data from the accelerometer only; and (ii) the dataset contains data from both the accelerometer and gyroscope sensors of smartphones. The framework forms an ensemble of condition-based classifiers to address the variance due to different hardware configurations and usage behavior in terms of where the smartphone is kept (right pants pocket, shirt pocket or right hand). The framework is implemented and tested on a real data set collected from 10 users with five different device configurations. It is observed that, with our proposed approach, 94% recognition accuracy can be achieved. Full article
(This article belongs to the Special Issue e-Health Pervasive Wireless Applications and Services (e-HPWAS'17))
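The "condition based" dispatch can be sketched as follows: the framework picks a classifier according to which sensors the sample actually contains. The threshold-based stand-in classifiers below are illustrative only; the paper's conditions also cover device configuration and placement:

```python
def classify_accel(sample):
    """Stand-in classifier that only has accelerometer data."""
    return "dynamic" if max(sample["accel"]) > 2.0 else "static"

def classify_accel_gyro(sample):
    """Stand-in classifier that can also use the gyroscope."""
    moving = max(sample["accel"]) > 2.0 or max(sample["gyro"]) > 1.0
    return "dynamic" if moving else "static"

def ensemble_predict(sample):
    """Condition-based dispatch: choose the classifier that matches
    the sensors actually present in the sample."""
    if "gyro" in sample:
        return classify_accel_gyro(sample)
    return classify_accel(sample)

a = ensemble_predict({"accel": [0.9, 1.1, 0.8]})                      # accelerometer only
b = ensemble_predict({"accel": [0.9, 1.1, 0.8], "gyro": [1.5, 0.2]})  # both sensors
```

The same sample can be classified differently once gyroscope evidence becomes available, which is exactly why per-condition classifiers are combined.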

Open Access Article: Robust Eye Blink Detection Based on Eye Landmarks and Savitzky–Golay Filtering
Information 2018, 9(4), 93; https://doi.org/10.3390/info9040093
Received: 11 March 2018 / Revised: 29 March 2018 / Accepted: 9 April 2018 / Published: 15 April 2018
PDF Full-text (516 KB) | HTML Full-text | XML Full-text
Abstract
A new technique to detect eye blinks is proposed based on automatic tracking of facial landmarks to localise the eyes and eyelid contours. Automatic facial landmark detectors are trained on an in-the-wild dataset and show outstanding robustness to varying lighting conditions, facial expressions, and head orientation. The proposed technique estimates the facial landmark positions and extracts the vertical distance between the eyelids for each video frame. Next, a Savitzky–Golay (SG) filter is employed to smooth the obtained signal while keeping the peak information needed to detect eye blinks. Finally, eye blinks are detected as sharp peaks, and a finite state machine is used to distinguish false blinks from true blinks based on their duration. The proposed technique is shown to outperform state-of-the-art methods on three standard datasets. Full article
(This article belongs to the Special Issue Selected Papers from ICBRA 2017)
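The smoothing-plus-state-machine pipeline can be sketched with a fixed 5-point quadratic Savitzky–Golay kernel and a duration check. This is a simplified variant: it counts below-threshold runs of the eyelid-distance signal rather than detecting sharp peaks, and all thresholds are invented for the example:

```python
SG5 = [-3/35, 12/35, 17/35, 12/35, -3/35]  # 5-point quadratic SG kernel

def sg_smooth(signal):
    """Smooth with the 5-point SG kernel; edges are left unfiltered."""
    out = list(signal)
    for i in range(2, len(signal) - 2):
        out[i] = sum(c * signal[i + j - 2] for j, c in enumerate(SG5))
    return out

def count_blinks(eyelid_dist, thresh=0.5, min_frames=2, max_frames=8):
    """Simple state machine: a blink is a below-threshold run whose
    duration is plausible; shorter or longer runs are rejected."""
    blinks, run = 0, 0
    for d in sg_smooth(eyelid_dist):
        if d < thresh:
            run += 1
        else:
            if min_frames <= run <= max_frames:
                blinks += 1
            run = 0
    if min_frames <= run <= max_frames:
        blinks += 1
    return blinks

# One real 3-frame closure and one single-frame noise spike.
signal = [1.0] * 5 + [0.2] * 3 + [1.0] * 5 + [0.2] * 1 + [1.0] * 5
n = count_blinks(signal)
```

The SG filter absorbs the single-frame spike while preserving the genuine closure, so only one blink is counted.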

Open Access Article: An Architecture to Manage Incoming Traffic of Inter-Domain Routing Using OpenFlow Networks
Information 2018, 9(4), 92; https://doi.org/10.3390/info9040092
Received: 6 February 2018 / Revised: 9 April 2018 / Accepted: 11 April 2018 / Published: 14 April 2018
Cited by 2 | PDF Full-text (1085 KB) | HTML Full-text | XML Full-text
Abstract
The Border Gateway Protocol (BGP) is the current state of the art in inter-domain routing between Autonomous Systems (ASes). Although BGP has different mechanisms to manage outbound traffic in an AS domain, it lacks an efficient tool for controlling inbound traffic from transit ASes such as Internet Service Providers (ISPs). For inter-domain routing, BGP's destination-based forwarding paradigm limits the granularity of distributing network traffic among the multiple paths of the current Internet topology. Thus, this work offers a new architecture to manage incoming inter-domain traffic using OpenFlow networks. The architecture exploits direct inter-domain communication to exchange control information together with the functionalities of the OpenFlow protocol. Based on the results achieved for the size of the exchanged messages, the proposed architecture is not only scalable, but also capable of performing load balancing for inbound traffic using different strategies. Full article
(This article belongs to the Section Information and Communications Technology)

Open Access Article: Substantially Evolutionary Theorizing in Designing Software-Intensive Systems
Information 2018, 9(4), 91; https://doi.org/10.3390/info9040091
Received: 6 February 2018 / Revised: 2 April 2018 / Accepted: 5 April 2018 / Published: 13 April 2018
PDF Full-text (46895 KB) | HTML Full-text | XML Full-text
Abstract
Useful lessons inherited from scientific experience open promising ways to increase the degree of success in designing systems with software. One such way is to search for and build an applied theory that takes into account the nature of design and the specificity of software engineering. This paper presents a substantially evolutionary approach to creating project theories, the application of which leads to the positive effects that are traditionally expected from theorizing. Any implementation of the approach is based on designers reflecting the operational space of designing onto a semantic memory of a question-answer type. One result of such reflection is a system of question-answer nets, the nodes of which register facts of interactions between designers and accessible experience. A set of such facts is used by designers for creating and using a theory that belongs to a new subclass of Grounded Theories. This subclass is oriented toward the organizational and behavioral features of a project's work based on design thinking, automated mental imagination, and thought experimenting, which facilitate increasing the degree of controlled intellectualization in the design process and, correspondingly, the degree of success in the development of software-intensive systems. Full article
(This article belongs to the Special Issue Interactive Systems: Problems of Human-Computer Interactions)

Open Access Article: A Hybrid Information Mining Approach for Knowledge Discovery in Cardiovascular Disease (CVD)
Information 2018, 9(4), 90; https://doi.org/10.3390/info9040090
Received: 14 March 2018 / Revised: 8 April 2018 / Accepted: 10 April 2018 / Published: 12 April 2018
PDF Full-text (1691 KB) | HTML Full-text | XML Full-text
Abstract
The healthcare domain is usually perceived as “information rich” yet “knowledge poor”. Nowadays, an unprecedented effort is underway to increase the use of business intelligence techniques to solve this problem. Heart disease (HD) is a major cause of mortality in modern society. This paper analyzes the risk factors that have been identified in cardiovascular disease (CVD) surveillance systems. The Heart Care study identifies attributes related to CVD risk (gender, age, smoking habit, etc.) and other dependent variables that include a specific form of CVD (diabetes, hypertension, cardiac disease, etc.). In this paper, we combine Clustering, Association Rules, and Neural Networks for the assessment of heart-event-related risk factors, targeting the reduction of CVD risk. With the K-means algorithm, significant groups of patients are found. Then, the Apriori algorithm is applied in order to understand the kinds of relations between the attributes within the dataset, first looking at the whole dataset and then refining the results through the subsets defined by the clusters. Finally, both results allow us to better define patients' characteristics in order to make predictions about CVD risk with a Multilayer Perceptron Neural Network. The results obtained with the hybrid information mining approach indicate that it is an effective strategy for knowledge discovery concerning chronic diseases, particularly CVD risk. Full article
(This article belongs to the Special Issue Semantics for Big Data Integration)
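The Apriori stage of the pipeline can be sketched as level-wise support counting; the toy transactions below (imaginary patient attribute sets) are only for illustration:

```python
def frequent_itemsets(transactions, min_support):
    """Level-wise Apriori: keep itemsets whose support (the fraction
    of transactions containing them) reaches min_support."""
    n = len(transactions)
    result = {}
    current = [frozenset([i]) for i in sorted({i for t in transactions for i in t})]
    k = 1
    while current:
        kept = []
        for cand in current:
            sup = sum(cand <= t for t in transactions) / n
            if sup >= min_support:
                result[cand] = sup
                kept.append(cand)
        # candidate generation: unions of kept k-itemsets of size k + 1
        current = list({a | b for a in kept for b in kept if len(a | b) == k + 1})
        k += 1
    return result

tx = [frozenset(t) for t in
      [{"smoker", "hypertension", "cvd"},
       {"smoker", "cvd"},
       {"hypertension", "cvd"},
       {"smoker", "hypertension"}]]
freq = frequent_itemsets(tx, min_support=0.5)
```

Frequent pairs such as {smoker, cvd} survive the support threshold, while the full triple does not; association rules would then be derived from the surviving itemsets.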

Open Access Editorial: Editorial for the Special Issue on “Wireless Energy Harvesting for Future Wireless Communications”
Information 2018, 9(4), 89; https://doi.org/10.3390/info9040089
Received: 10 April 2018 / Revised: 10 April 2018 / Accepted: 10 April 2018 / Published: 12 April 2018
PDF Full-text (134 KB) | HTML Full-text | XML Full-text
(This article belongs to the Special Issue Wireless Energy Harvesting for Future Wireless Communications)
Open Access Article: Hesitant Neutrosophic Linguistic Sets and Their Application in Multiple Attribute Decision Making
Information 2018, 9(4), 88; https://doi.org/10.3390/info9040088
Received: 13 March 2018 / Revised: 28 March 2018 / Accepted: 5 April 2018 / Published: 11 April 2018
PDF Full-text (325 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, the hesitant neutrosophic linguistic set is first defined by extending a hesitant fuzzy set to accommodate linguistic terms and neutrosophic fuzzy values. Some operational laws are defined for hesitant neutrosophic linguistic fuzzy information. Several distance measures are defined, including the generalized hesitant neutrosophic linguistic distance, the generalized hesitant neutrosophic linguistic Hausdorff distance, and the generalized hesitant neutrosophic linguistic hybrid distance. Some hesitant neutrosophic fuzzy linguistic aggregation operators based on the Choquet integral are also defined. A new multiple attribute decision making method for hesitant neutrosophic fuzzy linguistic information is then developed based on TOPSIS. In order to illustrate the feasibility and practical advantages of the new algorithm, we use it to select a company to invest in. The new method is then compared with other methods. Full article
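As a rough intuition for the generalized distances, one can write a Minkowski-style distance over (truth, indeterminacy, falsity) triples. This is a strong simplification: the paper's measures also involve hesitant sets of unequal length and linguistic terms, which are omitted here:

```python
def generalized_distance(a, b, lam=2.0):
    """Generalized (Minkowski-style) distance between two elements,
    each a list of (truth, indeterminacy, falsity) triples of equal
    length; lam=1 gives a Hamming-like and lam=2 a Euclidean-like
    distance, in the spirit of the paper's generalized measures."""
    assert len(a) == len(b)
    total = 0.0
    for (t1, i1, f1), (t2, i2, f2) in zip(a, b):
        total += (abs(t1 - t2) ** lam
                  + abs(i1 - i2) ** lam
                  + abs(f1 - f2) ** lam) / 3
    return (total / len(a)) ** (1 / lam)

x = [(0.8, 0.1, 0.1), (0.6, 0.2, 0.3)]
y = [(0.7, 0.2, 0.1), (0.5, 0.3, 0.4)]
d = generalized_distance(x, y, lam=1.0)
```

A TOPSIS-style ranking would compute such distances from each alternative to the ideal and anti-ideal solutions.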
Open Access Article: Auction-Based Cloud Service Pricing and Penalty with Availability on Demand
Information 2018, 9(4), 87; https://doi.org/10.3390/info9040087
Received: 19 March 2018 / Revised: 6 April 2018 / Accepted: 9 April 2018 / Published: 11 April 2018
PDF Full-text (760 KB) | HTML Full-text | XML Full-text
Abstract
Availability is one of the main concerns of cloud users, and cloud providers always try to provide higher availability to improve user satisfaction. However, higher availability results in higher provider costs and lower social welfare. In this paper, taking into account both the users' valuations and their desired availability, we design resource allocation, pricing and penalty mechanisms with availability on demand. We consider two scenarios: public availability, in which the desired availabilities of all users are public information, and private availability, in which the desired availabilities are users' private information. Analyzing the possible behaviours of users, we design a truthful deterministic mechanism with a 2-approximation in the public availability scenario and a universally truthful mechanism with a 1/(1 + γ)-approximation in the private availability scenario, where γ is the backup ratio of resources with the highest availability. The experiment results show that our mechanisms significantly improve social welfare compared to a mechanism that does not consider the availability demands of users. Full article

Open Access Article: An Improved Two-Way Security Authentication Protocol for RFID System
Information 2018, 9(4), 86; https://doi.org/10.3390/info9040086
Received: 11 March 2018 / Revised: 8 April 2018 / Accepted: 9 April 2018 / Published: 11 April 2018
PDF Full-text (855 KB) | HTML Full-text | XML Full-text
Abstract
This paper proposes an improved two-way security authentication protocol to improve the security level of Radio Frequency Identification (RFID) systems. In the proposed protocol, tags calculate a hash value, which is divided into two parts. The left half is used to verify the identity of the tag, and the right half is used to verify the identity of the reader, which reduces the tag's computation and storage. By updating the tag's secret key value and random number, the protocol can prevent most attacks on RFID systems, such as data privacy leakage, replay attacks, fake attacks, position tracking and asynchronous attacks. The correctness of the protocol is proved using Burrows-Abadi-Needham (BAN) logic analysis. The evaluation results show that the scalability of the proposed protocol is achieved within acceptable response time limits. The simulation results indicate that the protocol has significant performance-efficiency advantages for many tags, which provides a reliable approach for RFID system application in practice. Full article
(This article belongs to the Section Information Systems)
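The left-half/right-half verification idea can be sketched with an ordinary cryptographic hash; the key, nonce, and use of SHA-256 are assumptions for illustration (the paper specifies its own hash inputs and the key-update step):

```python
import hashlib

def split_hash(key, nonce):
    """Hash the shared secret with a nonce and split the digest:
    the left half authenticates the tag, the right half the reader."""
    digest = hashlib.sha256(key + nonce).digest()
    half = len(digest) // 2
    return digest[:half], digest[half:]

# Tag side: compute both halves, transmit only the left one.
key, nonce = b"shared-secret", b"rand-123"
left_tag, right_tag = split_hash(key, nonce)

# Reader side: recompute, verify the tag via the left half, then
# prove its own identity by returning the right half.
left_srv, right_srv = split_hash(key, nonce)
tag_ok = left_srv == left_tag        # reader authenticates the tag
reader_ok = right_srv == right_tag   # tag authenticates the reader
```

Splitting one digest gives mutual authentication while the tag computes only a single hash, which is the source of the computation and storage savings the abstract mentions.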

Open Access Article: Location Regularization-Based POI Recommendation in Location-Based Social Networks
Information 2018, 9(4), 85; https://doi.org/10.3390/info9040085
Received: 5 March 2018 / Revised: 1 April 2018 / Accepted: 7 April 2018 / Published: 9 April 2018
Cited by 1 | PDF Full-text (456 KB) | HTML Full-text | XML Full-text
Abstract
POI (point-of-interest) recommendation, one of the most efficient information filtering techniques, has been widely utilized in helping people find places they are likely to visit, and many related methods have been proposed. Although methods that exploit geographical information for POI recommendation have been studied, few of these studies have addressed the implicit feedback problem. In fact, in most location-based social networks, the user's negative preferences are not explicitly observable. Consequently, it is inappropriate to treat POI recommendation as a traditional recommendation problem. Moreover, previous studies mainly explore geographical information from a user perspective, and methods that model it from a location perspective are not well explored. Hence, this work concentrates on exploiting geographical characteristics from a location perspective for implicit feedback, and a neighborhood-aware Bayesian personalized ranking method (NBPR) is proposed. To be specific, the weighted Bayesian framework that was proposed for personalized ranking is first introduced as our basic POI recommendation method. To exploit the geographical characteristics from a location perspective, we then constrain the ranking loss with a regularization term derived from locations, assuming that neighboring POIs are more inclined to be visited by similar users. Finally, several experiments are conducted on two real-world social networks to evaluate the NBPR method, and we find that NBPR performs better than other related recommendation algorithms. This result demonstrates the effectiveness of our method with neighborhood information and the importance of geographical characteristics. Full article
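The ranking-plus-regularization idea behind NBPR can be sketched as a single SGD step: a standard BPR update on a (user, visited POI, unvisited POI) triple, plus a term pulling a POI's latent vector toward its geographic neighbors. All vectors, rates, and the exact form of the regularizer are illustrative assumptions, not the paper's formulation:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def bpr_step(u, pos, neg, neighbors, lr=0.05, reg=0.01, geo=0.1):
    """One SGD step of Bayesian personalized ranking, plus a
    neighborhood term pulling the visited POI's vector toward the
    mean vector of its geographic neighbors (plain Python lists)."""
    x = sum(a * b for a, b in zip(u, pos)) - sum(a * b for a, b in zip(u, neg))
    g = 1 - sigmoid(x)  # gradient weight of the pairwise ranking loss
    for k in range(len(u)):
        mean_nb = sum(nb[k] for nb in neighbors) / len(neighbors)
        du = g * (pos[k] - neg[k]) - reg * u[k]
        dp = g * u[k] - reg * pos[k] + geo * (mean_nb - pos[k])
        dn = -g * u[k] - reg * neg[k]
        u[k] += lr * du
        pos[k] += lr * dp
        neg[k] += lr * dn

u = [0.1, 0.0]          # user latent vector
pos = [0.0, 0.1]        # visited POI
neg = [0.2, 0.2]        # unvisited POI
nbrs = [[0.0, 0.2], [0.1, 0.3]]  # geographic neighbors of pos

def margin():
    return sum(a * b for a, b in zip(u, pos)) - sum(a * b for a, b in zip(u, neg))

x_before = margin()
bpr_step(u, pos, neg, nbrs)
x_after = margin()
```

Each step widens the score margin between the visited and unvisited POI while keeping nearby POIs' vectors close, which is the intuition behind the location regularizer.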
Open AccessArticle Hybrid Destination-Based Jamming and Opportunistic Scheduling with Optimal Power Allocation to Secure Multiuser Untrusted Relay Networks
Information 2018, 9(4), 84; https://doi.org/10.3390/info9040084
Received: 7 March 2018 / Revised: 1 April 2018 / Accepted: 2 April 2018 / Published: 9 April 2018
PDF Full-text (457 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we investigate secure communication for a dual-hop multiuser relay network, where a source communicates with N (N ≥ 1) destinations via an untrusted variable-gain relay. To exploit multiuser diversity while protecting the source’s confidential message, we first propose a joint destination-based jamming and opportunistic scheduling (DJOS) scheme. Then, we derive closed-form approximate and asymptotic expressions of the secrecy outage probability (SOP) for the considered system with DJOS. Furthermore, we determine an asymptotically optimal power allocation (OPA), which minimizes the asymptotic SOP, to further improve the secrecy performance. Our analytical results show that the achievable secrecy diversity order in terms of SOP with fixed power allocation is min(1, N/2), whereas, with OPA, the achievable secrecy diversity order can be improved up to min(1, 2N/(N + 2)). This interesting result reveals that OPA can improve the secrecy diversity order of the single-user network. This is intuitive since the full diversity order of 1 cannot be achieved when N = 1, thus leaving some room for OPA to improve the diversity order. Nevertheless, for N ≥ 2, the effect of OPA is to increase the secrecy array gain rather than the secrecy diversity order, since the full diversity order of 1 has already been achieved by the opportunistic scheduling scheme. Finally, simulation results are presented to validate our analysis. Full article
(This article belongs to the Section Information and Communications Technology)
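The two diversity-order expressions in the abstract are easy to tabulate for small N; a quick numeric check with exact fractions confirms that OPA improves the diversity order only in the single-user case:

```python
from fractions import Fraction

def diversity_fixed(N):
    # secrecy diversity order with fixed power allocation: min(1, N/2)
    return min(Fraction(1), Fraction(N, 2))

def diversity_opa(N):
    # secrecy diversity order with optimal power allocation: min(1, 2N/(N+2))
    return min(Fraction(1), Fraction(2 * N, N + 2))

# N = 1: fixed gives 1/2, OPA gives 2/3 -> OPA raises the diversity order.
# N >= 2: both give the full diversity order of 1 -> OPA only raises the array gain.
```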
Open AccessFeature PaperArticle Thinking in Patterns and the Pattern of Human Thought as Contrasted with AI Data Processing
Information 2018, 9(4), 83; https://doi.org/10.3390/info9040083
Received: 5 March 2018 / Revised: 4 April 2018 / Accepted: 5 April 2018 / Published: 8 April 2018
Cited by 1 | PDF Full-text (662 KB) | HTML Full-text | XML Full-text
Abstract
We propose that the ability of humans to identify and create patterns led to the unique aspects of human cognition and culture as a complex emergent dynamic system consisting of the following human traits: patterning, social organization beyond that of the nuclear family that emerged with the control of fire, rudimentary set theory or categorization and spoken language that co-emerged, the ability to deal with information overload, conceptualization, imagination, abductive reasoning, invention, art, religion, mathematics and science. These traits are interrelated as they all involve the ability to flexibly manipulate information from our environments via pattern restructuring. We argue that the human mind is the emergent product of a shift from external percept-based processing to a concept and language-based form of cognition based on patterning. In this article, we describe the evolution of human cognition and culture, describing the unique patterns of human thought and how we, humans, think in terms of patterns. Full article
(This article belongs to the Special Issue AI AND THE SINGULARITY: A FALLACY OR A GREAT OPPORTUNITY?)
Open AccessArticle Technological Singularity: What Do We Really Know?
Information 2018, 9(4), 82; https://doi.org/10.3390/info9040082
Received: 27 February 2018 / Revised: 25 March 2018 / Accepted: 3 April 2018 / Published: 8 April 2018
Cited by 1 | PDF Full-text (204 KB) | HTML Full-text | XML Full-text
Abstract
The concept of the technological singularity is frequently reified. Futurist forecasts inferred from this imprecise reification are then criticized, and the reified ideas are incorporated in the core concept. In this paper, I try to disentangle the facts related to the technological singularity from more speculative beliefs about the possibility of creating artificial general intelligence. I use the theory of metasystem transitions and the concept of universal evolution to analyze some misconceptions about the technological singularity. While it may be neither purely technological, nor truly singular, we can predict that the next transition will take place, and that the resulting metasystem will demonstrate exponential growth in complexity, with a doubling time of less than half a year, exceeding the complexity of existing cybernetic systems within a few decades. Full article
(This article belongs to the Special Issue AI AND THE SINGULARITY: A FALLACY OR A GREAT OPPORTUNITY?)
Open AccessArticle Additive Self-Dual Codes over GF(4) with Minimal Shadow
Information 2018, 9(4), 81; https://doi.org/10.3390/info9040081
Received: 19 March 2018 / Revised: 4 April 2018 / Accepted: 5 April 2018 / Published: 7 April 2018
Cited by 1 | PDF Full-text (772 KB) | HTML Full-text | XML Full-text
Abstract
We define additive self-dual codes over GF(4) with minimal shadow, and we prove the nonexistence of extremal Type I additive self-dual codes over GF(4) with minimal shadow for some parameters. Full article
(This article belongs to the Section Information Theory and Methodology)
Open AccessArticle Real-Time Location Systems for Asset Management in Nursing Homes: An Explorative Study of Ethical Aspects
Information 2018, 9(4), 80; https://doi.org/10.3390/info9040080
Received: 14 March 2018 / Revised: 5 April 2018 / Accepted: 5 April 2018 / Published: 7 April 2018
PDF Full-text (7857 KB) | HTML Full-text | XML Full-text
Abstract
Real-time location systems (RTLS) can be implemented in aged care for monitoring persons with wandering behaviour and asset management. RTLS can help retrieve personal items and assistive technologies that when lost or misplaced may have serious financial, economic and practical implications. Various ethical questions arise during the design and implementation phases of RTLS. This study investigates the perspectives of various stakeholders on ethical questions regarding the use of RTLS for asset management in nursing homes. Three focus group sessions were conducted concerning the needs and wishes of (1) care professionals; (2) residents and their relatives; and (3) researchers and representatives of small and medium-sized enterprises (SMEs). The sessions were transcribed and analysed through a process of open, axial and selective coding. Ethical perspectives concerned the design of the system, the possibilities and functionalities of tracking, monitoring in general and the user-friendliness of the system. In addition, ethical concerns were expressed about security and responsibilities. The ethical perspectives differed per focus group. Aspects of privacy, the benefit of reduced search times, trust, responsibility, security and well-being were raised. The main focus of the carers and residents was on a reduced burden and privacy, whereas the SMEs stressed the potential for improving products and services. Full article
(This article belongs to the Section Information and Communications Technology)
Open AccessArticle Conceptions of Artificial Intelligence and Singularity
Information 2018, 9(4), 79; https://doi.org/10.3390/info9040079
Received: 15 February 2018 / Revised: 1 April 2018 / Accepted: 3 April 2018 / Published: 6 April 2018
PDF Full-text (785 KB) | HTML Full-text | XML Full-text
Abstract
In the current discussions about “artificial intelligence” (AI) and “singularity”, both labels are used with several very different senses, and the confusion among these senses is the root of many disagreements. Similarly, although “artificial general intelligence” (AGI) has become a widely used term in the related discussions, many people are not really familiar with this research, including its aim and status. We analyze these notions, and introduce the results of our own AGI research. Our main conclusions are that: (1) it is possible to build a computer system that follows the same laws of thought and shows properties similar to those of the human mind, but, since such an AGI will have neither a human body nor human experience, it will not behave exactly like a human, nor will it be “smarter than a human” on all tasks; and (2) since the development of an AGI requires a reasonably good understanding of the general mechanism of intelligence, the system’s behaviors will still be understandable and predictable in principle. Therefore, the success of AGI will not necessarily lead to a singularity beyond which the future becomes completely incomprehensible and uncontrollable. Full article
(This article belongs to the Special Issue AI AND THE SINGULARITY: A FALLACY OR A GREAT OPPORTUNITY?)
Open AccessArticle Cosmic Evolutionary Philosophy and a Dialectical Approach to Technological Singularity
Information 2018, 9(4), 78; https://doi.org/10.3390/info9040078
Received: 13 February 2018 / Revised: 3 April 2018 / Accepted: 4 April 2018 / Published: 5 April 2018
PDF Full-text (14157 KB) | HTML Full-text | XML Full-text
Abstract
The anticipated next stage of human organization is often described by futurists as a global technological singularity. This next stage of complex organization is hypothesized to be actualized by scientific-technic knowledge networks. However, the general consequences of this process for the meaning of human existence are unknown. Here, it is argued that cosmic evolutionary philosophy is a useful worldview for grounding an understanding of the potential nature of this future event. In the cosmic evolutionary philosophy, reality is conceptualized locally as a universal dynamic of emergent evolving relations. This universal dynamic is structured by a singular astrophysical origin and an organizational progress from sub-atomic particles to global civilization mediated by qualitative phase transitions. From this theoretical ground, we attempt to understand the next stage of universal dynamics in terms of the motion of general ideation attempting to actualize higher unity. In this way, we approach technological singularity dialectically as an event caused by ideational transformations and mediated by an emergent intersubjective objectivity. From these speculations, a historically-engaged perspective on the nature of human consciousness is articulated where the truth of reality as an emergent unity depends on the collective action of a multiplicity of human observers. Full article
(This article belongs to the Special Issue AI AND THE SINGULARITY: A FALLACY OR A GREAT OPPORTUNITY?)
Open AccessArticle Tag-Driven Online Novel Recommendation with Collaborative Item Modeling
Information 2018, 9(4), 77; https://doi.org/10.3390/info9040077
Received: 10 February 2018 / Revised: 19 March 2018 / Accepted: 3 April 2018 / Published: 5 April 2018
PDF Full-text (8890 KB) | HTML Full-text | XML Full-text
Abstract
Online novel recommendation suggests attractive novels according to the preferences and characteristics of users or novels, and is increasingly touted as an indispensable service of many online stores and websites. The interests of the majority of users remain stable over a certain period. However, the initial recommendation list produced by collaborative filtering (CF) spans broad categories; that is to say, it is very likely to contain many inappropriately recommended novels. Meanwhile, most algorithms assume that users can provide explicit preferences. This assumption does not always hold, especially in online novel reading. To solve these issues, a tag-driven algorithm with collaborative item modeling (TDCIM) is proposed for online novel recommendation. Online novel reading differs from traditional book marketing and lacks explicit preference ratings. In addition, collaborative filtering frequently suffers from the Matthew effect, which undermines personalized recommendation and leads to serious long-tail problems. Therefore, item-based CF is improved through latent preference ratings with a punishment mechanism based on novel popularity. On this basis, a tag-driven algorithm is constructed by means of collaborative item modeling and tag extension. Experimental results show that online novel recommendation is greatly improved by the proposed algorithm. Full article
(This article belongs to the Special Issue AI for Digital Humanities)
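The combination of item-based CF with a popularity punishment can be sketched as follows. This is a generic illustration, not TDCIM itself: the cosine-style similarity, the exact penalty `(1 + popularity)^alpha`, and the implicit 0/1 interaction matrix are assumptions.

```python
import numpy as np

def item_scores(R, popularity_alpha=0.5):
    """Item-based CF scores from an implicit user-item matrix R (0/1 reads),
    with a popularity penalty so blockbuster novels do not dominate the list.
    """
    pop = R.sum(axis=0)                       # per-novel popularity
    norm = np.sqrt(pop)
    norm[norm == 0] = 1.0                     # guard against unread novels
    S = (R.T @ R) / np.outer(norm, norm)      # cosine-style item similarity
    np.fill_diagonal(S, 0.0)                  # a novel should not recommend itself
    S /= (1.0 + pop) ** popularity_alpha      # punish recommending popular novels
    return R @ S                              # predicted preference scores
```

With this penalty, a niche novel co-read with a user’s history can outrank a merely popular one, which is one way to counter the Matthew effect the abstract mentions.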
Open AccessArticle Recognizing Textual Entailment: Challenges in the Portuguese Language
Information 2018, 9(4), 76; https://doi.org/10.3390/info9040076
Received: 28 January 2018 / Revised: 20 March 2018 / Accepted: 26 March 2018 / Published: 29 March 2018
PDF Full-text (494 KB) | HTML Full-text | XML Full-text
Abstract
Recognizing textual entailment comprises the task of determining semantic entailment relations between text fragments. A text fragment entails another text fragment if, from the meaning of the former, one can infer the meaning of the latter. If such a relation is bidirectional, then we are in the presence of a paraphrase. Automatically recognizing textual entailment relations captures major semantic inference needs in several natural language processing (NLP) applications. As in many NLP tasks, textual entailment corpora for English abound, while the same is not true for more resource-scarce languages such as Portuguese. Exploiting what seems to be the only Portuguese corpus for textual entailment and paraphrases (the ASSIN corpus), in this paper, we address the task of automatically recognizing textual entailment (RTE) and paraphrases from text written in the Portuguese language, by employing supervised machine learning techniques. We employ lexical, syntactic and semantic features, and analyze the impact of using semantic-based approaches on the performance of the system. We then try to take advantage of the bi-dialect nature of ASSIN to compensate for its limited size. With the same aim, we explore modeling the task of recognizing textual entailment and paraphrases as a binary classification problem by considering the bidirectional nature of paraphrases as entailment relationships. Addressing the task as a multi-class classification problem, we achieve results in line with the winner of the ASSIN Challenge. In addition, we conclude that semantic-based approaches are promising in this task, and that combining data from European and Brazilian Portuguese is less straightforward than it may initially seem. The binary classification modeling of the problem does not seem to bring advantages to the original multi-class model, despite the outstanding results obtained by the binary classifier for recognizing textual entailments. Full article
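To give a flavor of the lexical features such a supervised RTE system might use (the actual ASSIN feature set is richer and also includes syntactic and semantic features), consider simple word-overlap measures between the text and the hypothesis:

```python
def overlap_features(text, hypothesis):
    """Toy lexical features for an entailment classifier: how much of the
    hypothesis is covered by the text, and their relative lengths.
    """
    t = set(text.lower().split())
    h = set(hypothesis.lower().split())
    return {
        "coverage": len(t & h) / len(h),   # fraction of hypothesis words in the text
        "len_ratio": len(h) / len(t),      # short hypotheses are easier to entail
    }
```

A feature vector like this, stacked with syntactic and semantic signals, would then be fed to a standard multi-class classifier.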
Open AccessFeature PaperArticle Language-Agnostic Relation Extraction from Abstracts in Wikis
Information 2018, 9(4), 75; https://doi.org/10.3390/info9040075
Received: 5 February 2018 / Revised: 16 March 2018 / Accepted: 28 March 2018 / Published: 29 March 2018
PDF Full-text (1304 KB) | HTML Full-text | XML Full-text
Abstract
Large-scale knowledge graphs, such as DBpedia, Wikidata, or YAGO, can be enhanced by relation extraction from text, using the data in the knowledge graph as training data, i.e., using distant supervision. While most existing approaches use language-specific methods (usually for English), we present a language-agnostic approach that exploits background knowledge from the graph instead of language-specific techniques and builds machine learning models only from language-independent features. We demonstrate the extraction of relations from Wikipedia abstracts, using the twelve largest language editions of Wikipedia. From those, we can extract 1.6 M new relations in DBpedia at a level of precision of 95%, using a RandomForest classifier trained only on language-independent features. We furthermore investigate the similarity of models for different languages and show an exemplary geographical breakdown of the information extracted. In a second series of experiments, we show how the approach can be transferred to DBkWik, a knowledge graph extracted from thousands of Wikis. We discuss the challenges and first results of extracting relations from a larger set of Wikis, using a less formalized knowledge graph. Full article
(This article belongs to the Special Issue Towards the Multilingual Web of Data)
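The distant-supervision setup described in the abstract can be sketched as follows; the data shapes and the feature choices (structural signals such as link position within the abstract) are illustrative assumptions, not the paper’s exact feature set:

```python
def distant_supervision(abstracts, kg):
    """Build training examples by aligning entity links in wiki abstracts
    with relations already present in the knowledge graph.

    abstracts: list of (page_entity, [linked_entities]) pairs
    kg: dict mapping (subject, object) -> relation name
    Returns (features, label) pairs; label is None for unknown pairs.
    """
    examples = []
    for subj, linked in abstracts:
        for pos, obj in enumerate(linked):
            label = kg.get((subj, obj))  # distant supervision: the KG supplies the label
            # language-independent features: no tokens, only structural signals
            features = {"position": pos, "is_first_link": pos == 0}
            examples.append((features, label))
    return examples
```

Because no feature depends on the surface text, the same classifier (e.g., a random forest) can be trained on any language edition.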
Open AccessArticle Robust Aircraft Detection with a Simple and Efficient Model
Information 2018, 9(4), 74; https://doi.org/10.3390/info9040074
Received: 28 January 2018 / Revised: 20 March 2018 / Accepted: 22 March 2018 / Published: 29 March 2018
Cited by 1 | PDF Full-text (75853 KB) | HTML Full-text | XML Full-text
Abstract
Aircraft detection is the main task of the optoelectronic guiding and monitoring system in airports. In practical applications, we demand not only detection accuracy, but also efficiency. Existing detection approaches usually train a set of holistic templates to search over a multi-scale image space, which is inefficient and costly. Moreover, the holistic templates are sensitive to occluded or truncated objects, even when trained on many complicated features. To address these problems, we first propose a kind of local informative feature which combines a local image patch with its corresponding location. Second, for computational reasons, a feature compression method based on sparse representation and compressive sensing is proposed to reduce the dimensionality of the feature vector, which shows excellent performance. Third, to improve detection accuracy during the detection stage, a position estimation algorithm is proposed to calibrate the aircraft’s centroid. Experimental results show that our model achieves favorable detection accuracy, especially for partially-occluded objects. Furthermore, the detection speed is remarkably improved as well. Full article
(This article belongs to the Section Information Processes)
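A standard way to realize compressive-sensing-style feature compression (a sketch of the general idea, not the paper’s exact method) is to project the high-dimensional patch features onto a random measurement matrix; by the Johnson-Lindenstrauss lemma, norms and pairwise distances are approximately preserved with high probability:

```python
import numpy as np

def compress_features(X, k, seed=0):
    """Reduce (n, d) feature matrix X to (n, k) via a Gaussian random
    projection, the classic compressive-sensing measurement step.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # scaling by 1/sqrt(k) makes the projection approximately norm-preserving
    Phi = rng.normal(0.0, 1.0 / np.sqrt(k), size=(d, k))
    return X @ Phi
```

The projection is data-independent, so `Phi` can be generated once offline and applied to every candidate patch at detection time.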
Open AccessFeature PaperArticle Can Social Robots Qualify for Moral Consideration? Reframing the Question about Robot Rights
Information 2018, 9(4), 73; https://doi.org/10.3390/info9040073
Received: 25 February 2018 / Revised: 19 March 2018 / Accepted: 22 March 2018 / Published: 29 March 2018
PDF Full-text (256 KB) | HTML Full-text | XML Full-text
Abstract
A controversial question that has been hotly debated in the emerging field of robot ethics is whether robots should be granted rights. Yet, a review of the recent literature in that field suggests that this seemingly straightforward question is far from clear and unambiguous. For example, those who favor granting rights to robots have not always been clear as to which kinds of robots should (or should not) be eligible; nor have they been consistent with regard to which kinds of rights—civil, legal, moral, etc.—should be granted to qualifying robots. Also, there has been considerable disagreement about which essential criterion, or cluster of criteria, a robot would need to satisfy to be eligible for rights, and there is ongoing disagreement as to whether a robot must satisfy the conditions for (moral) agency to qualify either for rights or (at least some level of) moral consideration. One aim of this paper is to show how the current debate about whether to grant rights to robots would benefit from an analysis and clarification of some key concepts and assumptions underlying that question. My principal objective, however, is to show why we should reframe that question by asking instead whether some kinds of social robots qualify for moral consideration as moral patients. In arguing that the answer to this question is “yes,” I draw from some insights in the writings of Hans Jonas to defend my position. Full article
(This article belongs to the Special Issue ROBOETHICS)
Open AccessArticle Random Linear Network Coding for 5G Mobile Video Delivery
Information 2018, 9(4), 72; https://doi.org/10.3390/info9040072
Received: 14 February 2018 / Revised: 17 March 2018 / Accepted: 27 March 2018 / Published: 28 March 2018
Cited by 1 | PDF Full-text (1069 KB) | HTML Full-text | XML Full-text
Abstract
An exponential increase in mobile video delivery will continue with the demand for higher-resolution, multi-view and large-scale multicast video services. The novel fifth-generation (5G) 3GPP New Radio (NR) standard will bring a number of new opportunities for optimizing video delivery across both 5G core and radio access networks. One of the promising approaches for video quality adaptation, throughput enhancement and erasure protection is the use of packet-level random linear network coding (RLNC). In this review paper, we discuss the integration of RLNC into the 5G NR standard, building upon the ideas and opportunities identified in 4G LTE. We explicitly identify and discuss in detail novel 5G NR features that provide support for RLNC-based video delivery in 5G, thus pointing to promising avenues for future research. Full article
(This article belongs to the Special Issue Network and Rateless Coding for Video Streaming)
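To illustrate packet-level RLNC in its simplest form, here is a sketch over GF(2), where coding reduces to XOR; practical proposals typically work over larger fields such as GF(2^8), but the encode/decode structure is the same:

```python
import random

def rlnc_encode(packets, n_coded, seed=0):
    """RLNC over GF(2): each coded packet is the XOR of a random subset of
    the k source packets, sent together with its coefficient vector.
    """
    rng = random.Random(seed)
    k, L = len(packets), len(packets[0])
    coded = []
    while len(coded) < n_coded:
        coeffs = [rng.randint(0, 1) for _ in range(k)]
        if not any(coeffs):
            continue                            # skip the useless all-zero vector
        payload = bytes(L)
        for c, p in zip(coeffs, packets):
            if c:
                payload = bytes(a ^ b for a, b in zip(payload, p))
        coded.append((coeffs, payload))
    return coded

def rlnc_decode(coded, k):
    """Recover the k source packets by Gauss-Jordan elimination over GF(2)."""
    rows = [[list(c), bytearray(p)] for c, p in coded]
    for col in range(k):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            return None                         # not enough innovative packets yet
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r][0] = [a ^ b for a, b in zip(rows[r][0], rows[col][0])]
                rows[r][1] = bytearray(a ^ b for a, b in zip(rows[r][1], rows[col][1]))
    return [bytes(rows[i][1]) for i in range(k)]
```

The key property for video delivery: a receiver can recover the k source packets from any k linearly independent coded packets, regardless of which particular transmissions were lost.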