
Table of Contents

Information, Volume 9, Issue 10 (October 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms, with PDF as the official format. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Displaying articles 1-22
Open Access Article An Effective Feature Segmentation Algorithm for a Hyper-Spectral Facial Image
Information 2018, 9(10), 261; https://doi.org/10.3390/info9100261
Received: 22 September 2018 / Revised: 15 October 2018 / Accepted: 16 October 2018 / Published: 22 October 2018
Viewed by 286 | PDF Full-text (6804 KB) | HTML Full-text | XML Full-text
Abstract
The human face as a biometric trait has been widely used for personal identity verification, but this remains a challenging task under uncontrolled conditions. With the development of hyper-spectral imaging acquisition technology, spectral properties with sufficient discriminative information bring new opportunities for facial image processing. This paper presents a novel ensemble method for skin feature segmentation of a hyper-spectral facial image based on a k-means algorithm and a spanning forest algorithm, which exploits both spectral and spatial discriminative features. According to the closed skin area, local features are selected for further facial image analysis. We present experimental results of the proposed algorithm on various public face databases, where it achieves higher segmentation rates. Full article
(This article belongs to the Special Issue Machine Learning on Scientific Data and Information)
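The spectral clustering stage of such a pipeline can be illustrated with plain k-means over per-pixel spectral vectors. This is a generic sketch in plain Python with toy three-band "spectra", not the authors' implementation, which additionally uses a spanning forest step for spatial coherence:

```python
import random

def kmeans(pixels, k, iters=20, seed=0):
    """Plain k-means over spectral vectors (one vector per pixel)."""
    rng = random.Random(seed)
    centroids = rng.sample(pixels, k)
    for _ in range(iters):
        # Assignment step: attach each pixel to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in pixels:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        # Update step: recompute each centroid as the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = [sum(dim) / len(cl) for dim in zip(*cl)]
    return centroids, clusters

# Two well-separated groups of toy 3-band "spectra" (skin vs. background).
pixels = [[0.1, 0.1, 0.1], [0.12, 0.1, 0.09],
          [0.9, 0.8, 0.95], [0.88, 0.82, 0.9]]
centroids, clusters = kmeans(pixels, k=2)
```

In the paper's setting each pixel would carry tens of spectral bands rather than three, and the resulting cluster labels would then be refined spatially.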

Open Access Article Additively Consistent Interval-Valued Intuitionistic Fuzzy Preference Relations and Their Application to Group Decision Making
Information 2018, 9(10), 260; https://doi.org/10.3390/info9100260
Received: 18 September 2018 / Revised: 7 October 2018 / Accepted: 12 October 2018 / Published: 21 October 2018
Viewed by 260 | PDF Full-text (932 KB) | HTML Full-text | XML Full-text
Abstract
This paper aims to propose an innovative approach to group decision making (GDM) with interval-valued intuitionistic fuzzy (IVIF) preference relations (IVIFPRs). First, an IVIFPR is proposed based on the additive consistency of an interval-valued fuzzy preference relation (IVFPR). Then, two mathematical programming models are established to extract two special consistent IVFPRs. In order to derive the priority weights of an IVIFPR, after taking the two special IVFPRs into consideration, a linear optimization model is constructed by minimizing the deviations between individual judgments and between the width degrees of the interval priority weights. For GDM with IVIFPRs, the decision makers’ weights are generated by combining the adjusted subjective weights with the objective weights. Subsequently, using an IVIF-weighted averaging operator, the collective IVIFPR is obtained and utilized to derive the IVIF priority weights. Finally, a practical example of supplier selection is analyzed to demonstrate the application of the proposed method. Full article
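To make the notion of additive consistency concrete, here is a sketch for the crisp (non-interval) special case: a fuzzy preference relation R is additively consistent when r_ij = r_ik + r_kj - 0.5 holds for every triple, and a consistent R can be generated from priority weights. The interval-valued intuitionistic case treated in the paper extends this check to interval endpoints; the weights below are illustrative only:

```python
def is_additively_consistent(R, tol=1e-9):
    """Additive consistency of a (crisp) fuzzy preference relation:
    r_ij = r_ik + r_kj - 0.5 must hold for every triple (i, k, j)."""
    n = len(R)
    return all(
        abs(R[i][j] - (R[i][k] + R[k][j] - 0.5)) <= tol
        for i in range(n) for k in range(n) for j in range(n)
    )

# A consistent relation built from priority weights w via r_ij = 0.5 + 0.5*(w_i - w_j).
w = [0.5, 0.3, 0.2]
R = [[0.5 + 0.5 * (wi - wj) for wj in w] for wi in w]

# Breaking one entry (and its reciprocal) destroys consistency.
R_bad = [row[:] for row in R]
R_bad[0][1], R_bad[1][0] = 0.9, 0.1
```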

Open Access Article Where and How to Look for Help Matters: Analysis of Support Exchange in Online Health Communities for People Living with HIV
Information 2018, 9(10), 259; https://doi.org/10.3390/info9100259
Received: 28 September 2018 / Revised: 16 October 2018 / Accepted: 18 October 2018 / Published: 20 October 2018
Viewed by 285 | PDF Full-text (297 KB) | HTML Full-text | XML Full-text
Abstract
Research is scarce on how direct and indirect support seeking strategies affect support exchange in online health communities. Moreover, prior research has relied mostly on content analysis of forum posts at the post level. In order to generate a more fine-grained analysis of support exchange, we conducted content analysis at the utterance level, taking directness of support seeking, quality of provision, forum type, and seeker gender into account. Our analysis of four popular online support forums for people living with human immunodeficiency virus found that type of support sought and provided, support seeking strategy, and quality of emotional support provision differed in care provider/formal forums versus social/informal forums. Interestingly, indirect support seeking tended to elicit more supportive emotional responses than direct support seeking strategies in all forums; we account for this in terms of type of support sought. Practical implications for online support communities are discussed. Full article

Open Access Article Detecting Anomalous Trajectories Using the Dempster-Shafer Evidence Theory Considering Trajectory Features from Taxi GNSS Data
Information 2018, 9(10), 258; https://doi.org/10.3390/info9100258
Received: 4 September 2018 / Revised: 26 September 2018 / Accepted: 11 October 2018 / Published: 20 October 2018
Viewed by 293 | PDF Full-text (6665 KB) | HTML Full-text | XML Full-text
Abstract
In road networks, an ‘optimal’ trajectory is a geometrically optimal drive from the source point to the destination point. In reality, the driver’s experience and road traffic conditions lead to differences between the ‘actual’ trajectory and the ‘optimal’ trajectory. When these differences are excessive, the trajectories are considered anomalous. In addition, the differences can be observed in various trajectory features, such as velocity, distance, turns, and intersections. In this paper, our aim is to fuse these trajectory features and to quantitatively describe this difference so as to infer anomalous trajectories. The Dempster-Shafer (D-S) evidence theory is a method that treats different features as pieces of evidence for reasoning under uncertainty, and it requires neither prior knowledge nor conditional probabilities. Therefore, we propose an automatic anomalous trajectory inference method based on the D-S evidence theory that considers driving behavior and road network constraints. To achieve this objective, we first obtain all of the ‘actual’ trajectories of drivers for different source-destination pairs from taxi Global Navigation Satellite System (GNSS) trajectories. Second, we define and extract five trajectory features: route selection (RS), intersection rate (IR), heading change rate (HCR), slow point rate (SPR), and velocity change rate (VCR). Then, the different features of each trajectory are combined as evidence according to Dempster’s combination rule, and the precise probability interval of each trajectory is calculated based on the D-S evidence theory. Finally, we obtain the anomaly possibility of all real trajectories and infer as anomalous those trajectories whose features differ significantly from normal ones. The experimental results show that the proposed method can infer anomalous trajectories effectively and that it can be used to monitor driver behavior automatically and to discover adverse urban traffic events. Full article
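Dempster's combination rule itself is compact. The sketch below fuses two hypothetical mass functions, standing in for feature-level evidence such as HCR and SPR, over the frame {anomalous, normal}; the numbers are illustrative and not taken from the paper:

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions over subsets of a frame,
    with subsets encoded as frozensets."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to the empty set
    # Normalize by the non-conflicting mass 1 - K.
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

A, N = frozenset({"anomalous"}), frozenset({"normal"})
U = A | N  # total ignorance
m_hcr = {A: 0.6, N: 0.3, U: 0.1}   # hypothetical evidence from heading change rate
m_spr = {A: 0.5, N: 0.4, U: 0.1}   # hypothetical evidence from slow point rate
m = dempster_combine(m_hcr, m_spr)
```

Combining the two bodies of evidence sharpens the belief in "anomalous" while shrinking the mass left on total ignorance.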

Open Access Article Big Data Analysis to Observe Check-in Behavior Using Location-Based Social Media Data
Information 2018, 9(10), 257; https://doi.org/10.3390/info9100257
Received: 12 September 2018 / Revised: 10 October 2018 / Accepted: 11 October 2018 / Published: 20 October 2018
Viewed by 256 | PDF Full-text (14079 KB) | HTML Full-text | XML Full-text
Abstract
With the rapid advancement of location-based services (LBS), their adoption has become a powerful tool to link people with similar interests across long distances, as well as to connect family and friends. To observe human behavior in social media use, it is essential to understand and measure check-in behavior on a location-based social network (LBSN). This check-in phenomenon, in which users share location, activities, and time, motivated this research on the frequency of LBSN use. In this paper, we investigate the check-in behavior of several million individuals, for whom we observe gender and frequency of use of the Chinese microblog Sina Weibo (referred to as “Weibo”) over a period of time in Shanghai, China. To produce a smooth density surface of check-ins, we analyze the overall spatial patterns using kernel density estimation (KDE) in ArcGIS. Furthermore, our results reveal that female users are more inclined to use social media, and a difference in check-in behavior between weekdays and weekends is also observed. The results suggest that LBSN data can complement traditional methods (e.g., surveys, censuses) in studying gender-based check-in behavior. Full article
(This article belongs to the Special Issue Information Management in Information Age)
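Kernel density estimation, which the authors run in ArcGIS, reduces to summing a kernel centered at each observation. Below is a one-dimensional Gaussian sketch over hypothetical check-in times; the two-dimensional spatial case used for the check-in maps is analogous:

```python
import math

def gaussian_kde(points, bandwidth):
    """1-D Gaussian kernel density estimate (the 2-D spatial case is analogous)."""
    n = len(points)
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - p) / bandwidth) ** 2) for p in points)
    return density

# Hypothetical check-in hours: a morning commute cluster and an evening cluster.
checkin_times = [8, 9, 9.5, 12, 18, 18.5, 19, 19.2]
f = gaussian_kde(checkin_times, bandwidth=1.0)
```

The estimate is highest where observations cluster (around 19:00 here) and integrates to one over the whole axis, which is what produces the "smooth density surface" of check-ins.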

Open Access Article Two New Philosophical Problems for Robo-Ethics
Information 2018, 9(10), 256; https://doi.org/10.3390/info9100256
Received: 27 August 2018 / Revised: 23 September 2018 / Accepted: 16 October 2018 / Published: 18 October 2018
Viewed by 296 | PDF Full-text (223 KB) | HTML Full-text | XML Full-text
Abstract
The purpose of this paper is to describe two new philosophical problems for robo-ethics. When one considers the kinds of philosophical problems that arise in the emerging field of robo-ethics, one typically thinks of issues concerning agency, autonomy, rights, consciousness, warfare/military applications, employment and work, the impact on elder-care, and many others. All of these philosophical problems are well known. However, this paper describes two new philosophical problems for robo-ethics that have not been previously addressed in the literature. The author’s view is that if these philosophical problems are not solved, some aspects of robo-ethics research and development will be challenged. Full article
(This article belongs to the Special Issue ROBOETHICS)
Open Access Article Sensor Alignment for Ballistic Trajectory Estimation via Sparse Regularization
Information 2018, 9(10), 255; https://doi.org/10.3390/info9100255
Received: 7 September 2018 / Revised: 11 October 2018 / Accepted: 16 October 2018 / Published: 18 October 2018
Viewed by 210 | PDF Full-text (4161 KB) | HTML Full-text | XML Full-text
Abstract
Sensor alignment plays a key role in the accurate estimation of a ballistic trajectory. A sparse regularization-based sensor alignment method, coupled with the selection of a regularization parameter, is proposed in this paper. The sparse regularization model is established by combining the traditional model of trajectory estimation with a sparsity constraint on the systematic errors. The trajectory and the systematic errors are estimated simultaneously using the Newton algorithm. The regularization parameter in the model is crucial to the accuracy of trajectory estimation. Stein’s unbiased risk estimator (SURE) and generalized cross-validation (GCV) under the nonlinear measurement model are constructed for determining a suitable regularization parameter, and the computation of SURE and GCV is also investigated. Simulation results show that both SURE and GCV can provide high-quality regularization parameter choices for minimizing the errors of trajectory estimation, and that our method improves the accuracy of trajectory estimation over the traditional non-regularized method. The estimates of the systematic errors are close to the true values. Full article
(This article belongs to the Section Information Processes)
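As a rough illustration of how GCV selects a regularization parameter, consider a diagonal linear model y_i = d_i x_i + noise, a heavily simplified analogue of the paper's nonlinear measurement model. The GCV score is the residual sum of squares inflated by the effective degrees of freedom, and the chosen lambda minimizes it over a grid; all numbers below are invented:

```python
def gcv_score(lam, d, y):
    """Generalized cross-validation score for ridge-type regularization on a
    diagonal model y_i = d_i * x_i + noise; the hat-matrix diagonal is
    d_i^2 / (d_i^2 + lam), so residuals and trace have closed forms."""
    n = len(y)
    rss = sum((yi * lam / (di * di + lam)) ** 2 for di, yi in zip(d, y))
    trace_a = sum(di * di / (di * di + lam) for di in d)
    return (rss / n) / (1.0 - trace_a / n) ** 2

d = [3.0, 2.0, 1.0, 0.05, 0.03]          # small d_i: ill-conditioned directions
x_true = [1.0, -2.0, 0.5, 0.0, 0.0]
noise = [0.1, -0.1, 0.1, 0.3, -0.25]     # noise dominates the weak directions
y = [di * xi + e for di, xi, e in zip(d, x_true, noise)]

grid = [10 ** (k / 4 - 3) for k in range(25)]   # lambda from 1e-3 to 1e3
best_lam = min(grid, key=lambda lam: gcv_score(lam, d, y))
```

Too little regularization lets the noisy weak directions blow up the estimate, while too much over-smooths the well-determined ones; GCV balances the two without requiring the noise level.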

Open Access Article Improvement of Fast Simplified Successive-Cancellation Decoder for Polar Codes
Information 2018, 9(10), 254; https://doi.org/10.3390/info9100254
Received: 1 September 2018 / Revised: 15 October 2018 / Accepted: 16 October 2018 / Published: 18 October 2018
Viewed by 292 | PDF Full-text (727 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a new latency reduction method for successive-cancellation (SC) decoding of polar codes that performs a frozen-bit check on the rate-other (R-other) nodes of the Fast Simplified SC (Fast-SSC) pruning tree. The proposed method integrates the Fast-SSC algorithm with the Improved SSC method (frozen-bit checking of the R-other nodes). We apply a recognition-based method to search offline for as many constituent codes as possible in the decoding tree. During decoding, the current node is decoded directly if it is a special constituent code; otherwise, the frozen-bit check is executed. If the frozen-bit check condition is satisfied, the R-other node is processed in the same way as a rate-one node. In this paper, we prove that the frame error rate (FER) performance of the proposed algorithm is consistent with that of the original SC algorithm. Simulation results show that the proportion of R-other nodes satisfying the frozen-bit check condition increases with the signal-to-noise ratio (SNR). Importantly, our proposed method yields a significant reduction in latency compared to existing latency reduction methods, and it solves the Improved-SSC method’s problem of high latency at high code rates and low SNR. Full article
(This article belongs to the Section Information and Communications Technology)
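The node typing that Fast-SSC-style decoders perform offline can be sketched as a check on the frozen-bit pattern of a subtree's leaves; anything that is not a recognized special constituent code falls into the R-other class, to which the proposed frozen-bit check applies. This is a simplified illustration of the idea, not the paper's recognition method:

```python
def classify_node(frozen_mask):
    """Label a subtree of the SC decoding tree from its frozen-bit pattern.
    frozen_mask[i] is True when leaf i of the subtree is a frozen bit.
    Simplified: real Fast-SSC recognizes further node types as well."""
    if all(frozen_mask):
        return "rate-0"          # all frozen: decoded for free as all-zero
    if not any(frozen_mask):
        return "rate-1"          # all information: hard decision on the LLRs
    if all(frozen_mask[:-1]) and not frozen_mask[-1]:
        return "repetition"      # single information bit at the end
    if frozen_mask[0] and not any(frozen_mask[1:]):
        return "SPC"             # single parity-check code
    return "rate-other"          # needs recursion, or the frozen-bit check
```

R-other nodes are exactly the patterns that fall through every special case, which is why reclassifying those that pass the frozen-bit check as rate-one nodes saves decoding latency.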

Open Access Article Group Buying-Based Data Transmission in Flying Ad-Hoc Networks: A Coalition Game Approach
Information 2018, 9(10), 253; https://doi.org/10.3390/info9100253
Received: 13 September 2018 / Revised: 4 October 2018 / Accepted: 11 October 2018 / Published: 15 October 2018
Viewed by 296 | PDF Full-text (889 KB) | HTML Full-text | XML Full-text
Abstract
In scenarios such as natural disasters and military strikes, it is common for unmanned aerial vehicles (UAVs) to form groups to execute reconnaissance and surveillance. To ensure the effectiveness of UAV communications, repeated resource acquisition issues and transmission mechanism designs urgently need to be addressed. Since large-scale UAV deployments generate high transmission overhead due to overlapping resource requirements, in this paper we propose a resource allocation optimization method based on distributed data content in a flying ad-hoc network (FANET). The resource allocation problem, with the goal of throughput maximization, is formulated within a coalition game framework. Firstly, a data transmission mechanism is designed for UAVs to exchange information within the coalitions. Secondly, a novel group-buying-based coalition selection mechanism is investigated for UAV coalitions to acquire data from the central UAV. The data transmission and coalition selection problems are modeled as a coalition graph game and a coalition formation game, respectively. Through the design of the utility function, we prove that both games have stable solutions, and we prove the convergence of the proposed approach under both coalition order and Pareto order. A coalition-order-based coalition selection algorithm (CO-CSA) and a Pareto-order-based coalition selection algorithm (PO-CSA) are proposed to explore the stable coalition partition of the system model. Simulation results show that CO-CSA and PO-CSA achieve higher data throughput than the baseline one-time coalition selection algorithm (Onetime-CSA), with increases of at least 34.5% and 16.9%, respectively. Moreover, although PO-CSA has a relatively lower throughput gain, it needs on average 50.9% fewer iterations to converge than CO-CSA, which means that the choice of algorithm is scenario-dependent. Full article
(This article belongs to the Section Information and Communications Technology)

Open Access Article Integration of Context Information through Probabilistic Ontological Knowledge into Image Classification
Information 2018, 9(10), 252; https://doi.org/10.3390/info9100252
Received: 26 August 2018 / Revised: 21 September 2018 / Accepted: 9 October 2018 / Published: 12 October 2018
Viewed by 284 | PDF Full-text (3467 KB) | HTML Full-text | XML Full-text
Abstract
The use of ontological knowledge to improve classification results is a promising line of research. The availability of a probabilistic ontology raises the possibility of combining the probabilities coming from the ontology with those produced by a multi-class classifier that detects particular objects in an image. This combination not only provides the relations existing between the different segments, but can also improve the classification accuracy: it is known that contextual information can often suggest the correct class. This paper proposes a model that implements this integration, and the experimental assessment shows the effectiveness of the integration, especially when the classifier’s accuracy is relatively low. To assess the performance of the proposed model, we designed and implemented a simulated classifier whose performance can be set a priori with sufficient precision. Full article
(This article belongs to the Special Issue Advanced Learning Methods for Complex Data)
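One simple way to realize such an integration, not necessarily the model proposed in the paper, is to multiply the classifier's per-class scores by context probabilities drawn from the ontology and renormalize. In the classic beach example below (all numbers invented), context flips a weak classifier's decision:

```python
def fuse(classifier_probs, ontology_probs):
    """Combine per-class classifier scores with context priors from a
    probabilistic ontology by product-and-renormalize (a hypothetical
    fusion rule; the paper's exact model may differ)."""
    fused = {c: classifier_probs[c] * ontology_probs.get(c, 0.0)
             for c in classifier_probs}
    z = sum(fused.values())
    return {c: v / z for c, v in fused.items()}

# A weak classifier marginally prefers "cow" for a segment, but context from
# the neighbouring segments (sand, sea) makes "camel" far more probable a priori.
classifier_probs = {"cow": 0.55, "camel": 0.45}
ontology_probs = {"cow": 0.2, "camel": 0.8}
fused = fuse(classifier_probs, ontology_probs)
```

This matches the paper's observation that the contribution of context is largest precisely when the classifier alone is unreliable.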

Open Access Article What Smart Campuses Can Teach Us about Smart Cities: User Experiences and Open Data
Information 2018, 9(10), 251; https://doi.org/10.3390/info9100251
Received: 2 September 2018 / Revised: 4 October 2018 / Accepted: 6 October 2018 / Published: 12 October 2018
Viewed by 663 | PDF Full-text (1747 KB) | HTML Full-text | XML Full-text
Abstract
Universities, like cities, have embraced novel technologies and data-based solutions to improve their campuses with ‘smart’ becoming a welcomed concept. Campuses in many ways are small-scale cities. They increasingly seek to address similar challenges and to deliver improved experiences to their users. How can data be used in making this vision a reality? What can we learn from smart campuses that can be scaled up to smart cities? A short research study was conducted over a three-month period at a public university in the United Kingdom, employing stakeholder interviews and user surveys, which aimed to gain insight into these questions. Based on the study, the authors suggest that making data publicly available could bring many benefits to different groups of stakeholders and campus users. These benefits come with risks and challenges, such as data privacy and protection and infrastructure hurdles. However, if these challenges can be overcome, then open data could contribute significantly to improving campuses and user experiences, and potentially set an example for smart cities. Full article

Open Access Article Robotic Choreography Inspired by the Method of Human Dance Creation
Information 2018, 9(10), 250; https://doi.org/10.3390/info9100250
Received: 1 September 2018 / Revised: 16 September 2018 / Accepted: 9 October 2018 / Published: 10 October 2018
Cited by 1 | Viewed by 292 | PDF Full-text (5073 KB) | HTML Full-text | XML Full-text
Abstract
In general, human dance is created by the imagination and innovativeness of human dancers, which in turn provides an inspiration for robotic choreography generation. This paper proposes a novel mechanism for a humanoid robot to create good choreography autonomously with the imagination of human dance. Such a mechanism combines innovativeness with the characteristic preservation of human dance, and enables a humanoid robot to present the characteristics of “imitation, memory, imagination, process and combination”. The proposed mechanism has been implemented on a real humanoid robot, NAO, to verify its feasibility and performance. Experimental results are presented to demonstrate good performance of the proposed mechanism. Full article
(This article belongs to the Section Artificial Intelligence)

Open Access Article A Theory of Physically Embodied and Causally Effective Agency
Information 2018, 9(10), 249; https://doi.org/10.3390/info9100249
Received: 31 July 2018 / Revised: 18 September 2018 / Accepted: 28 September 2018 / Published: 6 October 2018
Viewed by 321 | PDF Full-text (1189 KB) | HTML Full-text | XML Full-text
Abstract
Causality is fundamental to agency. Intelligent agents learn about causal relationships by interacting with their environments and use their causal knowledge to choose actions intended to bring about desired outcomes. This paper considers a causal question that is central to the very meaning of agency, that of how a physically embodied agent effects intentional action in the world. The prevailing assumption is that both biological and computer agents are automatons whose decisions are determined by the physical processes operating in their information processing apparatus. As an alternative hypothesis, this paper presents a mathematical model of causally efficacious agency. The model is based on Stapp’s theory of efficacious choice in physically embodied agents. Stapp’s theory builds on a realistic interpretation of von Neumann’s mathematical formalization of quantum theory. Because it is consistent with the well-established precepts of quantum theory, Stapp’s theory has been dismissed as metaphysical and unfalsifiable. However, if taken seriously as a model of efficacious choice in biological agents, the theory does have empirically testable implications. This paper formulates Stapp’s theory as an interventionist causal theory in which interventions are ascribed to agents and can have macroscopically distinguishable effects in the world. Empirically testable implications of the theory are discussed and a path toward scientific evaluation is proposed. Implications for artificial intelligence are considered. Full article
(This article belongs to the Special Issue Probabilistic Causal Modelling in Intelligent Systems)

Open Access Article Analysis of the Risk Management Process on the Development of the Public Sector Information Technology Master Plan
Information 2018, 9(10), 248; https://doi.org/10.3390/info9100248
Received: 11 September 2018 / Revised: 26 September 2018 / Accepted: 1 October 2018 / Published: 4 October 2018
Viewed by 304 | PDF Full-text (311 KB) | HTML Full-text | XML Full-text
Abstract
The Information and Communication Technology Master Plan (ICTMP) is an important tool for the achievement of the strategic business objectives of public and private organizations. In the public sector, these objectives are closely related to the provision of benefits to society. Information and Communication Technology (ICT) actions are present in all organizational processes and involve sizeable budgets. The risks inherent in the planning of ICT actions need to be considered for ICT to add value to the business and to maximize the return on investment to the population. In this context, this work examines the use of risk management processes in the development of ICTMPs in the Brazilian public sector. Full article

Open Access Article Dynamic Handwriting Analysis for Supporting Earlier Parkinson’s Disease Diagnosis
Information 2018, 9(10), 247; https://doi.org/10.3390/info9100247
Received: 15 September 2018 / Revised: 25 September 2018 / Accepted: 28 September 2018 / Published: 3 October 2018
Viewed by 427 | PDF Full-text (283 KB) | HTML Full-text | XML Full-text
Abstract
Machine learning techniques are tailored to build intelligent systems that support clinicians at the point of care. In particular, they can complement standard clinical evaluations in the assessment of early signs and manifestations of Parkinson’s disease (PD). Patients suffering from PD typically exhibit impairments of previously learned motor skills, such as handwriting; handwriting can therefore be considered a powerful marker for developing automated diagnostic tools. In this paper, we investigated whether and to what extent dynamic features of the handwriting process can support PD diagnosis at earlier stages. To this end, a subset of the publicly available PaHaW dataset was used, including only those patients showing an early to mild degree of disease severity. We developed a classification framework based on different classifiers and an ensemble scheme. Some encouraging results have been obtained; in particular, good specificity performance has been observed, indicating that a handwriting-based decision support tool could be used to administer screening tests useful for ruling in the disease. Full article
(This article belongs to the Special Issue eHealth and Artificial Intelligence)
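An ensemble scheme of the kind described, together with the specificity metric the authors emphasize, can be sketched as follows; the classifiers and labels are invented for illustration (0 = healthy control, 1 = PD):

```python
def majority_vote(predictions):
    """Ensemble by majority vote over per-classifier label lists;
    2 * votes >= len(predictions) breaks a hypothetical tie toward 1."""
    n = len(predictions[0])
    out = []
    for i in range(n):
        votes = sum(p[i] for p in predictions)
        out.append(1 if 2 * votes >= len(predictions) else 0)
    return out

def specificity(y_true, y_pred):
    """True-negative rate: the share of healthy subjects correctly ruled out."""
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    return tn / negatives

y_true = [0, 0, 0, 0, 1, 1, 1, 1]
clf_a  = [0, 0, 1, 0, 1, 0, 1, 1]   # three imperfect base classifiers
clf_b  = [0, 1, 0, 0, 1, 1, 0, 1]
clf_c  = [0, 0, 0, 0, 0, 1, 1, 1]
ensemble = majority_vote([clf_a, clf_b, clf_c])
```

On this toy data the vote cancels the individual classifiers' false positives, which is exactly the high-specificity behavior that makes such a tool suitable for screening.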
Open Access Article A Hybrid PAPR Reduction Method Based on SLM and Multi-Data Block PTS for FBMC/OQAM Systems
Information 2018, 9(10), 246; https://doi.org/10.3390/info9100246
Received: 12 September 2018 / Revised: 26 September 2018 / Accepted: 29 September 2018 / Published: 1 October 2018
Viewed by 292 | PDF Full-text (6029 KB) | HTML Full-text | XML Full-text
Abstract
The filter bank multicarrier scheme employing offset quadrature amplitude modulation (FBMC/OQAM) is a candidate transmission scheme for 5G wireless communication systems. However, it has a high peak-to-average power ratio (PAPR), and due to the overlapped structure of FBMC/OQAM signals, conventional PAPR reduction schemes cannot work effectively. A hybrid PAPR reduction scheme based on the selective mapping (SLM) and multi-data-block partial transmit sequence (M-PTS) methods is proposed for FBMC/OQAM signals in this paper. Unlike the simple SLM-PTS method, the proposed hybrid algorithm takes into account the overlapping effect of multiple adjacent data blocks in its PTS process. Simulation results show that the proposed method offers a significant PAPR reduction performance improvement over the SLM, PTS, and SLM-PTS methods and can effectively reduce the PAPR in FBMC/OQAM systems. Full article
(This article belongs to the Section Information and Communications Technology)
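The PAPR being reduced, and the core SLM idea of choosing the best of several phase-rotated candidates, can be sketched as follows for a plain multicarrier block; real FBMC/OQAM adds the overlapping prototype-filter structure that motivates the paper's M-PTS stage:

```python
import cmath, math, random

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband block, in dB."""
    powers = [abs(s) ** 2 for s in x]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

def idft(symbols):
    """Naive inverse DFT: frequency-domain symbols -> time-domain block."""
    n = len(symbols)
    return [sum(s * cmath.exp(2j * cmath.pi * k * t / n)
                for k, s in enumerate(symbols)) / n
            for t in range(n)]

rng = random.Random(1)
qpsk = [complex(rng.choice([-1, 1]), rng.choice([-1, 1])) for _ in range(64)]
x = idft(qpsk)

# SLM in miniature: generate a few phase-rotated candidates of the same data
# and transmit whichever block has the lowest PAPR. Keeping the identity mask
# as a candidate means SLM can never do worse than the original block.
candidates = [x] + [
    idft([s * cmath.exp(2j * cmath.pi * rng.random()) for s in qpsk])
    for _ in range(7)
]
best = min(candidates, key=papr_db)
```

In a real system the index of the chosen phase mask would be signaled to the receiver so the rotation can be undone.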

Open Access Article Performance Analysis of Honeypot with Petri Nets
Information 2018, 9(10), 245; https://doi.org/10.3390/info9100245
Received: 11 September 2018 / Revised: 21 September 2018 / Accepted: 26 September 2018 / Published: 30 September 2018
Viewed by 364 | PDF Full-text (647 KB) | HTML Full-text | XML Full-text
Abstract
As one of the active defense technologies, the honeypot lures potential intruders into interacting with imitated systems or networks deployed with security mechanisms, yet its modeling and performance analysis have not been well studied. In this paper, we propose a honeypot performance evaluation scheme based on Stochastic Petri Nets (SPN). We first set up SPN-based performance evaluation models for three types of defense scenarios: (i) firewall only; (ii) firewall and Intrusion Detection System (IDS); and (iii) firewall, IDS, and honeypot. We then theoretically analyze the SPN models by constructing Markov Chains (MC) that are isomorphic to the models. With the steady-state probabilities derived from the MC, the system performance evaluation is carried out by theoretical inference. Finally, we implement the three proposed SPN models on the PIPE platform, and five parameters are applied to compare and evaluate their performance. The analysis of the probability and delay in the three scenarios shows that the simulation results validate the effectiveness of the honeypot in enhancing security under the SPN models. Full article
(This article belongs to the Section Information Theory and Methodology)
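The core numerical step in an SPN analysis of this kind, solving the isomorphic Markov chain for its steady-state probabilities, can be sketched as follows. The three-state generator matrix below is a made-up stand-in for the paper's models, not taken from it.

```python
import numpy as np

def steady_state(Q):
    """Steady-state distribution pi of a CTMC: pi @ Q = 0, sum(pi) = 1."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])  # balance equations + normalization
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Hypothetical 3-state chain: attacker outside, probing the firewall,
# trapped in the honeypot (rates are illustrative, rows sum to zero).
Q = np.array([[-0.5,  0.5,  0.0],
              [ 0.2, -0.6,  0.4],
              [ 0.1,  0.0, -0.1]])
pi = steady_state(Q)
```

Performance measures such as throughput and delay then follow from `pi` by weighting transition rates with the steady-state probabilities.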
Open AccessFeature PaperArticle Countering Superintelligence Misinformation
Information 2018, 9(10), 244; https://doi.org/10.3390/info9100244
Received: 9 September 2018 / Revised: 25 September 2018 / Accepted: 26 September 2018 / Published: 30 September 2018
Viewed by 545 | PDF Full-text (254 KB) | HTML Full-text | XML Full-text
Abstract
Superintelligence is a potential type of future artificial intelligence (AI) that is significantly more intelligent than humans in all major respects. If built, superintelligence could be a transformative event, with potential consequences that are massively beneficial or catastrophic. Meanwhile, the prospect of superintelligence
[...] Read more.
Superintelligence is a potential type of future artificial intelligence (AI) that is significantly more intelligent than humans in all major respects. If built, superintelligence could be a transformative event, with potential consequences that are massively beneficial or catastrophic. Meanwhile, the prospect of superintelligence is the subject of major ongoing debate, which includes a significant amount of misinformation. Superintelligence misinformation is potentially dangerous, ultimately leading to bad decisions by the would-be developers of superintelligence and those who influence them. This paper surveys strategies to counter superintelligence misinformation. Two types of strategies are examined: strategies to prevent the spread of superintelligence misinformation and strategies to correct it after it has spread. In general, misinformation can be difficult to correct, suggesting a high value for strategies that prevent it. This paper is the first extended study of superintelligence misinformation. It draws heavily on the study of misinformation in psychology, political science, and related fields, especially misinformation about global warming. The strategies proposed can be applied to lay public attention to superintelligence, AI education programs, and efforts to build expert consensus. Full article
(This article belongs to the Special Issue AI AND THE SINGULARITY: A FALLACY OR A GREAT OPPORTUNITY?)
Open AccessArticle PIF and ReCiF: Efficient Interest-Packet Forwarding Mechanisms for Named-Data Wireless Mesh Networks
Information 2018, 9(10), 243; https://doi.org/10.3390/info9100243
Received: 31 August 2018 / Revised: 20 September 2018 / Accepted: 26 September 2018 / Published: 29 September 2018
Cited by 1 | Viewed by 280 | PDF Full-text (803 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we propose three mechanisms to reduce the broadcast storm problem in wireless mesh networks based on the Named-Data Network (NDN) architecture. The goal of our mechanisms is to reduce the number of content requests forwarded by nodes and consequently, increase
[...] Read more.
In this paper, we propose three mechanisms to mitigate the broadcast storm problem in wireless mesh networks based on the Named-Data Network (NDN) architecture. The goal of our mechanisms is to reduce the number of content requests forwarded by nodes and, consequently, to increase network efficiency. The first proposed mechanism, called Probabilistic Interest Forwarding (PIF), randomly forwards content requests. The second mechanism, called Retransmission-Counter-based Forwarding (ReCIF), decides whether to forward content requests based on the number of retransmissions, by adding a counter to the header of requests. The third mechanism, called ReCIF+PIF, combines the features of PIF and ReCIF to suppress content requests. We compare the performance of our mechanisms with both the NDN default forwarding mechanism and the Listen First Broadcast Later (LFBL) mechanism. Our proposals outperform the default NDN forwarding mechanism by up to 21% in data delivery rate in dense networks and provide a 25% lower delivery delay than the default NDN. Our mechanisms accomplish this performance solely by reducing the number of content requests forwarded by nodes. One of our mechanisms, PIF, outperforms LFBL in data delivery rate and delivery delay by up to 263% and 55%, respectively, for high network contention levels. Full article
(This article belongs to the Special Issue Information-Centric Networking)
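A minimal sketch of a combined ReCIF+PIF decision, assuming a hypothetical Interest structure that carries a retransmission counter in its header: suppress heavily retransmitted requests, and forward the rest only with a fixed probability. The field names, limit, and probability are illustrative, not the paper's parameters.

```python
import random
from dataclasses import dataclass

@dataclass
class Interest:
    name: str
    retx_count: int = 0  # ReCIF-style counter carried in the request header

def should_forward(interest, p_forward=0.7, retx_limit=3, rng=random):
    """ReCIF+PIF-style decision at a relay node.

    Drop Interests that have already been retransmitted too often (ReCIF),
    then forward the survivors only with probability p_forward (PIF).
    """
    if interest.retx_count >= retx_limit:  # ReCIF suppression
        return False
    return rng.random() < p_forward        # PIF probabilistic forwarding
```

A node that decides to forward would also increment `retx_count` before rebroadcasting, so downstream nodes see the updated counter.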
Open AccessArticle Multi-User Searchable Symmetric Encryption with Dynamic Updates for Cloud Computing
Information 2018, 9(10), 242; https://doi.org/10.3390/info9100242
Received: 2 September 2018 / Revised: 25 September 2018 / Accepted: 26 September 2018 / Published: 28 September 2018
Viewed by 385 | PDF Full-text (2141 KB) | HTML Full-text | XML Full-text
Abstract
With the advent of cloud computing, more and more users begin to outsource encrypted files to cloud servers to provide convenient access and obtain security guarantees. Searchable encryption (SE) allows a user to search the encrypted files without leaking information related to the
[...] Read more.
With the advent of cloud computing, more and more users outsource encrypted files to cloud servers to obtain convenient access and security guarantees. Searchable encryption (SE) allows a user to search encrypted files without leaking information related to their contents. Searchable symmetric encryption (SSE) is an important branch of SE. Most existing SSE schemes consider single-user settings, which cannot meet the requirements of data sharing. In this work, we propose a multi-user searchable symmetric encryption scheme with dynamic updates. The scheme targets the usage scenario in which one data owner encrypts sensitive files and shares them among multiple users, and it allows secure and efficient searches and updates. We use key distribution and re-encryption to achieve multi-user access while avoiding the series of issues caused by key sharing. Our scheme is built on an index structure that combines a bit matrix with two static hash tables, pseudorandom functions and hash functions, and it is proven secure in the random oracle model. Full article
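As a rough sketch of the kind of index described, the following toy builds a keyword-by-file bit matrix whose rows are keyed by a PRF (instantiated here with HMAC-SHA256), so the server can match a trapdoor without learning the keyword itself. It omits encryption of the matrix, the two static hash tables, re-encryption, and dynamic updates; all names are hypothetical.

```python
import hmac
import hashlib

def prf(key, data):
    """Pseudorandom function, instantiated with HMAC-SHA256 (an illustrative choice)."""
    return hmac.new(key, data, hashlib.sha256).digest()

class BitMatrixIndex:
    """Toy searchable index: one bit row per PRF-keyed keyword, one column per file."""

    def __init__(self, key, n_files):
        self.key = key
        self.n_files = n_files
        self.rows = {}  # PRF token -> bit list over file ids

    def add(self, keyword, file_id):
        token = prf(self.key, keyword.encode())
        row = self.rows.setdefault(token, [0] * self.n_files)
        row[file_id] = 1

    def search(self, trapdoor):
        """The server sees only the trapdoor token, never the keyword."""
        row = self.rows.get(trapdoor, [0] * self.n_files)
        return [i for i, bit in enumerate(row) if bit]
```

A querying user holding the key would compute `prf(key, b"keyword")` locally and send only that token to the server.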
Open AccessArticle Correlation Tracking via Self-Adaptive Fusion of Multiple Features
Information 2018, 9(10), 241; https://doi.org/10.3390/info9100241
Received: 13 August 2018 / Revised: 12 September 2018 / Accepted: 21 September 2018 / Published: 27 September 2018
Viewed by 310 | PDF Full-text (5535 KB) | HTML Full-text | XML Full-text
Abstract
Correlation filter (CF) based tracking algorithms have shown excellent performance compared with most state-of-the-art algorithms on the object tracking benchmark (OTB). Nonetheless, most CF-based tracking algorithms consider only a limited single-channel feature, and the tracking model is updated frame by frame. This
[...] Read more.
Correlation filter (CF) based tracking algorithms have shown excellent performance compared with most state-of-the-art algorithms on the object tracking benchmark (OTB). Nonetheless, most CF-based tracking algorithms consider only a limited single-channel feature, and the tracking model is updated frame by frame. This generates erroneous information when the target object undergoes complicated scenario changes, such as background clutter, occlusion and out-of-view, and the long-term accumulation of erroneous model updates causes tracking drift. To address these problems, in this paper we propose a robust multi-scale correlation filter tracking algorithm based on the self-adaptive fusion of multiple features. First, we fuse powerful multiple features, including the histogram of oriented gradients (HOG), color names (CN) and the histogram of local intensities (HI), in the response layer. The weights are assigned according to the proportion of response scores generated by each feature, which achieves a self-adaptive fusion of multiple features for a preferable feature representation. Meanwhile, an efficient model update strategy is proposed, which uses a pre-defined response threshold as the discriminative condition for updating the tracking model. In addition, we introduce an accurate multi-scale estimation method integrated with the model update strategy, which further improves the adaptability to scale variation. Both qualitative and quantitative evaluations on challenging video sequences demonstrate that the proposed tracker performs favorably against state-of-the-art CF-based methods. Full article
(This article belongs to the Section Information Processes)
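One plausible reading of the response-layer fusion is sketched below: weight each feature's response map by the share of its peak score, sum the maps, and only update the model when the fused peak clears a pre-defined threshold. The weighting rule and the threshold value are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fuse_responses(responses):
    """Self-adaptive fusion: weight each feature's response map by the
    proportion of its peak score, then sum the weighted maps."""
    peaks = np.array([r.max() for r in responses])
    weights = peaks / peaks.sum()
    return sum(w * r for w, r in zip(weights, responses))

def should_update(response, threshold=0.25):
    """High-confidence update rule: refresh the tracking model only when
    the fused response peak clears a pre-defined threshold."""
    return response.max() >= threshold
```

The target position is then taken at the argmax of the fused map, and frames whose peak falls below the threshold leave the model untouched, limiting drift.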
Open AccessArticle Development of an ANFIS Model for the Optimization of a Queuing System in Warehouses
Information 2018, 9(10), 240; https://doi.org/10.3390/info9100240
Received: 3 September 2018 / Revised: 19 September 2018 / Accepted: 21 September 2018 / Published: 22 September 2018
Cited by 1 | Viewed by 534 | PDF Full-text (4064 KB) | HTML Full-text | XML Full-text
Abstract
Queuing systems (QS) are part of everyday life in all business and economic systems. On the one hand, there is a tendency toward their time and cost optimization, but on the other hand, they have not been sufficiently explored. This especially applies to logistics
[...] Read more.
Queuing systems (QS) are part of everyday life in all business and economic systems. On the one hand, there is a tendency toward their time and cost optimization, but on the other hand, they have not been sufficiently explored. This especially applies to logistics systems, where a large number of transportation and storage units appear. Therefore, the aim of this paper is to develop an ANFIS (adaptive neuro-fuzzy inference system) model for defining the QS optimization parameters of a warehouse system with two servers. The research was conducted in a company manufacturing brown paper, located in Bosnia and Herzegovina, which accounts for a significant share of the country's total export production. The optimization criterion in this paper is the time spent in the system, which is important both for all customers of the system and for the owner of the company. The time criterion directly affects the efficiency of the system, as well as the overall costs the system incurs. The developed ANFIS model was compared with a mathematical model through a sensitivity analysis. The mathematical model showed outstanding results, which justifies its development and application. Full article
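For a two-server queue, a standard analytical baseline against which a data-driven model like ANFIS could be checked is the M/M/c mean time in system, obtained from the Erlang C formula and Little's law. The sketch below is that textbook computation, not the paper's model or its warehouse data.

```python
from math import factorial

def mmc_time_in_system(lam, mu, c):
    """Mean time a customer spends in a stable M/M/c queue.

    lam: arrival rate, mu: per-server service rate, c: number of servers.
    Uses the Erlang C queue-length formula, then Little's law.
    """
    rho = lam / (c * mu)
    assert rho < 1, "queue must be stable (lam < c * mu)"
    a = lam / mu
    # Probability of an empty system
    p0 = 1 / (sum(a**k / factorial(k) for k in range(c))
              + a**c / (factorial(c) * (1 - rho)))
    # Mean number waiting in the queue
    lq = p0 * a**c * rho / (factorial(c) * (1 - rho) ** 2)
    return lq / lam + 1 / mu  # waiting time + service time
```

With c = 1 this collapses to the familiar M/M/1 result W = 1 / (mu - lam), which makes the function easy to sanity-check.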