
Table of Contents

Information, Volume 9, Issue 6 (June 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • PDF is the official version of record for papers published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Open Access Review Roboethics: Fundamental Concepts and Future Prospects
Information 2018, 9(6), 148; https://doi.org/10.3390/info9060148
Received: 31 May 2018 / Revised: 13 June 2018 / Accepted: 13 June 2018 / Published: 20 June 2018
Abstract
Many recent studies (e.g., IFR: International Federation of Robotics, 2016) predict that the number of robots (industrial, service/social, intelligent/autonomous) will increase enormously in the future. Robots are directly involved in human life. Industrial robots, household robots, medical robots, assistive robots, sociable/entertainment robots, and war robots all play important roles in human life and raise crucial ethical problems for our society. The purpose of this paper is to provide an overview of the fundamental concepts of robot ethics (roboethics) and some future prospects of robots and roboethics, as an introduction to the present Special Issue of the journal Information on “Roboethics”. We start with the question of what roboethics is, as well as a discussion of the methodologies of roboethics, including a brief look at the branches and theories of ethics in general. Then, we outline the major branches of roboethics, namely: medical roboethics, assistive roboethics, sociorobot ethics, war roboethics, autonomous car ethics, and cyborg ethics. Finally, we present the prospects for the future of robotics and roboethics.
(This article belongs to the Special Issue ROBOETHICS)

Open Access Article A Semi-Empirical Performance Study of Two-Hop DSRC Message Relaying at Road Intersections
Information 2018, 9(6), 147; https://doi.org/10.3390/info9060147
Received: 20 April 2018 / Revised: 9 June 2018 / Accepted: 17 June 2018 / Published: 18 June 2018
Abstract
This paper is focused on a vehicle-to-vehicle (V2V) communication system operating at a road intersection, where the communication links can be either line-of-sight (LOS) or non-line-of-sight (NLOS). We present a semi-empirical analysis of the packet delivery ratio of dedicated short-range communication (DSRC) safety messages for both LOS and NLOS scenarios using a commercial transceiver. In an NLOS scenario in which the reception of a safety message may be heavily blocked by concrete buildings, direct communication between the on-board units (OBUs) of vehicles through the IEEE 802.11p standard tends to be unreliable. On the basis of the semi-empirical results of safety message delivery at an intersection, we propose two relaying mechanisms (namely, simple relaying and network-coded relaying) via a road-side unit (RSU) to improve the delivery ratio of safety messages. Specifically, we design RSU algorithms that optimize the number of relayed messages so as to maximize the message delivery ratio of the entire system in the presence of data packet collisions. Numerical results show that our proposed relaying schemes lead to a significant increase in safety message delivery rates.
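The network-coded relaying idea can be conveyed with a short sketch: the RSU broadcasts the bitwise XOR of two safety messages in a single transmission, and each vehicle recovers the other's message using its own copy as the decoding key. The message contents and zero-padding below are illustrative, not the paper's actual frame format.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

msg_a = b"vehicle A: pos/speed"
msg_b = b"vehicle B: pos/speed"
n = max(len(msg_a), len(msg_b))                    # pad to a common length
msg_a, msg_b = msg_a.ljust(n, b"\0"), msg_b.ljust(n, b"\0")

coded = xor_bytes(msg_a, msg_b)        # one RSU broadcast replaces two
recovered_b = xor_bytes(coded, msg_a)  # decoded at vehicle A
recovered_a = xor_bytes(coded, msg_b)  # decoded at vehicle B
```

Each vehicle already holds its own message, so one coded broadcast serves both NLOS receivers, which is why the scheme can raise the delivery ratio without extra channel load.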

Open Access Article Make Flows Great Again: A Hybrid Resilience Mechanism for OpenFlow Networks
Information 2018, 9(6), 146; https://doi.org/10.3390/info9060146
Received: 2 April 2018 / Revised: 29 May 2018 / Accepted: 13 June 2018 / Published: 15 June 2018
Abstract
A top concern in Software-Defined Networking (SDN) is the management of network flows. Resource limitations in SDN devices, e.g., Ternary Content Addressable Memory (TCAM) size, and the signaling overhead between the control and data plane elements can impose scalability restrictions on a network. A notable SDN technology is the OpenFlow protocol, and failures in links and nodes inside an OpenFlow network can lead to drawbacks such as packet loss. This work proposes Local Node Group fast reroute (LONG), a hybrid resilience mechanism for OpenFlow networks that combines protection and restoration resilience mechanisms. The results indicate that LONG is a practical approach when compared with state-of-the-art algorithms.
(This article belongs to the Section Information and Communications Technology)

Open Access Article Multiple Attributes Group Decision-Making under Interval-Valued Dual Hesitant Fuzzy Unbalanced Linguistic Environment with Prioritized Attributes and Unknown Decision-Makers’ Weights
Information 2018, 9(6), 145; https://doi.org/10.3390/info9060145
Received: 11 May 2018 / Revised: 4 June 2018 / Accepted: 12 June 2018 / Published: 14 June 2018
Abstract
In this paper, we investigate an effective approach to a special type of ill-defined, complicated multiple attribute group decision-making (MAGDM) problem, which exhibits the combined complexities of decision hesitancy, prioritized evaluative attributes, and unknown decision-makers’ weights. To accommodate decision hesitancy, we employ a compound expression tool, the interval-valued dual hesitant fuzzy unbalanced linguistic set (IVDHFUBLS), to help decision-makers elicit their assessments more comprehensively and completely. To exploit prioritization relations among evaluating attributes, we develop a prioritized weighted aggregation operator for IVDHFUBLS-based decision-making scenarios and then analyze its properties and special cases. To objectively derive the unknown decision-makers’ weighting vector, we next develop a hybrid model that simultaneously takes into account the overall accuracy measure of each individual decision matrix and the maximizing deviation among all decision matrices. Furthermore, on the strength of the above methods, we construct an MAGDM approach and demonstrate its practicality and effectiveness through an applied study of a green supplier selection problem.
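The prioritization idea can be illustrated with a crisp sketch of Yager-style prioritized weighting, with plain satisfaction scores standing in for the paper's IVDHFUBLS assessments: each attribute's weight is throttled by how well all higher-priority attributes are satisfied.

```python
def prioritized_weights(scores):
    """Yager-style prioritized weights from crisp satisfaction scores in [0,1],
    ordered from highest priority down: T_1 = 1, T_j = prod of the scores of
    all higher-priority attributes, normalized to sum to 1."""
    T, t = [], 1.0
    for s in scores:
        T.append(t)
        t *= s
    total = sum(T)
    return [x / total for x in T]

w = prioritized_weights([0.9, 0.5, 0.8])
```

For these scores the weights decrease monotonically even though the third score exceeds the second, because a poorly satisfied second attribute suppresses everything below it in the priority order.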

Open Access Article A Semantic Model for Selective Knowledge Discovery over OAI-PMH Structured Resources
Information 2018, 9(6), 144; https://doi.org/10.3390/info9060144
Received: 9 May 2018 / Revised: 31 May 2018 / Accepted: 7 June 2018 / Published: 12 June 2018
Abstract
This work presents OntoOAI, a semantic model for the selective discovery of knowledge about resources structured with the OAI-PMH protocol, to verify the feasibility of, and account for the limitations in, applying Semantic Web technologies to data sets for selective knowledge discovery, understood as the process of finding resources that were not explicitly requested by a user but are potentially useful based on their context. OntoOAI is tested with a combination of three sources of information: Redalyc.org, the portal of the Network of Journals of Latin America and the Caribbean, Spain, and Portugal; the institutional repository of Roskilde University (called RUDAR); and DBpedia. Its application confirms that it is feasible to use semantic technologies to achieve selective knowledge discovery and illustrates the limitations of using OAI-PMH data for this purpose.
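Harvesting over OAI-PMH starts from its small set of HTTP verbs; the sketch below builds a ListRecords request for Dublin Core metadata, the kind of request an OntoOAI-style pipeline would issue to a repository. The endpoint URL is a placeholder, not one of the repositories used in the paper.

```python
from urllib.parse import urlencode

def list_records_url(base: str, metadata_prefix: str = "oai_dc", **kw) -> str:
    """Build an OAI-PMH ListRecords request URL; extra keyword arguments
    (e.g., set or resumptionToken) become additional query parameters."""
    query = {"verb": "ListRecords", "metadataPrefix": metadata_prefix, **kw}
    return base + "?" + urlencode(query)

url = list_records_url("https://example.org/oai")
```

The returned XML would then be parsed into entities and mapped onto the OntoOAI ontology for the selective-discovery step.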

Open Access Article An Extended-Tag-Induced Matrix Factorization Technique for Recommender Systems
Information 2018, 9(6), 143; https://doi.org/10.3390/info9060143
Received: 24 April 2018 / Revised: 3 June 2018 / Accepted: 8 June 2018 / Published: 11 June 2018
Abstract
Social tag information has been used by recommender systems to handle the problem of data sparsity. Most recent tag-induced recommendation methods consider the relationships between users/items and tags. However, sparse tag information remains challenging for most existing methods. In this paper, we propose an Extended-Tag-Induced Matrix Factorization technique for recommender systems, which exploits correlations among tags derived from tag co-occurrence to improve the performance of recommender systems, even in the case of sparse tag information. The proposed method integrates the coupled similarity between tags, calculated from the co-occurrences of tags in the same items, to extend each item’s tags. Finally, item similarity based on the extended tags is utilized as an item relationship regularization term to constrain the matrix factorization process. The MovieLens and Book-Crossing datasets are adopted to evaluate the performance of the proposed algorithm. The experimental results show that the proposed method can alleviate the impact of tag sparsity and improve the performance of recommender systems.
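The tag-extension step can be sketched from co-occurrence counts alone. The toy item-tag data below is hypothetical, and the cosine-style normalization is one plausible reading of the coupled similarity described in the abstract, not the paper's exact formula.

```python
from collections import Counter
from itertools import combinations
from math import sqrt

# Hypothetical toy data: item -> set of social tags
item_tags = {
    "m1": {"sci-fi", "space"},
    "m2": {"space", "nasa"},
    "m3": {"sci-fi", "space", "nasa"},
}

# Count tag occurrences and within-item co-occurrences
occ, cooc = Counter(), Counter()
for tags in item_tags.values():
    occ.update(tags)
    cooc.update(frozenset(p) for p in combinations(sorted(tags), 2))

def tag_sim(t1: str, t2: str) -> float:
    """Cosine-style similarity between two tags from co-occurrence counts."""
    if t1 == t2:
        return 1.0
    return cooc[frozenset((t1, t2))] / sqrt(occ[t1] * occ[t2])

def extend(tags, all_tags, thresh=0.5):
    """Extend an item's tag set with tags similar to its existing ones."""
    return tags | {t for t in all_tags
                   if any(tag_sim(t, s) >= thresh for s in tags)}
```

An item tagged only "sci-fi" is extended with "space", since the two tags co-occur frequently; the item similarity used as the regularization term would then be computed over these extended sets.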
(This article belongs to the Section Information Systems)

Open Access Article Pythagorean Fuzzy Muirhead Mean Operators and Their Application in Multiple-Criteria Group Decision-Making
Information 2018, 9(6), 142; https://doi.org/10.3390/info9060142
Received: 25 May 2018 / Revised: 5 June 2018 / Accepted: 7 June 2018 / Published: 11 June 2018
Abstract
As a generalization of the intuitionistic fuzzy set (IFS), a Pythagorean fuzzy set has more flexibility than an IFS in expressing uncertainty and fuzziness in the process of multiple criteria group decision-making (MCGDM). Meanwhile, the prominent advantage of the Muirhead mean (MM) operator is that it can reflect the relationships among the various input arguments through a changeable parameter vector. Motivated by these characteristics, in this study, we introduce the MM operator into the Pythagorean fuzzy context to expand its applied fields. To do so, we present the Pythagorean fuzzy MM (PFMM) operators and the Pythagorean fuzzy dual MM (PFDMM) operator to fuse Pythagorean fuzzy information. Then, we investigate some of their properties and give some special cases related to the parameter vector. In addition, based on the developed operators, two MCGDM methods under the Pythagorean fuzzy environment are proposed. An example is given to verify the validity and feasibility of our proposed methods, and a comparative analysis is provided to show their advantages.
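For readers unfamiliar with the operator, the crisp (non-fuzzy) Muirhead mean underlying the PFMM operators averages the permuted products of the arguments raised to the entries of the parameter vector P; special cases of P recover familiar means. The sketch below is the classical crisp definition, not the Pythagorean fuzzy extension developed in the paper.

```python
from itertools import permutations
from math import prod, factorial

def muirhead_mean(a, p):
    """Crisp Muirhead mean MM^P(a_1..a_n): average prod_j a_sigma(j)^p_j
    over all permutations sigma, then take the 1/sum(p) root."""
    n = len(a)
    total = sum(prod(x ** pj for x, pj in zip(perm, p))
                for perm in permutations(a))
    return (total / factorial(n)) ** (1 / sum(p))

a = [2.0, 3.0, 4.0]
print(muirhead_mean(a, [1, 0, 0]))  # -> 3.0 (arithmetic mean)
```

P = (1, 0, ..., 0) recovers the arithmetic mean and P = (1/n, ..., 1/n) the geometric mean, which is what makes the parameter vector a tunable way of capturing interrelationships among the inputs.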

Open Access Article Best Practices Kits for the ICT Governance Process within the Secretariat of State-Owned Companies of Brazil and Regarding these Public Companies
Information 2018, 9(6), 141; https://doi.org/10.3390/info9060141
Received: 17 April 2018 / Revised: 10 May 2018 / Accepted: 5 June 2018 / Published: 9 June 2018
Abstract
This article introduces an Information and Communication Technology Governance Kit to be used by the Brazilian Secretariat of Coordination and Governance of State Enterprises (in Portuguese, Secretaria de Coordenação e Governança das Empresas Estatais—SEST) with regard to the companies it governs. The proposed kit is a guidance instrument that presents a set of best practices and preconditions aimed at developing and implementing improvements in the management of ICT resources by Brazilian state-owned companies controlled by SEST. The proposed kit comprises three situation scenarios and four maturity levels. For each proposed process, artifacts and templates are presented so that the controlled companies can implement their respective processes and guide improvements in their ICT governance maturity level. Considering that SEST is the principal entity in this governance structure, the main contribution of the proposed kits is to facilitate, guide, and improve the maturity level of all Brazilian state-owned enterprises.

Open Access Article Target Tracking Algorithm Based on an Adaptive Feature and Particle Filter
Information 2018, 9(6), 140; https://doi.org/10.3390/info9060140
Received: 10 May 2018 / Revised: 6 June 2018 / Accepted: 7 June 2018 / Published: 8 June 2018
Abstract
To boost the robustness of the traditional particle-filter-based tracking algorithm in complex scenes and to tackle the drift problem caused by fast-moving targets, an improved particle-filter-based tracking algorithm is proposed. Firstly, the particles are divided into two groups that are placed separately: the first group is made large enough to ensure that as many particles as possible cover the target, and the second group is then placed at the location of the first-group particle with the highest similarity to the template, to improve the tracking accuracy. Secondly, in order to obtain a sparser solution, a novel minimization model for an Lp tracker is proposed. Finally, an adaptive multi-feature fusion strategy is proposed to deal with more complex scenes. The experimental results demonstrate that the proposed algorithm not only improves tracking robustness, but also enhances tracking accuracy in complex scenes. In addition, our tracker achieves better accuracy and robustness than several state-of-the-art trackers.
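A minimal bootstrap particle filter conveys the predict-weight-resample loop that the improved algorithm builds on. The 1D target, Gaussian noise model, and multinomial resampling below are simplifications for illustration, not the paper's two-stage particle placement or Lp minimization model.

```python
import math
import random

def particle_filter(observations, n_particles=500, noise=1.0, seed=1):
    """Bootstrap particle filter for a noisy 1D target position."""
    rng = random.Random(seed)
    particles = [rng.gauss(observations[0], 5.0) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # predict: diffuse particles with motion noise
        particles = [x + rng.gauss(0.0, noise) for x in particles]
        # update: weight each particle by the Gaussian likelihood of z
        weights = [math.exp(-0.5 * ((z - x) / noise) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # resample (multinomial; systematic resampling is common in practice)
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

track = particle_filter([float(t) for t in range(10)])
```

In a visual tracker the 1D state becomes a bounding-box state and the Gaussian likelihood is replaced by the similarity between a candidate region and the template, which is exactly where the adaptive multi-feature fusion plugs in.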
(This article belongs to the Section Information Processes)

Open Access Article Multiple Congestion Points and Congestion Reaction Mechanisms for Improving DCTCP Performance in Data Center Networks
Information 2018, 9(6), 139; https://doi.org/10.3390/info9060139
Received: 23 February 2018 / Revised: 22 May 2018 / Accepted: 6 June 2018 / Published: 8 June 2018
Abstract
To address problems such as long delays, latency fluctuations, and frequent timeouts of the conventional Transmission Control Protocol (TCP) in a data center environment, Data Center TCP (DCTCP) has been proposed as a TCP replacement to satisfy the requirements of data center networks. It has gained popularity in both academia and industry owing to its high throughput and low latency, and it is widely deployed in data centers. However, recent research on the performance of DCTCP has found that the sender’s congestion window often shrinks to one segment, which results in timeouts. In addition, the nonlinear marking mechanism of DCTCP causes severe queue oscillation, which results in low throughput. To address these issues, we propose multiple congestion points using a double threshold and congestion reaction using window adjustment (DT-CWA) mechanisms to improve the performance of DCTCP by reducing the number of timeouts. The results of a series of simulations on a typical data center network topology using the Qualnet network simulator demonstrate that the proposed window-based solution can significantly reduce timeouts and noticeably improve throughput compared to DCTCP under various network conditions.
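The DCTCP congestion reaction that the DT-CWA mechanisms modify can be sketched as follows. The parameters (g = 1/16, one update per window of ACKs) follow the usual DCTCP description; this is a sketch of the baseline behaviour, not the paper's proposal.

```python
def dctcp_react(cwnd, alpha, marked, acked, g=1.0 / 16):
    """One round of DCTCP's reaction: update the running estimate alpha of
    the ECN-marked fraction F, then scale cwnd by (1 - alpha/2) instead of
    halving it as conventional TCP would on congestion."""
    F = marked / acked                    # fraction of marked ACKs this window
    alpha = (1 - g) * alpha + g * F       # EWMA of the congestion extent
    if marked:                            # congestion observed this window
        cwnd = max(1.0, cwnd * (1 - alpha / 2))
    return cwnd, alpha

cwnd, alpha = dctcp_react(cwnd=100.0, alpha=0.0, marked=8, acked=100)
```

Because the reduction is proportional to the marked fraction, light congestion trims the window only slightly; the failure mode the abstract describes is the opposite extreme, where repeated heavy marking drives cwnd down to a single segment.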
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2017))

Open Access Article Automatically Specifying a Parallel Composition of Matchers in Ontology Matching Process by Using Genetic Algorithm
Information 2018, 9(6), 138; https://doi.org/10.3390/info9060138
Received: 25 April 2018 / Revised: 5 June 2018 / Accepted: 6 June 2018 / Published: 7 June 2018
Abstract
Today, the amount of available data is increasing rapidly because of advances in information and communications technology. As a result, many mutually heterogeneous data sources that describe the same domain of interest exist. To facilitate the integration of these heterogeneous data sources, an ontology can be used, as it enriches the knowledge of a data source by giving a detailed description of the entities and their mutual relations within the domain of interest. Ontology matching is a key issue in integrating heterogeneous data sources described by ontologies, as it eases the management of data coming from various sources. An ontology matching system consists of several basic matchers. To determine high-quality correspondences between the entities of compared ontologies, the matching results of these basic matchers should be combined by an aggregation method. In this paper, a new weighted aggregation method for the parallel composition of basic matchers, based on a genetic algorithm, is presented. The evaluation confirms the high quality of the new aggregation method: it improves the process of matching two ontologies by assigning higher confidence values to correctly found correspondences, thus increasing the quality of the matching results.
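The parallel composition being tuned can be sketched as a weighted sum of the basic matchers' similarity values for a single entity pair. The matcher types and weights below are illustrative; the weights are what the genetic algorithm would evolve against a reference alignment.

```python
def aggregate(similarities, weights):
    """Parallel composition: combine the basic matchers' similarity values
    (each in [0,1]) for one entity pair into a single confidence score."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * s for w, s in zip(weights, similarities))

# e.g., scores from a string-based, a structural, and a linguistic matcher,
# with a candidate weight vector from the genetic algorithm's population
score = aggregate([0.9, 0.4, 0.7], [0.5, 0.2, 0.3])
```

The genetic algorithm's fitness function would then score each candidate weight vector by how well the aggregated confidences separate correct from incorrect correspondences.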
(This article belongs to the Section Information Systems)

Open Access Article A Novel Method for Determining the Attribute Weights in the Multiple Attribute Decision-Making with Neutrosophic Information through Maximizing the Generalized Single-Valued Neutrosophic Deviation
Information 2018, 9(6), 137; https://doi.org/10.3390/info9060137
Received: 19 April 2018 / Revised: 28 May 2018 / Accepted: 1 June 2018 / Published: 7 June 2018
Abstract
The purpose of this paper is to investigate the determination of weights in multiple attribute decision-making (MADM) with single-valued neutrosophic information. We first introduce a generalized single-valued neutrosophic deviation measure for a group of single-valued neutrosophic sets (SVNSs), and then present a novel and simple nonlinear optimization model that determines the attribute weights by maximizing the total deviation of all attribute values, whether the attribute weights are partly known or completely unknown. Compared with the existing method based on the deviation measure, the presented approach does not need to normalize the optimal solution and makes it easier to integrate subjective and objective information about attribute weights in neutrosophic MADM problems. Moreover, the proposed nonlinear optimization model yields an exact and straightforward formula for the attribute weights when they are completely unknown. After the weights are obtained, the neutrosophic information of each alternative is aggregated using the single-valued neutrosophic weighted average (SVNWA) operator. All alternatives are then ranked, and the most preferred one(s) is easily selected according to the score function and accuracy function. Finally, an example from the literature is examined to verify the effectiveness and applicability of the developed approach. The example is also used to demonstrate how the approach overcomes some drawbacks of the existing method based on maximizing deviation.
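The maximizing-deviation idea is easy to see with plain numeric scores standing in for the single-valued neutrosophic values and their deviation measure: an attribute that separates the alternatives more strongly receives a larger weight, and one that never discriminates receives none. This is a generic sketch of the principle, not the paper's neutrosophic formula.

```python
def max_deviation_weights(matrix):
    """matrix[i][j] is the score of alternative i under attribute j.
    Each attribute's weight is proportional to the total pairwise
    deviation of the alternatives' scores under that attribute."""
    m, n = len(matrix), len(matrix[0])
    dev = [sum(abs(matrix[i][j] - matrix[k][j])
               for i in range(m) for k in range(m))
           for j in range(n)]
    total = sum(dev)
    return [d / total for d in dev]

scores = [[0.9, 0.5],
          [0.1, 0.5],
          [0.5, 0.5]]
w = max_deviation_weights(scores)  # the second attribute never discriminates
```

The rationale is that an attribute on which every alternative scores identically carries no information for ranking, so it should not influence the aggregation.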
(This article belongs to the Section Information Theory and Methodology)
Open Access Article Impact of Reciprocity in Information Spreading Using Epidemic Model Variants
Information 2018, 9(6), 136; https://doi.org/10.3390/info9060136
Received: 25 April 2018 / Revised: 31 May 2018 / Accepted: 1 June 2018 / Published: 5 June 2018
Abstract
The use of online social networks has become a standard medium for social interactions and information spreading. Due to the significant amount of data available online, social network analysis has become relevant to researchers of diverse domains for studying and analysing innovative patterns, friendships, and relationships. Message dissemination through these networks is a complex and dynamic process. Moreover, the presence of reciprocal links intensifies the whole propagation process and improves the chances of reaching the target node. We therefore empirically investigated the relative importance of reciprocal relationships in directed social networks for information spreading. Since the dynamics of information diffusion have considerable qualitative similarities with the spread of infections, we analysed six different variants of the Susceptible–Infected (SI) epidemic spreading model to evaluate the effect of reciprocity. By analysing three different directed networks on different network metrics using these variants, we establish the dominance of reciprocal links compared to non-reciprocal links. This study also contributes towards a closer examination of the subtleties responsible for maintaining network connectivity.
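The base SI dynamics that the six variants build on can be sketched as a discrete-time simulation over a directed adjacency list; the toy graph and parameters below are illustrative, not one of the paper's networks or variants.

```python
import random

def si_spread(adj, seed, beta=1.0, steps=5, rng=None):
    """Discrete-time Susceptible-Infected spread on a directed graph:
    each step, every infected node infects each susceptible out-neighbour
    with probability beta. Once infected, a node stays infected."""
    rng = rng or random.Random(42)
    infected = {seed}
    for _ in range(steps):
        new = {v for u in infected for v in adj.get(u, [])
               if v not in infected and rng.random() < beta}
        if not new:
            break
        infected |= new
    return infected

# toy directed network: the 0<->1 pair is reciprocal, 1->2 is not
adj = {0: [1], 1: [0, 2], 2: []}
reached = si_spread(adj, seed=0)
```

Comparing runs on a graph with its reciprocal edges intact against the same graph with one direction of each reciprocal pair removed is the kind of experiment that isolates the contribution of reciprocity.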

Open Access Article Going beyond the “T” in “CTC”: Social Practices as Care in Community Technology Centers
Information 2018, 9(6), 135; https://doi.org/10.3390/info9060135
Received: 15 March 2018 / Revised: 15 May 2018 / Accepted: 22 May 2018 / Published: 3 June 2018
Abstract
Community technology center (CTC) is a term usually associated with facilities that provide free or affordable computer and internet access, and sometimes training, to people in underserved communities. Despite the large number of studies of CTCs, the literature has focused primarily on the use of ICTs as the main, if not the only, activity in these centers. When it comes to addressing social concerns, the literature has often seen them as an outcome of ICT use; it does not highlight CTCs as an inherently important social space that helps to tackle social issues. Thus, in this study, I present an ethnographic account of how residents of favelas (urban slums in Brazil)—who are from understudied and marginalized areas—used these centers beyond the “T” (technology) in order to fulfill some of their social needs. I highlight the social practices afforded by the CTCs that were beneficial to the underserved communities. By social practices, I focus exclusively on the acts of care performed by individuals in order to address self and community needs. I argue that CTCs go beyond the use of technology and provide marginalized people with a key social space where they can alleviate some of their social concerns, such as lack of proper education, violence, drug cartel activity, and other implications of being poor.
Open Access Article High Performance Methods for Linked Open Data Connectivity Analytics
Information 2018, 9(6), 134; https://doi.org/10.3390/info9060134
Received: 9 May 2018 / Revised: 29 May 2018 / Accepted: 29 May 2018 / Published: 3 June 2018
Abstract
The main objective of Linked Data is linking and integration, and a major step in evaluating whether this target has been reached is to find all the connections among the Linked Open Data (LOD) Cloud datasets. Connectivity among two or more datasets can be achieved through common entities, triples, literals, and schema elements, while more connections can occur due to equivalence relationships between URIs, such as owl:sameAs, owl:equivalentProperty and owl:equivalentClass, since many publishers use such relationships to declare that their URIs are equivalent to URIs of other datasets. However, no connectivity measurements (or indexes) are available that involve more than two datasets and cover the whole content (e.g., entities, schema, triples) or “slices” (e.g., triples for a specific entity) of datasets, although they can be of primary importance for several real-world tasks, such as information enrichment, dataset discovery, and others. Generally, finding the connections among the datasets is not an easy task, since there is a large number of LOD datasets, and the transitive and symmetric closure of the equivalence relationships must be computed so that no connections are missed. For this reason, we introduce scalable methods and algorithms (a) for computing the transitive and symmetric closure of equivalence relationships (since they can produce more connections between the datasets); (b) for constructing dedicated global semantics-aware indexes that cover the whole content of datasets; and (c) for measuring the connectivity among two or more datasets. Finally, we evaluate the speedup of the proposed approach and report comparative results for over two billion triples.
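Computing the transitive and symmetric closure of equivalence relationships is essentially a union-find problem: URIs linked (directly or indirectly) by owl:sameAs pairs collapse into one equivalence class and are then treated as a single entity when measuring connectivity. The sketch below illustrates the idea in memory; the prefixes and URIs are made up, and the paper's methods are designed to scale far beyond this.

```python
def closure(pairs):
    """Group URIs into equivalence classes under the transitive and
    symmetric closure of the given (a sameAs b) pairs, via union-find."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in pairs:
        parent[find(a)] = find(b)          # union the two classes

    classes = {}
    for x in list(parent):
        classes.setdefault(find(x), set()).add(x)
    return list(classes.values())

same_as = [("ex1:Crete", "ex2:Creta"), ("ex2:Creta", "ex3:Kreta")]
classes = closure(same_as)                 # one class containing all three URIs
```

With the classes in hand, two datasets are connected through an entity exactly when they each contain some URI from the same class, which is the basis for the multi-dataset connectivity measurements.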
(This article belongs to the Special Issue Semantics for Big Data Integration)

Open Access Article A Machine Learning Filter for the Slot Filling Task
Information 2018, 9(6), 133; https://doi.org/10.3390/info9060133
Received: 13 April 2018 / Revised: 25 May 2018 / Accepted: 25 May 2018 / Published: 30 May 2018
Abstract
Slot Filling, a subtask of Relation Extraction, represents a key aspect of building structured knowledge bases usable for semantic-based information retrieval. In this work, we present a machine learning filter whose aim is to enhance the precision of relation extractors while minimizing the impact on recall. Our approach consists of filtering the relation extractors’ output with a binary classifier. This classifier is based on a wide array of features, including syntactic, semantic, and statistical features such as the most frequent part-of-speech patterns or the syntactic dependencies between entities. We evaluated the classifier on the 18 systems participating in the TAC KBP 2013 English Slot Filling track, an evaluation campaign that targets the extraction of 41 pre-identified relations (e.g., title, date of birth, countries of residence, etc.) related to specific named entities (persons and organizations). Our results show that the classifier is able to improve the global precision of the best 2013 system by 20.5% and improve the F1-score for 20 of the 33 relations considered.
(This article belongs to the Section Artificial Intelligence)

Open Access Feature Paper Article An Agent-Based Approach to Interbank Market Lending Decisions and Risk Implications
Information 2018, 9(6), 132; https://doi.org/10.3390/info9060132
Received: 1 April 2018 / Revised: 16 May 2018 / Accepted: 26 May 2018 / Published: 29 May 2018
PDF Full-text (876 KB) | HTML Full-text | XML Full-text
Abstract
In this study, we examine how bank-level lending and borrowing decisions and risk preferences shape the dynamics of the interbank lending market. We develop an agent-based model that incorporates individual bank decisions via the temporal difference reinforcement learning algorithm, using empirical data on 6600 U.S. banks. The model successfully replicates the key characteristics of interbank lending and borrowing relationships documented in the recent literature. A key finding of this study is that risk preferences at the individual bank level can lead to distinctive interbank market structures, which are indicative of the market’s capacity to respond to unexpected shocks. Full article
(This article belongs to the Special Issue Agent-Based Artificial Markets)
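The temporal-difference learning mechanism mentioned in the abstract can be sketched as a tabular agent whose estimate of each state–action value is nudged toward the observed reward plus the discounted best next value. The states, actions, and parameters below are toy placeholders, not the paper's calibration to U.S. bank data.

```python
import random

# Illustrative sketch: a bank agent updating lend/hoard action values with
# one-step temporal-difference (Q-learning-style) updates. State labels and
# reward scales are invented placeholders.

ACTIONS = ["lend", "hoard"]

class BankAgent:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {}  # (state, action) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        # epsilon-greedy: explore occasionally, otherwise pick the best action
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # TD update: move the old estimate toward reward + discounted best next
        best_next = max(self.q.get((next_state, a), 0.0) for a in ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)
```

In an interbank simulation, many such agents would interact each period, with rewards derived from interest income and default losses; heterogeneous risk preferences can then be encoded in how each agent's reward is computed.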

Open AccessArticle Hadoop Cluster Deployment: A Methodological Approach
Information 2018, 9(6), 131; https://doi.org/10.3390/info9060131
Received: 27 February 2018 / Revised: 24 May 2018 / Accepted: 25 May 2018 / Published: 29 May 2018
PDF Full-text (3106 KB) | HTML Full-text | XML Full-text
Abstract
For a long time, data was treated as a general problem because it merely recorded fragments of events without any clear purpose. The last decade, however, has been all about information and how to obtain it. Seeking meaning in data and trying to solve scalability problems, many frameworks have been developed to improve data storage and analysis. Hadoop was presented as a powerful framework for dealing with large amounts of data, but doubts remain about how to approach its deployment and whether there is any reliable method for comparing the performance of distinct Hadoop clusters. This paper presents a methodology based on benchmark analysis to guide Hadoop cluster deployment. The experiments employed Apache Hadoop and the Hadoop distributions of Cloudera, Hortonworks, and MapR, analyzing the architectures both locally and in the cloud, using centralized and geographically distributed servers. The results show that the methodology can be applied to reliably compare different architectures. Additionally, the study suggests that the knowledge acquired can be used to improve the data analysis process through a better understanding of the Hadoop architecture. Full article
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2017))
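A benchmark-driven comparison of the kind the abstract describes could be harnessed as follows: time a standard Hadoop job on each candidate cluster and compare the results. The jar path and job arguments below are placeholders (TeraGen/TeraSort ship with the Hadoop MapReduce examples, but the exact invocation varies by distribution), and this is not the paper's actual harness.

```python
import subprocess
import time

# Hypothetical benchmark harness: jar path, sizes, and HDFS paths are
# placeholders to be adapted to the cluster under test.
BENCHMARKS = {
    "teragen": ["hadoop", "jar", "hadoop-mapreduce-examples.jar",
                "teragen", "1000000", "/bench/in"],
    "terasort": ["hadoop", "jar", "hadoop-mapreduce-examples.jar",
                 "terasort", "/bench/in", "/bench/out"],
}

def run_benchmark(name, runner=subprocess.run):
    """Run one benchmark job and return its wall-clock runtime in seconds."""
    start = time.time()
    runner(BENCHMARKS[name], check=True)
    return time.time() - start
```

Repeating each benchmark several times per architecture (local, centralized cloud, geo-distributed) and comparing the runtime distributions gives the kind of reliable cross-cluster comparison the methodology aims for.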

Open AccessArticle An Interactive Multiobjective Optimization Approach to Supplier Selection and Order Allocation Problems Using the Concept of Desirability
Information 2018, 9(6), 130; https://doi.org/10.3390/info9060130
Received: 9 May 2018 / Revised: 17 May 2018 / Accepted: 22 May 2018 / Published: 23 May 2018
PDF Full-text (1351 KB) | HTML Full-text | XML Full-text
Abstract
In supply chain management, selecting the right supplier is one of the most important decision-making processes for improving corporate competitiveness. In particular, when a buyer considers selecting multiple suppliers, the issue of order allocation must be addressed together with supplier selection. In this article, an interactive multiobjective optimization approach is proposed for the supplier selection and order allocation problem. The concept of desirability is incorporated into the optimization model to account for the principle of diminishing marginal utility. The results are compared with the solutions obtained from weighting methods. This study shows the advantage of the proposed method: the decision-maker directly checks the degree of desirability and learns his/her preference structure through successively improved solutions. Full article
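The desirability concept mentioned in the abstract is commonly formalized as a function mapping each raw objective value into [0, 1], with individual desirabilities combined by a geometric mean. The sketch below uses the standard larger-the-better Derringer–Suich form as an assumption; the paper's exact desirability functions and bounds are not reproduced here.

```python
# Illustrative larger-the-better desirability function (Derringer-Suich form).
# An exponent s < 1 makes the curve concave, capturing diminishing marginal
# utility: early improvements raise desirability faster than later ones.

def desirability(y, lower, target, s=0.5):
    """Map a raw objective value y onto the [0, 1] desirability scale."""
    if y <= lower:
        return 0.0
    if y >= target:
        return 1.0
    return ((y - lower) / (target - lower)) ** s

def overall_desirability(values, bounds):
    """Geometric mean of individual desirabilities (standard aggregation)."""
    ds = [desirability(y, lo, t) for y, (lo, t) in zip(values, bounds)]
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))
```

Because the geometric mean is zero whenever any single desirability is zero, a solution that completely fails one objective cannot be compensated by the others, which is one reason desirability aggregation differs from simple weighted sums.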

Open AccessArticle Hybrid Visualization Approach to Show Documents Similarity and Content in a Single View
Information 2018, 9(6), 129; https://doi.org/10.3390/info9060129
Received: 27 February 2018 / Revised: 16 May 2018 / Accepted: 17 May 2018 / Published: 23 May 2018
PDF Full-text (5858 KB) | HTML Full-text | XML Full-text
Abstract
Multidimensional projection techniques can be employed to project datasets from a higher- to a lower-dimensional space (e.g., a 2D space). These techniques present the relationships among dataset instances by grouping or separating clusters of instances in the projected space according to distance. Several works have used multidimensional projections to aid the exploration of document collections. Even though projection techniques can organize a dataset, the user still needs to read each document to understand how the clusters were formed. Alternatively, techniques such as topic extraction or tag clouds can be employed to summarize document contents. To minimize the exploratory work and to aid cluster analysis, this work proposes a new hybrid visualization that shows both document relationships and content in a single view, employing multidimensional projections to relate documents and tag clouds to summarize them. We show the effectiveness of the proposed approach in the exploration of two document collections composed of world news. Full article
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2017))
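The tag-cloud half of such a hybrid view needs representative terms for each projected group of documents. A minimal sketch, assuming plain whitespace-tokenized text and cluster documents drawn from the full collection, is to rank a cluster's terms by TF-IDF; the projection itself (e.g., via t-SNE or MDS) is assumed to be computed elsewhere and is not shown.

```python
import math
from collections import Counter

# Illustrative sketch: pick the highest TF-IDF terms of a document cluster
# to label its region in the projected view. Tokenization is naive
# whitespace splitting; cluster_docs must be a subset of all_docs.

def tfidf_top_terms(cluster_docs, all_docs, k=3):
    # document frequency over the whole collection
    df = Counter()
    for doc in all_docs:
        df.update(set(doc.split()))
    n = len(all_docs)
    # term frequency within the cluster
    tf = Counter()
    for doc in cluster_docs:
        tf.update(doc.split())
    scores = {t: tf[t] * math.log(n / df[t]) for t in tf}
    return [t for t, _ in sorted(scores.items(), key=lambda x: -x[1])[:k]]
```

Rendering each cluster's top terms at its projected position, sized by score, yields the single view combining document similarity (position) and content (terms) that the paper proposes.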
