
Table of Contents

Information, Volume 9, Issue 12 (December 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Displaying articles 1-33
Open Access Article Exploring How Homophily and Accessibility Can Facilitate Polarization in Social Networks
Information 2018, 9(12), 325; https://doi.org/10.3390/info9120325
Received: 30 October 2018 / Revised: 1 December 2018 / Accepted: 11 December 2018 / Published: 14 December 2018
PDF Full-text (1465 KB) | HTML Full-text | XML Full-text
Abstract
Polarization in online social networks has gathered a significant amount of attention in the research community and in the public sphere due to stark disagreements among millions of participants on topics surrounding politics, climate, the economy, and other areas where agreement is required. This work investigates in greater depth a type of model that can produce ideological segregation as a result of polarization, depending on the strength of homophily and the ability of users to access like-minded individuals. Whether increased access can induce greater societal separation is important to investigate, and this work sheds further light on the phenomenon. Central to the hypothesis of homophilic alignment in friendship generation is the notion of a discussion group or community. These are modeled, and their effect on the dynamics of polarization is investigated. The results demonstrate that the initial phases of an ideological exchange can increase polarization, although a consensus is expected in the long run, and that the separation between groups is amplified when groups are constructed with homophilic ideological preferences. Full article
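The paper's full model is described in the article itself; as a rough, hypothetical illustration of the mechanism discussed above (homophily-weighted group choice followed by within-group opinion averaging), consider the toy simulation below. The update rule and all parameters are invented for the example.

```python
import numpy as np

# Toy illustration (not the paper's exact model): agents pick discussion
# groups homophilically, then partially adopt their group's mean opinion.
rng = np.random.default_rng(0)
n_agents, n_groups, n_steps, homophily = 200, 2, 50, 4.0

opinions = rng.uniform(-1, 1, n_agents)
group_means = rng.uniform(-1, 1, n_groups)

for _ in range(n_steps):
    # Probability of joining a group decays with ideological distance;
    # `homophily` controls how strongly distance is penalised.
    dist = np.abs(opinions[:, None] - group_means[None, :])
    weights = np.exp(-homophily * dist)
    probs = weights / weights.sum(axis=1, keepdims=True)
    membership = np.array([rng.choice(n_groups, p=p) for p in probs])

    # Members drift toward their group's mean opinion.
    for g in range(n_groups):
        members = membership == g
        if members.any():
            group_means[g] = opinions[members].mean()
            opinions[members] += 0.1 * (group_means[g] - opinions[members])

# A large gap between group means indicates ideological segregation.
print("separation between group means:", np.abs(group_means[0] - group_means[1]))
```

Raising the homophily parameter in this sketch widens the gap between the group means, which is the qualitative effect the abstract describes.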
(This article belongs to the Special Issue Information Diffusion in Social Networks)

Open Access Article Gamified Software to Support the Design of Business Innovation
Information 2018, 9(12), 324; https://doi.org/10.3390/info9120324
Received: 30 September 2018 / Revised: 10 November 2018 / Accepted: 12 December 2018 / Published: 14 December 2018
PDF Full-text (1299 KB) | HTML Full-text | XML Full-text
Abstract
Business innovation is a process that requires creativity and benefits from extensive collaboration. Currently, computational support in creativity processes is limited, but modern techniques could speed these processes up. In this context, we provide such computational support with software for business innovation design that uses computational creativity techniques. Furthermore, the software enables a gamified process that mimics evolutionary methods and relies on a voting mechanism to increase user engagement and collaboration. The software includes a business innovation ontology representing the domain knowledge that is used to generate and select a set of diverse preliminary representations of business ideas. The representations most promising in terms of novelty and potential impact are identified and used to ignite a business innovation game, in which team members collaborate to elaborate new innovation ideas based on those inputs until they converge on a shortlist of business model proposals. The main features of the approach are illustrated by means of a running example concerning innovative services for smart cities. Full article

Open Access Article Tradeoff Analysis between Spectral and Energy Efficiency Based on Sub-Channel Activity Index in Wireless Cognitive Radio Networks
Information 2018, 9(12), 323; https://doi.org/10.3390/info9120323
Received: 17 October 2018 / Revised: 1 December 2018 / Accepted: 4 December 2018 / Published: 14 December 2018
PDF Full-text (2213 KB) | HTML Full-text | XML Full-text
Abstract
In recent years, the rapid evolution of wireless technologies has led to high demand for spectral resources. To meet this demand, good spectrum management is required, which calls for more efficient use of the spectrum. In this paper, we present a general framework for the tradeoff between spectral efficiency (SE) and energy efficiency (EE) in cellular cognitive radio networks (CCRN), together with their respective limits. We analyze the system taking into account the different types of power used in the CCRN, namely the spectrum detection power (Zs) and the relay power (Zr). The optimal transmit power allocation policy is formulated as a function of the sub-channel activity index (SAI) and posed as an optimization problem that maximizes spectrum utilization and minimizes the energy consumption of the secondary system's base station, subject to the constraints of the primary user system. We also evaluate the collaborative sub-channel activity index describing the activity of the primary users in the CCRN. Theoretical analyses and simulation results demonstrate that SE and EE in the CCRN are not in opposition and that an optimal tradeoff between them can be achieved. Compared with a cognitive cellular network in which the secondary base stations adopt an equal power allocation strategy across sub-channels, the proposed scheme shows a significant improvement. Therefore, the model proposed in this paper offers a better tradeoff between SE and EE. Full article
(This article belongs to the Section Information and Communications Technology)

Open Access Article Improved Joint Probabilistic Data Association (JPDA) Filter Using Motion Feature for Multiple Maneuvering Targets in Uncertain Tracking Situations
Information 2018, 9(12), 322; https://doi.org/10.3390/info9120322
Received: 22 November 2018 / Revised: 8 December 2018 / Accepted: 8 December 2018 / Published: 13 December 2018
PDF Full-text (4663 KB) | HTML Full-text | XML Full-text
Abstract
To track multiple maneuvering targets in cluttered environments with uncertain measurement noises and uncertain target dynamic models, an improved joint probabilistic data association-fuzzy recursive least squares filter (IJPDA-FRLSF) is proposed. In the proposed filter, two uncertainty models, of measurements and of observed angles, are first established. Next, these two models are employed to construct an additive fusion strategy, which is then used to calculate generalized joint association probabilities of measurements belonging to different targets. The obtained probabilities replace the joint association probabilities calculated by the standard joint probabilistic data association (JPDA) method. Considering the advantage of the fuzzy recursive least squares filter (FRLSF) in tracking a single maneuvering target, namely that it relaxes the restrictive assumptions on measurement noise covariances and target dynamic models, FRLSF is still used to update the state of each target track. Thus, the proposed filter not only retains the advantage of FRLSF but can also adaptively adjust the weights of measurements and observed angles in the generalized joint association probabilities according to their uncertainty. The performance of the proposed filter is evaluated in two experiments with simulation data and real data, and is found to be better than that of three other filters in terms of tracking accuracy and average run time. Full article
(This article belongs to the Section Information Processes)

Open Access Article A Mobile Acquisition System and a Method for Hips Sway Fluency Assessment
Information 2018, 9(12), 321; https://doi.org/10.3390/info9120321
Received: 31 October 2018 / Revised: 8 December 2018 / Accepted: 10 December 2018 / Published: 12 December 2018
PDF Full-text (664 KB) | HTML Full-text | XML Full-text
Abstract
The present contribution focuses on the estimation of the Cartesian kinematic jerk of the hips' orientation during a full three-dimensional movement in the context of enabling eHealth applications of advanced mathematical signal analysis. The kinematic jerk index is estimated on the basis of gyroscopic signals acquired offline through a smartphone. A dedicated free mobile application is used to acquire the gyroscopic signals and to transmit them to a personal computer over a wireless network. The personal computer processes the acquired data and returns the kinematic jerk index associated with a motor task. A comparison of the kinematic jerk index values on a number of data sets confirms that such an index can be used to evaluate the fluency of hips orientation during motion. The present research confirms that the proposed gyroscopic data acquisition/processing setup constitutes an inexpensive and portable solution for motion fluency analysis. The proposed data-acquisition and data-processing setup may serve as a supporting eHealth technology in clinical biomechanics as well as in sports science. Full article
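As a sketch of how such a derivative-based fluency index could be computed from a recorded gyroscope signal, the snippet below integrates the squared second derivative of the angular-velocity samples and normalises by duration. This form of the index is an assumption made for illustration; the paper's exact definition of the Cartesian kinematic jerk index may differ, and the sampling rate and signal are synthetic.

```python
import numpy as np

def jerk_index(gyro: np.ndarray, fs: float) -> float:
    """Illustrative fluency index from an N x 3 gyroscope signal sampled at fs Hz.

    Assumption for this sketch: the index is the time-integral of the squared
    second derivative of angular velocity, normalised by movement duration."""
    dt = 1.0 / fs
    d_omega = np.gradient(gyro, dt, axis=0)      # angular acceleration
    d2_omega = np.gradient(d_omega, dt, axis=0)  # "angular jerk" proxy
    squared = np.sum(d2_omega ** 2, axis=1)      # combine the three axes
    return float(np.trapz(squared, dx=dt) / (len(gyro) * dt))

# Example: 5 s of synthetic 3-axis gyroscope data at 100 Hz.
fs = 100.0
t = np.arange(0.0, 5.0, 1.0 / fs)
gyro = np.column_stack([np.sin(2 * np.pi * 0.5 * t),
                        0.5 * np.sin(2 * np.pi * 1.0 * t),
                        np.zeros_like(t)])
print("jerk index:", jerk_index(gyro, fs))
```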
(This article belongs to the Special Issue eHealth and Artificial Intelligence)

Open Access Article An Empirical Study of Exhaustive Matching for Improving Motion Field Estimation
Information 2018, 9(12), 320; https://doi.org/10.3390/info9120320
Received: 20 October 2018 / Revised: 6 December 2018 / Accepted: 7 December 2018 / Published: 12 December 2018
PDF Full-text (17968 KB) | HTML Full-text | XML Full-text
Abstract
Optical flow is defined as the motion field of pixels between two consecutive images. Traditionally, in order to estimate the pixel motion field (or optical flow), an energy model is proposed, composed of (i) a data term and (ii) a regularization term. The data term measures the optical flow error and the regularization term imposes spatial smoothness. Traditional variational models use a linearization of the data term; this linearized version fails when the displacement of an object is larger than its own size. Recently, the precision of optical flow methods has been increased by using additional information, obtained from correspondences computed between two images by methods such as SIFT, deep matching, and exhaustive search. This work presents an empirical study that evaluates different strategies for locating exhaustive correspondences to improve flow estimation. We considered different locations for matching: random locations, uniform locations, and locations of maximum gradient magnitude. Additionally, we tested the combination of large and medium gradients with uniform locations. We evaluated our methodology on the MPI-Sintel database, which is a state-of-the-art evaluation database. Our results on MPI-Sintel show that our proposal outperforms classical methods such as Horn-Schunck, TV-L1, and LDOF, and performs similarly to MDP-Flow. Full article
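One of the strategies evaluated above, seeding correspondences at locations of maximum gradient magnitude and matching them exhaustively, can be sketched as follows. The patch size, search radius, and SSD criterion are illustrative assumptions, not the study's exact configuration.

```python
import numpy as np

def match_at_gradient_peaks(img1, img2, n_points=50, patch=7, search=20):
    """Sketch: pick high-gradient pixels in img1 and find the best SSD match
    for each patch within a search window in img2 (exhaustive search)."""
    half, s = patch // 2, search
    gy, gx = np.gradient(img1.astype(float))
    grad = np.hypot(gx, gy)
    grad[:half + s, :] = grad[-(half + s):, :] = 0   # keep away from borders
    grad[:, :half + s] = grad[:, -(half + s):] = 0
    idx = np.argsort(grad.ravel())[-n_points:]       # strongest gradients
    ys, xs = np.unravel_index(idx, grad.shape)

    matches = []
    for y, x in zip(ys, xs):
        ref = img1[y - half:y + half + 1, x - half:x + half + 1].astype(float)
        best, best_pos = np.inf, (y, x)
        for dy in range(-s, s + 1):
            for dx in range(-s, s + 1):
                cand = img2[y + dy - half:y + dy + half + 1,
                            x + dx - half:x + dx + half + 1].astype(float)
                ssd = np.sum((ref - cand) ** 2)
                if ssd < best:
                    best, best_pos = ssd, (y + dy, x + dx)
        matches.append(((y, x), best_pos))
    return matches

# Tiny demo: the second frame is the first one shifted by (3, 5) pixels.
rng = np.random.default_rng(1)
frame1 = rng.random((120, 160))
frame2 = np.roll(frame1, shift=(3, 5), axis=(0, 1))
print(match_at_gradient_peaks(frame1, frame2, n_points=5)[:2])
```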
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2018))

Open Access Article A Diabetes Management Information System with Glucose Prediction
Information 2018, 9(12), 319; https://doi.org/10.3390/info9120319
Received: 31 October 2018 / Revised: 6 December 2018 / Accepted: 7 December 2018 / Published: 12 December 2018
PDF Full-text (1034 KB) | HTML Full-text | XML Full-text
Abstract
Diabetes has become a serious health concern. The use and popularization of blood glucose measurement devices have led to a tremendous improvement in health for diabetics. Tracking and maintaining traceability between glucose measurements, insulin doses, and carbohydrate intake can provide useful information to physicians, health professionals, and patients. This paper presents an information system, called GLUMIS (GLUcose Management Information System), aimed at supporting diabetes management activities. It consists of two modules, one for glucose prediction and one for data visualization, together with a reasoner to aid users in their treatment. Through integration with glucose measurement devices, it is possible to collect historical data on the treatment. In addition, integration with a tool called the REALI System allows GLUMIS to also process data on insulin doses and eating habits. Quantitative and qualitative data were collected through an experimental case study involving 10 participants, which demonstrated that the GLUMIS system is feasible. The system was able to discover rules for predicting future blood glucose values by processing the past history of measurements, and it presented reports that can help diabetics choose the amount of insulin they should take and the amount of carbohydrate they should consume during the day. Rules found using one patient's measurements were analyzed by a specialist, who found three of them useful for improving the patient's treatment. One such rule was "if glucose before breakfast is in [47, 89], then glucose at afternoon break is in [160, 306]". The results of the experimental study and the other verifications of the algorithm had a double objective. First, a questionnaire showed that participants found the visualizations easy, or very easy, to understand. Second, the innovative algorithm applied in the GLUMIS system gives the decision maker much more precision and less loss of information than algorithms that require the data to be discretized. Full article
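The interval rule quoted above can be represented very simply; the sketch below shows one hypothetical way to encode and apply such rules (the field names are invented, not the GLUMIS schema).

```python
# Sketch of the kind of interval rule reported above (field names are
# illustrative, not part of the GLUMIS system).
RULES = [
    # (condition field, condition interval) -> (target field, predicted interval)
    (("glucose_before_breakfast", (47, 89)),
     ("glucose_afternoon_break", (160, 306))),
]

def apply_rules(measurements: dict) -> list:
    """Return the prediction of every rule whose condition interval matches."""
    predictions = []
    for (field, (lo, hi)), (target, interval) in RULES:
        value = measurements.get(field)
        if value is not None and lo <= value <= hi:
            predictions.append((target, interval))
    return predictions

print(apply_rules({"glucose_before_breakfast": 75}))
# -> [('glucose_afternoon_break', (160, 306))]
```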
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2018))

Open Access Article A Soft Body Physics Simulator with Computational Offloading to the Cloud
Information 2018, 9(12), 318; https://doi.org/10.3390/info9120318
Received: 26 November 2018 / Revised: 5 December 2018 / Accepted: 7 December 2018 / Published: 11 December 2018
PDF Full-text (3681 KB) | HTML Full-text | XML Full-text
Abstract
We describe the gamification of a soft-body physics simulator. We developed a game, called Jelly Dude, that allows the player to change and modify the game engine by tinkering with various physics parameters, creating custom game levels and installing scripts. The game engine is capable of simulating soft-body physics and can display the simulation results visually in real time. In order to ensure high-quality graphics in real time, we implemented intelligent computational offloading to the cloud using a Jordan neural network (JNN) with a fuzzy logic scheme for short-term prediction of network traffic between a client and a cloud server. The experimental results show that computational offloading allowed us to increase the speed of graphics rendering in terms of frames per second, and to improve the precision of soft-body modeling in terms of the number of particles used to represent a soft body. Full article
(This article belongs to the Special Issue Cloud Gamification)

Open Access Article LICIC: Less Important Components for Imbalanced Multiclass Classification
Information 2018, 9(12), 317; https://doi.org/10.3390/info9120317
Received: 22 October 2018 / Revised: 19 November 2018 / Accepted: 6 December 2018 / Published: 9 December 2018
PDF Full-text (3647 KB) | HTML Full-text | XML Full-text
Abstract
Multiclass classification in cancer diagnostics, using DNA or Gene Expression Signatures, but also classification of bacteria species fingerprints in MALDI-TOF mass spectrometry data, is challenging because of imbalanced data and the high number of dimensions with respect to the number of instances. In this study, a new oversampling technique called LICIC is presented as a valuable instrument for countering both class imbalance and the well-known “curse of dimensionality” problem. The method preserves non-linearities within the dataset while creating new instances without adding noise. The method is compared with other oversampling methods, such as Random Oversampling, SMOTE, Borderline-SMOTE, and ADASYN. F1 scores show the validity of this new technique when used with imbalanced, multiclass, and high-dimensional datasets. Full article
(This article belongs to the Special Issue eHealth and Artificial Intelligence)

Open Access Article Towards Expert-Based Speed–Precision Control in Early Simulator Training for Novice Surgeons
Information 2018, 9(12), 316; https://doi.org/10.3390/info9120316
Received: 14 October 2018 / Revised: 1 December 2018 / Accepted: 5 December 2018 / Published: 9 December 2018
PDF Full-text (2442 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Simulator training for image-guided surgical interventions would benefit from intelligent systems that detect the evolution of task performance, and take control of individual speed–precision strategies by providing effective automatic performance feedback. At the earliest training stages, novices frequently focus on getting faster at the task. This may, as shown here, compromise the evolution of their precision scores, sometimes irreparably, if it is not controlled for as early as possible. Artificial intelligence could help ensure that a trainee reaches her/his optimal individual speed–accuracy trade-off by monitoring individual performance criteria, detecting critical trends at any given moment in time, and alerting the trainee as early as necessary when to slow down and focus on precision, or when to focus on getting faster. It is suggested that, for effective benchmarking, the individual training statistics of novices be compared with the statistics of an expert surgeon. The speed–accuracy functions of novices trained in a large number of experimental sessions reveal differences in individual speed–precision strategies, and clarify why such strategies should be automatically detected and controlled for before further training on specific surgical task models, or clinical models, may be envisaged. How expert benchmark statistics may be exploited for automatic performance control is explained. Full article
(This article belongs to the Special Issue eHealth and Artificial Intelligence)

Open Access Article Empirical Study on the Factors Influencing Process Innovation When Adopting Intelligent Robots at Small- and Medium-Sized Enterprises—The Role of Organizational Supports
Information 2018, 9(12), 315; https://doi.org/10.3390/info9120315
Received: 27 November 2018 / Revised: 6 December 2018 / Accepted: 6 December 2018 / Published: 8 December 2018
PDF Full-text (530 KB) | HTML Full-text | XML Full-text
Abstract
Robot technology at small- and medium-sized enterprises has become a crucial part of current business operations. Beginning with the manufacturing industry, more industries than ever before have recently begun making use of robot technology to increase operational efficiency and productivity. However, prior studies regarding innovation related to intelligent robot use have been limited to developing strategies for describing robot technologies in general. Therefore, we developed a research model for investigating process innovation as it relates to intelligent robots. Based on the literature, two variables of technology benefits (direct usefulness and indirect usefulness) and two constructs of environmental pressure (industry and government) were incorporated into the research model as key determinants of a firm's process innovation. Furthermore, organizational supports were added as moderating variables of the relationship between technology benefits and process innovation. We collected 257 responses from employees in managerial positions at various firms and tested the proposed hypotheses using structural equation modeling in AMOS 22.0. The results revealed that all variables, as well as the moderator, have a significant impact on process innovation. The findings of this study provide theoretical and practical implications for process innovation based on intelligent robot technology. Full article
(This article belongs to the Section Information Systems)

Open Access Article A Quick Algorithm for Binary Discernibility Matrix Simplification Using Deterministic Finite Automata
Information 2018, 9(12), 314; https://doi.org/10.3390/info9120314
Received: 30 October 2018 / Revised: 3 December 2018 / Accepted: 6 December 2018 / Published: 7 December 2018
PDF Full-text (425 KB)
Abstract
The binary discernibility matrix, originally introduced by Felix and Ushio, is a binary matrix representation for storing discernible attributes that can distinguish different objects in decision systems. It is an effective approach for feature selection, knowledge representation and uncertainty reasoning. An original binary discernibility matrix usually contains redundant objects and attributes. These redundant objects and attributes may deteriorate the performance of feature selection and knowledge acquisition. To overcome this shortcoming, row relations and column relations in a binary discernibility matrix are defined in this paper. To compare the relationships of different rows (columns) quickly, we construct deterministic finite automata for a binary discernibility matrix. On this basis, a quick algorithm for binary discernibility matrix simplification using deterministic finite automata (BDMSDFA) is proposed. We make a comparison of BDMR (an algorithm of binary discernibility matrix reduction), IBDMR (an improved algorithm of binary discernibility matrix reduction) and BDMSDFA. Finally, theoretical analyses and experimental results indicate that the algorithm of BDMSDFA is effective and efficient. Full article
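The absorption idea behind the simplification, dropping duplicate rows and columns as well as rows that are supersets of other rows, can be sketched without the deterministic-finite-automata speed-up as follows. This is an illustrative reading of the simplification step, not the BDMSDFA algorithm itself.

```python
import numpy as np

def simplify_binary_discernibility_matrix(m: np.ndarray) -> np.ndarray:
    """Sketch of the simplification idea (without the DFA speed-up):
    remove duplicate rows, remove rows absorbed by a smaller row
    (i.e. rows that are supersets of another row), and remove duplicate columns."""
    m = np.unique(m, axis=0)                       # duplicate rows
    keep = [i for i, row in enumerate(m)
            if not any(j != i and np.all(m[j] <= row) for j in range(len(m)))]
    m = m[keep]                                    # absorbed rows
    return np.unique(m, axis=1)                    # duplicate columns

example = np.array([
    [1, 0, 1, 1],
    [1, 0, 1, 1],   # duplicate row
    [1, 0, 0, 0],   # absorbs the rows above (its 1-entries are a subset)
    [0, 1, 1, 0],
])
print(simplify_binary_discernibility_matrix(example))
```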
Open Access Article Low-Complexity Synchronization Scheme with Low-Resolution ADCs
Information 2018, 9(12), 313; https://doi.org/10.3390/info9120313
Received: 27 October 2018 / Revised: 23 November 2018 / Accepted: 6 December 2018 / Published: 7 December 2018
PDF Full-text (542 KB)
Abstract
An important goal of next-generation (5G) and beyond mobile communication systems is to provide thousand-fold capacity growth and to support high-speed data transmission up to several megabits per second. However, the research community and industry face a dilemma between power consumption and hardware design in satisfying the increasing communication requirements. To improve system cost, power consumption, and implementation complexity, a novel scheme for symbol timing and frequency offset estimation with low-resolution analog-to-digital converters (ADCs), based on an orthogonal frequency division multiplexing ultra-wideband (OFDM-UWB) system, is proposed in this paper. In our work, we first verified that the autocorrelation of the pseudo-noise (PN) sequences is not affected by low-resolution quantization. With the help of this property, timing synchronization can be implemented robustly against the influence of low-resolution quantization. Then, the transmitted signal structure and the low-resolution quantization scheme under the synchronization scheme were designed. Finally, a frequency offset estimation model with one-bit timing synchronization was established. Theoretical analysis and simulation results corroborate that the performance of the proposed scheme not only approaches that of the full-resolution synchronization scheme, but also has lower power consumption and computational complexity. Full article
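The property the scheme relies on, that the correlation peak of a ±1 PN training sequence survives one-bit quantization of the received samples, can be checked with a small numerical sketch. The sequence length, timing offset, and noise level below are illustrative assumptions rather than the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# +/-1 pseudo-noise training sequence embedded in a longer received frame.
pn = rng.choice([-1.0, 1.0], size=127)
offset = 300                                   # true timing offset (samples)
frame = rng.normal(0, 1, 1000)
frame[offset:offset + pn.size] += 3.0 * pn     # received PN plus Gaussian noise

one_bit = np.sign(frame)                       # 1-bit ADC output

def correlate(rx, ref):
    """Sliding correlation of the reference PN against the received samples."""
    return np.array([np.dot(rx[i:i + ref.size], ref)
                     for i in range(rx.size - ref.size + 1)])

full_res_peak = int(np.argmax(correlate(frame, pn)))
one_bit_peak = int(np.argmax(correlate(one_bit, pn)))
print(full_res_peak, one_bit_peak)   # both should recover the offset of 300
```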
Open Access Article A Fuzzy EWMA Attribute Control Chart to Monitor Process Mean
Information 2018, 9(12), 312; https://doi.org/10.3390/info9120312
Received: 30 September 2018 / Revised: 18 November 2018 / Accepted: 4 December 2018 / Published: 7 December 2018
PDF Full-text (499 KB) | HTML Full-text | XML Full-text
Abstract
Conventional control charts are among the most important techniques in statistical process control; they are used to assess whether a process is in or out of control. As traditional control charts deal with crisp data, they are not suitable for studying unclear, vague, and fuzzy data. In many real-world applications, however, the data to be used in a control charting method are not crisp, since they are approximations affected by environmental uncertainties and systematic ambiguities involved in the systems under investigation. In these situations, fuzzy numbers and linguistic variables are used to capture such uncertainties. That is why the use of a fuzzy control chart, in which fuzzy data are used, is justified. As an exponentially weighted moving average (EWMA) scheme is usually used to detect small shifts, in this paper a fuzzy EWMA (F-EWMA) control chart is proposed to detect small shifts in the process mean when fuzzy data are available. The application of the newly developed fuzzy control chart is illustrated using real-life data. Full article
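The fuzzy chart operates on fuzzy data, but the underlying recursion is the standard crisp EWMA statistic with its time-varying control limits. A minimal sketch of that crisp chart is given below, with an assumed in-control mean, standard deviation, and smoothing constant; the F-EWMA chart of the paper replaces the crisp observations with fuzzy ones.

```python
import numpy as np

def ewma_chart(x, mu0, sigma, lam=0.2, L=3.0):
    """Crisp EWMA control chart: z_i = lam*x_i + (1-lam)*z_{i-1}, z_0 = mu0.

    Returns the EWMA statistic and the time-varying lower/upper control limits."""
    z = np.empty(len(x))
    prev = mu0
    for i, xi in enumerate(x):
        prev = lam * xi + (1 - lam) * prev
        z[i] = prev
    i = np.arange(1, len(x) + 1)
    width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
    return z, mu0 - width, mu0 + width

# Synthetic data with a small mean shift after the 30th observation.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(10.0, 1.0, 30), rng.normal(10.8, 1.0, 20)])
z, lcl, ucl = ewma_chart(data, mu0=10.0, sigma=1.0)
print("out-of-control samples:", np.flatnonzero((z < lcl) | (z > ucl)))
```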

Open Access Article Accident Prediction System Based on Hidden Markov Model for Vehicular Ad-Hoc Network in Urban Environments
Information 2018, 9(12), 311; https://doi.org/10.3390/info9120311
Received: 22 October 2018 / Revised: 24 November 2018 / Accepted: 5 December 2018 / Published: 7 December 2018
PDF Full-text (4590 KB) | HTML Full-text | XML Full-text
Abstract
With the emergence of autonomous vehicles and the internet of vehicles (IoV), future roads of smart cities will carry a mix of autonomous and automated vehicles alongside regular vehicles that require human operators. To ensure the safety of road commuters in such a network, it is imperative to enhance the performance of Advanced Driver Assistance Systems (ADAS). Real-time driving risk prediction is a fundamental part of an ADAS. Many driving risk prediction systems have been proposed; however, most of them are based only on the vehicle's velocity, whereas in most accident scenarios other factors are also involved, such as weather conditions or driver fatigue. In this paper, we propose an accident prediction system for vehicular ad hoc networks (VANETs) in urban environments, in which crash risk is treated as a latent variable that can be inferred from multiple observations such as velocity, weather conditions, risk location, nearby vehicle density, and driver fatigue. A Hidden Markov Model (HMM) is used to model the correlation between these observations and the latent variable. Simulation results show that the proposed system performs better in terms of sensitivity and precision than state-of-the-art single-factor schemes. Full article
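As a hedged sketch of the core idea, the snippet below filters a discrete HMM whose hidden state is crash risk and whose single discretised observation stands in for the fused factors listed above. All probabilities are invented for illustration; the paper's model fuses several observation streams.

```python
import numpy as np

# Hidden states: 0 = low crash risk, 1 = high crash risk.
# Observations (discretised, illustrative): 0 = normal driving, 1 = risky cue
# (e.g. high speed, bad weather, drowsy driver). All numbers are invented.
A = np.array([[0.9, 0.1],     # state transition matrix
              [0.3, 0.7]])
B = np.array([[0.8, 0.2],     # emission matrix: P(observation | state)
              [0.2, 0.8]])
pi = np.array([0.95, 0.05])   # initial state distribution

def forward_filter(obs):
    """Forward algorithm: P(state_t | obs_1..t) for each time step."""
    belief = pi * B[:, obs[0]]
    belief /= belief.sum()
    beliefs = [belief]
    for o in obs[1:]:
        belief = (A.T @ belief) * B[:, o]
        belief /= belief.sum()
        beliefs.append(belief)
    return np.array(beliefs)

observations = [0, 0, 1, 1, 1]          # risky cues start at t = 2
risk = forward_filter(observations)[:, 1]
print(np.round(risk, 3))                # posterior probability of high risk
```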
(This article belongs to the Special Issue Vehicular Networks and Applications)

Open Access Article Integration of Web APIs and Linked Data Using SPARQL Micro-Services—Application to Biodiversity Use Cases
Information 2018, 9(12), 310; https://doi.org/10.3390/info9120310
Received: 9 November 2018 / Revised: 3 December 2018 / Accepted: 3 December 2018 / Published: 6 December 2018
Viewed by 73 | PDF Full-text (1542 KB)
Abstract
In recent years, Web APIs have become a de facto standard for exchanging machine-readable data on the Web. Despite this success, however, they often fail to make resource descriptions interoperable because they rely on proprietary vocabularies that lack formal semantics. The Linked Data principles similarly seek the massive publication of data on the Web, yet with the specific goal of ensuring semantic interoperability. Given their complementary goals, it is commonly admitted that cross-fertilization could stem from the automatic combination of Linked Data and Web APIs. Towards this goal, in this paper we leverage micro-service architectural principles to define a SPARQL Micro-Service architecture, aimed at querying Web APIs using SPARQL. A SPARQL micro-service is a lightweight SPARQL endpoint that provides access to a small, resource-centric, virtual graph. In this context, we argue that full SPARQL Query expressiveness can be supported efficiently without jeopardizing server availability. Furthermore, we demonstrate how this architecture can be used to dynamically assign dereferenceable URIs to Web API resources that do not have URIs beforehand, thus literally “bringing” Web APIs into the Web of Data. We believe that the emergence of an ecosystem of SPARQL micro-services published by independent providers would enable Linked Data-based applications to easily glean pieces of data from a wealth of distributed, scalable, and reliable services. We describe a working prototype implementation, and we illustrate the use of SPARQL micro-services in the context of two real-life use cases related to the biodiversity domain, developed in collaboration with the French National Museum of Natural History. Full article
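Since a SPARQL micro-service behaves like an ordinary SPARQL endpoint, it can be queried with the standard SPARQL protocol over HTTP. The sketch below uses a hypothetical endpoint URL and query; the article documents the actual services and graphs.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical SPARQL micro-service endpoint wrapping a Web API (the real
# endpoint URLs are given in the article; this one is only a placeholder).
ENDPOINT = "https://example.org/sparql-ms/flickr/getPhotosByTaxonName"

QUERY = """
PREFIX schema: <http://schema.org/>
SELECT ?photo ?img WHERE {
  ?photo schema:image ?img .
} LIMIT 10
"""

def query_microservice(endpoint: str, query: str) -> dict:
    """Send a SPARQL query using the standard SPARQL protocol (HTTP GET)."""
    url = endpoint + "?" + urllib.parse.urlencode({"query": query})
    req = urllib.request.Request(
        url, headers={"Accept": "application/sparql-results+json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Commented out because the endpoint above is fictitious:
# results = query_microservice(ENDPOINT, QUERY)
# for binding in results["results"]["bindings"]:
#     print(binding["img"]["value"])
```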
(This article belongs to the Special Issue Semantics for Big Data Integration)
Open Access Article Pareidolic and Uncomplex Technological Singularity
Information 2018, 9(12), 309; https://doi.org/10.3390/info9120309
Received: 25 October 2018 / Revised: 30 November 2018 / Accepted: 3 December 2018 / Published: 6 December 2018
Viewed by 71 | PDF Full-text (337 KB) | HTML Full-text | XML Full-text
Abstract
“Technological Singularity” (TS), “Accelerated Change” (AC), and Artificial General Intelligence (AGI) are frequent themes in future/foresight studies. Rejecting the reductionist perspective on the evolution of science and technology, and based on patternicity (“the tendency to find patterns in meaningless noise”), a discussion about the perverse power of apophenia (“the tendency to perceive a connection or meaningful pattern between unrelated or random things (such as objects or ideas)”) and pareidolia (“the tendency to perceive a specific, often meaningful image in a random or ambiguous visual pattern”) in those studies is the starting point for two claims: “accelerated change” is a future-related case of apophenia, whereas AGI (and TS) are future-related cases of pareidolia. A short presentation of research-focused social networks working to solve complex problems reveals the superiority of human networked minds over hardware–software systems and suggests the opportunity for a network-based study of TS (and AGI) from a complexity perspective. Such a study could compensate for the weaknesses of approaches deployed from a linear and predictable perspective, in the attempt to redesign our intelligent artifacts. Full article
(This article belongs to the Special Issue AI AND THE SINGULARITY: A FALLACY OR A GREAT OPPORTUNITY?)

Open Access Article An Image Enhancement Method Based on Non-Subsampled Shearlet Transform and Directional Information Measurement
Information 2018, 9(12), 308; https://doi.org/10.3390/info9120308
Received: 16 October 2018 / Revised: 29 November 2018 / Accepted: 3 December 2018 / Published: 6 December 2018
Viewed by 73 | PDF Full-text (4684 KB) | HTML Full-text | XML Full-text
Abstract
Based on the advantages of the non-subsampled shearlet transform (NSST) in image processing and the characteristics of remote sensing imagery, NSST was applied to enhance blurred images. In the NSST transform domain, directional information measurement can highlight the textural features of image edges and reduce image noise; it was therefore applied to the detailed enhancement of the high-frequency sub-band coefficients. Based on the characteristics of the low-frequency image, the retinex method was used to enhance it. Then, an inverse NSST transformation was performed on the enhanced low- and high-frequency coefficients to obtain the enhanced image. Computer simulation experiments showed that, compared with a traditional image enhancement strategy, the method proposed in this paper can enrich image details and improve the visual effect. Compared with the other algorithms listed in this paper, the brightness, contrast, edge strength, and information entropy of the image enhanced by this method are improved. In addition, in experiments on noisy images, various objective evaluation indices show that the proposed method enhances the image while introducing the least noise, which further indicates that the method can suppress noise while improving image quality and has a certain level of effectiveness and practicability. Full article
(This article belongs to the Section Information Processes)

Open Access Article Improving the Accuracy in Sentiment Classification in the Light of Modelling the Latent Semantic Relations
Information 2018, 9(12), 307; https://doi.org/10.3390/info9120307
Received: 15 October 2018 / Revised: 19 November 2018 / Accepted: 28 November 2018 / Published: 4 December 2018
Viewed by 152 | PDF Full-text (908 KB) | HTML Full-text | XML Full-text
Abstract
This research presents a methodology for improving the accuracy of sentiment classification by modelling latent semantic relations (LSR). The objective of this methodology is to eliminate the limitations of the discriminant and probabilistic methods of LSR revealing and to tailor the sentiment classification process (SCP) to more accurate recognition of text tonality. This objective was achieved by enabling the joint usage of the following methods: (1) retrieval and recognition of the hierarchical semantic structure of the text, and (2) development of a hierarchical, contextually oriented sentiment dictionary in order to perform context-sensitive SCP. The main scientific contribution of this research is the set of the following approaches: at the LSR revealing phase, (1) the combination of the discriminant and probabilistic models with adjustment rules to obtain the final joint result; at all SCP phases, (2) treating a document as a complex structure of topically complete textual components (paragraphs), and (3) taking into account the features of the persuasive document type. The experimental results demonstrate the enhancement of SCP accuracy, namely a significant increase in the average values of the recall and precision indicators and a guarantee of a sufficient accuracy level. Full article
(This article belongs to the Special Issue Knowledge Engineering and Semantic Web)

Open Access Article Social Customer Relationship Management and Organizational Characteristics
Information 2018, 9(12), 306; https://doi.org/10.3390/info9120306
Received: 2 November 2018 / Revised: 23 November 2018 / Accepted: 29 November 2018 / Published: 2 December 2018
Viewed by 168 | PDF Full-text (702 KB) | HTML Full-text | XML Full-text
Abstract
Social customer relationship management (SCRM) is a new philosophy influencing the relationship between customer and organization, in which the customer gets the opportunity to control the relationship through social media. This paper aims to identify (a) the current level of SCRM and (b) the influence of basic organizational characteristics on that level. The data were gathered through a questionnaire distributed to 362 organizations headquartered in the Czech Republic. The questionnaire comprised 54 questions focusing on the significance of marketing and CRM practices, establishing a relationship with the customer, online communities, the use of social media in marketing, and acquiring and managing information. Scaled questions with a typical five-level Likert scale were used in the questionnaire. The results show that larger firms more often set up their own online communities and manage them strategically; moreover, they are able to manage information better. Conversely, small organizations use social networks as a way to establish communication with the customer more than large entities do. The use of social media for marketing purposes is significantly higher in organizations oriented to consumer markets than in those oriented to business markets. Full article

Open Access Editorial Dark-Web Cyber Threat Intelligence: From Data to Intelligence to Prediction
Information 2018, 9(12), 305; https://doi.org/10.3390/info9120305
Received: 29 November 2018 / Accepted: 29 November 2018 / Published: 1 December 2018
Viewed by 143 | PDF Full-text (132 KB) | HTML Full-text | XML Full-text
Abstract
Scientific work that leverages information about communities on the deep and dark web has opened up new angles in the field of security informatics. [...] Full article
(This article belongs to the Special Issue Darkweb Cyber Threat Intelligence Mining)
Open Access Article Towards the Representation of Etymological Data on the Semantic Web
Information 2018, 9(12), 304; https://doi.org/10.3390/info9120304
Received: 15 September 2018 / Revised: 31 October 2018 / Accepted: 12 November 2018 / Published: 30 November 2018
Viewed by 183 | PDF Full-text (1880 KB) | HTML Full-text | XML Full-text
Abstract
In this article, we look at the potential for wide-coverage modelling of etymological information as linked data using the Resource Description Framework (RDF) data model. We begin with a discussion of some of the most typical features of etymological data and the challenges that these might pose to an RDF-based modelling. We then propose a new vocabulary for representing etymological data, the Ontolex-lemon Etymological Extension (lemonETY), based on the ontolex-lemon model. Each of the main elements of our new model is motivated with reference to the preceding discussion. Full article
(This article belongs to the Special Issue Towards the Multilingual Web of Data)

Open Access Article Evaluating User Behaviour in a Cooperative Environment
Information 2018, 9(12), 303; https://doi.org/10.3390/info9120303
Received: 15 October 2018 / Revised: 25 November 2018 / Accepted: 27 November 2018 / Published: 30 November 2018
Viewed by 134 | PDF Full-text (597 KB) | HTML Full-text | XML Full-text
Abstract
Big Data, as a new paradigm, has forced both researchers and industries to rethink data management techniques, which have become inadequate in many contexts. Indeed, we deal every day with huge amounts of collected data about user suggestions and searches. These data require new, advanced analysis strategies in order to be profitably leveraged. Moreover, due to the heterogeneous and fast-changing nature of these data, we need new data storage and management tools to store them effectively. In this paper, we analyze the effect of user searches and suggestions and try to understand how much they influence a user's social environment. This task is crucial for efficiently identifying the users that are able to spread their influence across the network. Gathering information about user preferences is a key activity in several scenarios, such as tourism promotion, personalized marketing, and entertainment suggestions. We show the application of our approach in a large research project named D-ALL, which stands for Data Alliance. In particular, we assessed the reaction of users in a competitive environment when they were invited to judge each other. Our results show that users tend to conform to each other when no tangible rewards are provided, whereas they try to reduce other users' ratings when doing so affects getting a tangible prize. Full article
(This article belongs to the Special Issue Advanced Learning Methods for Complex Data)

Open Access Article Ontology-Based Representation for Accessible OpenCourseWare Systems
Information 2018, 9(12), 302; https://doi.org/10.3390/info9120302
Received: 29 October 2018 / Revised: 23 November 2018 / Accepted: 26 November 2018 / Published: 29 November 2018
Viewed by 116 | PDF Full-text (690 KB) | HTML Full-text | XML Full-text
Abstract
OpenCourseWare (OCW) systems have been established to provide open educational resources that are accessible by anyone, including learners with special accessibility needs and preferences. We need a formal and interoperable way to describe these preferences in order to use them in OCW systems and retrieve relevant educational resources. This formal representation should use standard accessibility definitions for OCW that can be reused by other OCW systems to represent accessibility concepts. In this article, we present an ontology to represent the accessibility needs of learners with respect to the IMS AfA specifications. The ontology definitions, together with rule-based queries, are used to retrieve relevant educational resources. Related to this, we developed a user interface component that enables users to create accessibility profiles representing their individual needs and preferences based on our ontology. We evaluated the approach with five example profiles. Full article
(This article belongs to the Special Issue Knowledge Engineering and Semantic Web)

Open Access Article An Inter-Frame Forgery Detection Algorithm for Surveillance Video
Information 2018, 9(12), 301; https://doi.org/10.3390/info9120301
Received: 16 August 2018 / Revised: 8 November 2018 / Accepted: 20 November 2018 / Published: 28 November 2018
Viewed by 120 | PDF Full-text (3792 KB) | HTML Full-text | XML Full-text
Abstract
Surveillance systems are ubiquitous in our lives, and surveillance videos are often used as significant evidence for judicial forensics. However, the authenticity of surveillance videos is difficult to guarantee, and ascertaining it is an urgent problem. Inter-frame forgery is one of the most common forms of video tampering. Such forgery reduces the correlation between adjacent frames at the tampering position, so this correlation can be used to detect tampering. The proposed algorithm consists of feature extraction and abnormal-point localization. During feature extraction, we extract the 2-D phase congruency of each frame, since it is a good image characteristic, and then calculate the correlation between adjacent frames. In the second phase, abnormal points are detected using the k-means clustering algorithm, which clusters the normal and abnormal points into two categories. Experimental results demonstrate that the scheme has high detection and localization accuracy. Full article
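The correlation-drop-plus-clustering idea can be sketched on synthetic data as follows. For brevity the sketch correlates raw pixel intensities rather than the 2-D phase congruency features used in the paper, and it assumes scikit-learn is available for k-means.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic "surveillance video": a bright blob moves one pixel per frame.
# Frames 20-24 are then deleted to simulate an inter-frame deletion forgery.
yy, xx = np.mgrid[0:64, 0:64]
frames = [np.exp(-((xx - (10 + t)) ** 2 + (yy - 32) ** 2) / (2 * 4.0 ** 2))
          for t in range(40)]
forged = frames[:20] + frames[25:]

# Correlation between each pair of adjacent frames; a deletion shows up as a dip.
flat = [f.ravel() for f in forged]
corr = np.array([np.corrcoef(flat[i], flat[i + 1])[0, 1]
                 for i in range(len(flat) - 1)])

# Cluster the correlation values into two groups; the smaller ("abnormal")
# cluster localises the candidate tampering position.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(corr.reshape(-1, 1))
abnormal = np.flatnonzero(labels == np.argmin(np.bincount(labels)))
print("suspected tampering after frame index:", abnormal)   # expected: [19]
```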

Open Access Article Multiple Criteria Decision-Making in Heterogeneous Groups of Management Experts
Information 2018, 9(12), 300; https://doi.org/10.3390/info9120300
Received: 31 October 2018 / Revised: 19 November 2018 / Accepted: 21 November 2018 / Published: 27 November 2018
Viewed by 160 | PDF Full-text (832 KB) | HTML Full-text | XML Full-text
Abstract
In the operations of commercial organizations, dynamic events frequently occur that involve operational, managerial, and valuable information aspects. To make a sound decision, the business professional can be supported by a Multiple Criteria Decision-Making (MCDM) system, whether for an external course of action, such as forecasting a new market or product, or for an internal decision, such as the volume of manufacture. Thus, managers need to collectively analyze the actual problems, evaluate various options according to diverse criteria, and finally choose the best solution from a set of alternatives. Throughout these processes, uncertainty and hesitancy easily arise when it comes to defining and judging criteria or alternatives. Several approaches have been introduced to allow decision makers (DMs) to deal with them. The Interval Multiplicative Preference Relations (IMPRs) approach is a useful technique and the basis of our proposed methodology for providing reliable, consistent, and consensual IMPRs. In this manner, DMs' choices implicitly include their uncertainty while maintaining both acceptable individual consistency and group consensus levels. The present method is based on some recent results and an optimization algorithm to derive reliable, consistent, and consensual IMPRs. In order to illustrate our results and compare them with other methodologies, a few examples are addressed and solved. Full article

Open Access Article Tri-SIFT: A Triangulation-Based Detection and Matching Algorithm for Fish-Eye Images
Information 2018, 9(12), 299; https://doi.org/10.3390/info9120299
Received: 6 November 2018 / Revised: 17 November 2018 / Accepted: 21 November 2018 / Published: 26 November 2018
Viewed by 135 | PDF Full-text (3504 KB) | HTML Full-text | XML Full-text
Abstract
Keypoint matching is of fundamental importance in computer vision applications. Fish-eye lenses are convenient in applications that require a very wide angle of view, but their use has been limited by the lack of an effective matching algorithm. The Scale Invariant Feature Transform (SIFT) algorithm is an important technique in computer vision for detecting and describing local features in images. We therefore present Tri-SIFT, a set of modifications to the SIFT algorithm that improves descriptor accuracy and matching performance for fish-eye images, while preserving the original robustness to scale and rotation. After the keypoint detection of the SIFT algorithm is completed, the points in and around the keypoints are back-projected onto a unit sphere following a fish-eye camera model. To simplify the calculations on the sphere, the descriptor is based on a modification of the Gradient Location and Orientation Histogram (GLOH). In addition, to improve invariance to scale and rotation in fish-eye images, the gradient magnitudes are replaced by the area of the surface, and the orientation is calculated on the sphere. Extensive experiments demonstrate that the performance of our modified algorithm surpasses that of SIFT and other related algorithms on fish-eye images. Full article
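The back-projection step can be illustrated under an assumed equidistant fish-eye model (r = f·θ); the paper's camera model may differ in detail, and the intrinsic parameters below are invented.

```python
import numpy as np

def backproject_to_sphere(u, v, cx, cy, f):
    """Back-project pixel (u, v) of a fish-eye image onto the unit sphere,
    assuming an equidistant projection model r = f * theta."""
    dx, dy = u - cx, v - cy
    r = np.hypot(dx, dy)                 # radial distance from principal point
    theta = r / f                        # angle from the optical axis
    phi = np.arctan2(dy, dx)             # azimuth around the optical axis
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

# Example: principal point (640, 480), focal length 320 px (illustrative values).
p = backproject_to_sphere(900.0, 700.0, cx=640.0, cy=480.0, f=320.0)
print(p, np.linalg.norm(p))              # unit-length direction vector
```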

Open Access Article Evaluating Evidence Reliability on the Basis of Intuitionistic Fuzzy Sets
Information 2018, 9(12), 298; https://doi.org/10.3390/info9120298
Received: 12 October 2018 / Revised: 20 November 2018 / Accepted: 21 November 2018 / Published: 25 November 2018
Viewed by 170 | PDF Full-text (249 KB) | HTML Full-text | XML Full-text
Abstract
The evaluation of evidence reliability is still an open topic when prior knowledge is unavailable. In this paper, we propose a new method for evaluating evidence reliability in the framework of intuitionistic fuzzy sets. The reliability of evidence is evaluated based on the supporting degree between basic probability assignments (BPAs). The BPAs are first transformed into intuitionistic fuzzy sets (IFSs). From the similarity degree between the IFSs, we obtain the supporting degree between the BPAs, so the reliability of evidence can be evaluated based on its connection with the supporting degree. Based on the new evidence reliability, we developed a new method for combining evidence sources with different degrees of reliability. A comparison with other methods is carried out to illustrate the effectiveness of the new method. Full article
(This article belongs to the Section Information Processes)
Open Access Article Semantic Modelling and Publishing of Traditional Data Collection Questionnaires and Answers
Information 2018, 9(12), 297; https://doi.org/10.3390/info9120297
Received: 15 September 2018 / Revised: 25 October 2018 / Accepted: 21 November 2018 / Published: 24 November 2018
Viewed by 249 | PDF Full-text (2248 KB) | HTML Full-text | XML Full-text
Abstract
Extensive collections of data of linguistic, historical and socio-cultural importance are stored in libraries, museums and national archives, with enormous potential to support research. However, a sizable portion of the data remains underutilised because of a lack of the knowledge required to model the data semantically and convert it into a format suitable for the semantic web. Although many institutions have produced digital versions of their collections, semantic enrichment, interlinking and exploration are still missing from the digitised versions. In this paper, we present a model that provides structure and semantics for a non-standard linguistic and historical data collection, using the example of the collection on the Bavarian dialects in Austria held at the Austrian Academy of Sciences. We followed a semantic modelling approach that utilises the knowledge of domain experts and the corresponding schema produced during the data collection process. The model is used to enrich, interlink and publish the collection semantically. The dataset includes questionnaires and answers as well as supplementary information about the circumstances of the data collection (person, location, time, etc.). The semantic uplift is demonstrated by converting a subset of the collection to a Linked Open Data (LOD) format, where domain experts evaluated the model and the resulting dataset for its support of user queries. Full article
(This article belongs to the Special Issue Towards the Multilingual Web of Data)

Open Access Article An Efficient Robust Multiple Watermarking Algorithm for Vector Geographic Data
Information 2018, 9(12), 296; https://doi.org/10.3390/info9120296
Received: 31 October 2018 / Revised: 16 November 2018 / Accepted: 22 November 2018 / Published: 24 November 2018
Viewed by 200 | PDF Full-text (10174 KB) | HTML Full-text | XML Full-text
Abstract
Vector geographic data play an important role in location information services. Digital watermarking has been widely used to protect vector geographic data from unauthorized duplication and to support digital forensics. Because the production and application of vector geographic data involve many units and departments, the demand for multiple watermarking technology is increasing. However, multiple watermarking algorithms for vector geographic data have drawn little attention, and many urgent problems remain to be solved. Therefore, an efficient robust multiple watermarking algorithm for vector geographic data is proposed in this paper. The coordinates in the vector geographic data are first randomly divided into non-repetitive sets, and the multiple watermarks are then embedded into the different sets. For watermark detection, the Lindeberg theory is used to build a correlation-based detection model and to determine the detection threshold. Finally, experiments are conducted to demonstrate the detection algorithm and to test its robustness against common attacks, especially cropping attacks. The experimental results show that the proposed algorithm is robust against the deletion of vertices, the addition of vertices, compression, and cropping attacks. Moreover, the proposed detection algorithm is compatible with single-watermarking detection algorithms, and it has good performance in terms of detection efficiency. Full article
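The structural idea, partitioning vertices into disjoint sets and embedding one watermark per set, can be sketched as follows. The toy embedding and the non-blind correlation detector below are illustrative only; the paper's embedding scheme and its Lindeberg-based detection model are more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vector geographic data: 300 (x, y) vertices in metres.
coords = rng.uniform(0, 1000, size=(300, 2))

# Randomly split the vertex indices into disjoint ("non-repetitive") sets,
# one per watermark owner.
n_watermarks = 3
indices = rng.permutation(len(coords))
sets = np.array_split(indices, n_watermarks)

# Each watermark is a +/-1 bit sequence embedded as a tiny x-perturbation far
# below the data's accuracy tolerance (illustrative embedding only).
strength = 0.01
watermarks = [rng.choice([-1.0, 1.0], size=len(s)) for s in sets]
marked = coords.copy()
for s, w in zip(sets, watermarks):
    marked[s, 0] += strength * w

def detect(marked_data, original_data, index_set, watermark):
    """Non-blind correlation detector: correlate the embedded perturbation
    with a candidate watermark; a high correlation means "present"."""
    diff = marked_data[index_set, 0] - original_data[index_set, 0]
    return float(np.corrcoef(diff, watermark)[0, 1])

for k in range(n_watermarks):
    print(f"watermark {k}: correlation = "
          f"{detect(marked, coords, sets[k], watermarks[k]):.2f}")
```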
(This article belongs to the Special Issue The Security and Digital Forensics of Cloud Computing)
