
Table of Contents

Information, Volume 9, Issue 12 (December 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click its "PDF Full-text" link and open it with the free Adobe Reader.
Open Access Article Artificial Intelligence and the Limitations of Information
Information 2018, 9(12), 332; https://doi.org/10.3390/info9120332
Received: 17 November 2018 / Revised: 7 December 2018 / Accepted: 18 December 2018 / Published: 19 December 2018
Viewed by 737 | PDF Full-text (750 KB) | HTML Full-text | XML Full-text
Abstract
Artificial intelligence (AI) and machine learning promise to make major changes to the relationship of people and organizations with technology and information. However, as with any form of information processing, they are subject to the limitations of information linked to the way in which information evolves in information ecosystems. These limitations are caused by the combinatorial challenges associated with information processing, and by the tradeoffs driven by selection pressures. Analysis of the limitations explains some current difficulties with AI and machine learning and identifies the principles required to resolve the limitations when implementing AI and machine learning in organizations. Applying the same type of analysis to artificial general intelligence (AGI) highlights some key theoretical difficulties and gives some indications about the challenges of resolving them.
(This article belongs to the Special Issue AI AND THE SINGULARITY: A FALLACY OR A GREAT OPPORTUNITY?)
Open Access Editorial Editorial for the Special Issue on “ROBOETHICS”
Information 2018, 9(12), 331; https://doi.org/10.3390/info9120331
Received: 18 December 2018 / Accepted: 18 December 2018 / Published: 19 December 2018
Viewed by 510 | PDF Full-text (183 KB) | HTML Full-text | XML Full-text
Abstract
Ethical and social issues of robotics have attracted increasing attention from the scientific and technical community over the years. These issues arise particularly in mental and sensitive robotic applications, such as robot-based rehabilitation, social robot (sociorobot) applications, and military robot applications. The purpose of launching this Special Issue was to publish high-quality papers addressing timely and important aspects of roboethics, and to serve as a dissemination source of novel ideas demonstrating the necessity of roboethics. The papers included in the Special Issue deal with fundamental aspects and address interesting, deep questions in the roboethics and robophilosophy field.
(This article belongs to the Special Issue ROBOETHICS)
Open Access Article A Fuzzy Evaluation Model for Sustainable Modular Supplier
Information 2018, 9(12), 330; https://doi.org/10.3390/info9120330
Received: 18 October 2018 / Revised: 5 December 2018 / Accepted: 14 December 2018 / Published: 18 December 2018
Viewed by 505 | PDF Full-text (481 KB) | HTML Full-text | XML Full-text
Abstract
The evaluation and selection of a sustainable modular supplier is a strategic decision toward sustainable manufacturing. However, few related studies have been conducted, particularly in the field of modular production. In this paper, a fuzzy evaluation method is used to evaluate sustainable modular suppliers. Unlike previous studies, this study introduces an organizational identity perspective in designing the index system of the fuzzy evaluation method. Moreover, an empirical study based on a formative model is conducted to design the index system. Both measures ensure the appropriateness of the index system. The stability of the fuzzy evaluation method is also discussed. By introducing a dispersion degree and discussing different circumstances of subjective judgment errors, the stability analysis helps us to better understand the reliability of the results. Lastly, this study applies the method and the index system to a practical case. The results show that the fuzzy evaluation method is effective and can be used to evaluate sustainable modular suppliers.
(This article belongs to the Section Information Applications)
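A minimal sketch of fuzzy comprehensive evaluation, the general technique named in the abstract (the criteria, grades, and numbers below are hypothetical, not the paper's index system): criterion weights are combined with a membership matrix, and the supplier is assigned the grade with the highest aggregate membership.

```python
def fuzzy_evaluate(weights, membership, grades):
    """weights: one weight per criterion (summing to 1).
    membership: rows = criteria, columns = membership degree in each grade.
    Returns (best grade, aggregate membership vector)."""
    n_grades = len(grades)
    b = [sum(w * row[j] for w, row in zip(weights, membership))
         for j in range(n_grades)]
    best = max(range(n_grades), key=lambda j: b[j])
    return grades[best], b

# Hypothetical example: three criteria rated against three grades.
grades = ["poor", "acceptable", "good"]
weights = [0.5, 0.3, 0.2]
membership = [
    [0.1, 0.3, 0.6],   # cost
    [0.2, 0.5, 0.3],   # environmental performance
    [0.0, 0.4, 0.6],   # organizational identity fit
]
grade, b = fuzzy_evaluate(weights, membership, grades)
```

The paper's stability analysis would then perturb the membership degrees and check whether the selected grade changes.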
Open Access Article Task Staggering Peak Scheduling Policy for Cloud Mixed Workloads
Information 2018, 9(12), 329; https://doi.org/10.3390/info9120329
Received: 7 November 2018 / Revised: 13 December 2018 / Accepted: 16 December 2018 / Published: 18 December 2018
Viewed by 541 | PDF Full-text (1585 KB) | HTML Full-text | XML Full-text
Abstract
To address cloud mixed-workload scheduling, which can lead to system load imbalance and efficiency degradation in cloud computing, a novel cloud task staggering peak scheduling policy based on task types and resource load status is proposed. First, based on their characteristics, the task sequences submitted by users are divided into queues of different types by a fuzzy clustering algorithm. Second, a Performance Monitoring Counters (PMC) mechanism is introduced to dynamically monitor the load status of resource nodes and to sort the resources by Central Processing Unit (CPU), memory, and input/output (I/O) load, so as to narrow the candidate resources. Finally, task sequences of a specific type are scheduled onto the corresponding lightly loaded resources, and resource usage peaks are staggered to achieve load balancing. The experimental results show that the proposed policy can balance loads, improve system efficiency effectively, and reduce resource usage cost when the system handles mixed workloads.
(This article belongs to the Section Information Systems)
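A simplified sketch of the staggering idea, under assumptions of my own (the paper's policy also involves fuzzy clustering and PMC monitoring, which are not reproduced here): each task type is dispatched to the node that is least loaded on the resource dimension the task stresses most.

```python
def schedule(tasks, nodes):
    """tasks: list of (task_id, kind) with kind in {'cpu', 'mem', 'io'}.
    nodes: {node_id: {'cpu': load, 'mem': load, 'io': load}}, loads in [0, 1].
    Returns {task_id: node_id}, updating loads as tasks are placed."""
    placement = {}
    for task_id, kind in tasks:
        # Pick the node with the lowest load on the dimension this task stresses.
        node_id = min(nodes, key=lambda n: nodes[n][kind])
        placement[task_id] = node_id
        nodes[node_id][kind] += 0.1  # rough cost of hosting one more task
    return placement

nodes = {"n1": {"cpu": 0.8, "mem": 0.2, "io": 0.5},
         "n2": {"cpu": 0.3, "mem": 0.7, "io": 0.4}}
tasks = [("t1", "cpu"), ("t2", "mem"), ("t3", "io")]
placement = schedule(tasks, nodes)
```

Because the three task types land on different nodes' spare dimensions, their usage peaks are staggered rather than stacked on one machine.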
Open Access Article Smart Antenna for Application in UAVs
Information 2018, 9(12), 328; https://doi.org/10.3390/info9120328
Received: 16 October 2018 / Revised: 4 December 2018 / Accepted: 5 December 2018 / Published: 18 December 2018
Viewed by 540 | PDF Full-text (7586 KB) | HTML Full-text | XML Full-text
Abstract
In the present paper, a smart planar electrically steerable passive array radiator (ESPAR) antenna was developed and tested at a frequency of 1.33 GHz, with the main goal of controlling the main radiation lobe direction to ensure precise communication between the antenna implemented in an unmanned aerial vehicle (UAV) and the base station. A control system was also developed and integrated into the communication system: an antenna coupled to the control system. The control system consists of an Arduino, a digital potentiometer, and an improved algorithm that defines the radiation-lobe direction as a function of the UAV's flight needs. The ESPAR antenna was tested in an anechoic chamber with the control system coupled to it, so that all previously established requirements were validated.
Open Access Article Multi-Valued Neutrosophic Distance-Based QUALIFLEX Method for Treatment Selection
Information 2018, 9(12), 327; https://doi.org/10.3390/info9120327
Received: 20 November 2018 / Revised: 10 December 2018 / Accepted: 11 December 2018 / Published: 17 December 2018
Viewed by 515 | PDF Full-text (707 KB) | HTML Full-text | XML Full-text
Abstract
Multi-valued neutrosophic sets (MVNSs) consider truth-membership, indeterminacy-membership, and falsity-membership simultaneously, which allows the preference information of decision-makers to be expressed more accurately. In this paper, the normalized multi-valued neutrosophic distance measure is first developed and its properties are investigated. Second, the normalized multi-valued neutrosophic distance difference is defined and the corresponding partial ordering relation is discussed. Third, based on the developed distances and comparison method, an extended multi-valued neutrosophic QUALItative FLEXible multiple criteria (QUALIFLEX) method is proposed to handle multiple criteria decision-making (MCDM) problems in which the weights of the criteria are completely unknown. Finally, an example concerning the selection of a medical diagnostic plan is provided to demonstrate the proposed method, together with a sensitivity analysis and a comparative analysis.
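One plausible shape for a normalized distance between two multi-valued neutrosophic elements, written as an assumption for illustration (the paper's exact definition may differ): each element holds sets of truth, indeterminacy, and falsity degrees; the sorted sets are compared value by value, assuming equal cardinality, and the absolute differences are averaged.

```python
def mvn_distance(a, b):
    """a, b: dicts with keys 't', 'i', 'f' mapping to equal-length
    lists of degrees in [0, 1]. Returns a distance in [0, 1]."""
    total, count = 0.0, 0
    for key in ("t", "i", "f"):
        for x, y in zip(sorted(a[key]), sorted(b[key])):
            total += abs(x - y)
            count += 1
    return total / count

a = {"t": [0.6, 0.8], "i": [0.2], "f": [0.1, 0.3]}
b = {"t": [0.5, 0.7], "i": [0.4], "f": [0.2, 0.2]}
d = mvn_distance(a, b)
```

In a QUALIFLEX-style procedure, such distances to an ideal element would rank alternatives under every candidate permutation of the ranking order.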
Open Access Article Mobile Phone Recommender System Using Information Retrieval Technology by Integrating Fuzzy OWA and Gray Relational Analysis
Information 2018, 9(12), 326; https://doi.org/10.3390/info9120326
Received: 20 November 2018 / Revised: 2 December 2018 / Accepted: 12 December 2018 / Published: 14 December 2018
Viewed by 604 | PDF Full-text (409 KB) | HTML Full-text | XML Full-text
Abstract
With the advancement and diversification of information retrieval technology, such technology has been widely applied in recent years in personalized information recommender systems (RSs) and e-commerce RSs, in addition to data-mining applications, especially with respect to mobile phone purchases. By integrating fuzzy ordered weighted averaging (OWA) weights and gray relational analysis, this research calculated the recommended F1 indices of three weight calculation methods to be 20.5%, 14.36%, and 16.43% in an evaluation by 30 experimenters. According to the experimenters' operational results, the products recommended by the fuzzy OWA and gray relational analysis method covered those recommended by the other two weight calculation methods and achieved a higher recommendation effect.
(This article belongs to the Special Issue Modern Recommender Systems: Approaches, Challenges and Applications)
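A minimal gray relational analysis (GRA) sketch in its standard textbook formulation, not necessarily the paper's exact variant: each candidate phone is scored by its gray relational grade against an ideal reference profile, and higher grades are recommended first.

```python
def gray_relational_grades(reference, candidates, rho=0.5):
    """reference: ideal attribute vector, already normalized to [0, 1].
    candidates: list of normalized attribute vectors of the same length.
    rho: distinguishing coefficient, conventionally 0.5."""
    deltas = [[abs(r - x) for r, x in zip(reference, c)] for c in candidates]
    flat = [d for row in deltas for d in row]
    dmin, dmax = min(flat), max(flat)
    grades = []
    for row in deltas:
        # Gray relational coefficient per attribute, then averaged.
        coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

reference = [1.0, 1.0, 1.0]          # hypothetical ideal phone, three criteria
candidates = [[0.9, 0.8, 1.0],       # phone A
              [0.5, 0.6, 0.7]]       # phone B
grades = gray_relational_grades(reference, candidates)
```

Phone A, being closer to the ideal on every criterion, receives the higher grade.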
Open Access Article Exploring How Homophily and Accessibility Can Facilitate Polarization in Social Networks
Information 2018, 9(12), 325; https://doi.org/10.3390/info9120325
Received: 30 October 2018 / Revised: 1 December 2018 / Accepted: 11 December 2018 / Published: 14 December 2018
Viewed by 588 | PDF Full-text (1465 KB) | HTML Full-text | XML Full-text
Abstract
Polarization in online social networks has gathered a significant amount of attention in the research community and in the public sphere due to stark disagreements, with millions of participants, on topics surrounding politics, climate, the economy, and other areas where agreement is required. This work investigates in greater depth a type of model that can produce ideological segregation as a result of polarization, depending on the strength of homophily and the ability of users to access like-minded individuals. Whether increased access can induce greater societal separation is important to investigate, and this work sheds further light on the phenomenon. Central to the hypothesis of homophilic alignments in friendship generation is the notion of a discussion group or community. These groups are modeled, and an investigation into their effect on the dynamics of polarization is presented. The social implications are that the initial phases of an ideological exchange can increase polarization, although a consensus is expected in the long run, and that the separation between groups is amplified when groups are constructed with homophilic ideological preferences.
(This article belongs to the Special Issue Information Diffusion in Social Networks)
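A toy bounded-confidence opinion model in the spirit of the abstract, not the article's actual model: agents interact only with sufficiently like-minded agents, and stronger homophily (a smaller interaction threshold) preserves the gap between two ideological camps instead of letting them converge.

```python
import random

def run(opinions, threshold, steps=2000, rate=0.1, seed=0):
    """Random pairs interact only when their opinions differ by less than
    `threshold`; interacting agents move toward each other by `rate`."""
    rng = random.Random(seed)
    ops = list(opinions)
    for _ in range(steps):
        i, j = rng.randrange(len(ops)), rng.randrange(len(ops))
        if i != j and abs(ops[i] - ops[j]) < threshold:
            shift = rate * (ops[j] - ops[i])
            ops[i] += shift
            ops[j] -= shift
    return ops

groups = [-1.0] * 10 + [1.0] * 10           # two ideological camps
spread = lambda ops: max(ops) - min(ops)
mixed = run(groups, threshold=3.0)          # everyone can interact
segregated = run(groups, threshold=0.5)     # only like-minded interact
```

With the low threshold the camps never exchange opinions, so the full gap of 2.0 persists; with the high threshold the population drifts toward consensus.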
Open Access Article Gamified Software to Support the Design of Business Innovation
Information 2018, 9(12), 324; https://doi.org/10.3390/info9120324
Received: 30 September 2018 / Revised: 10 November 2018 / Accepted: 12 December 2018 / Published: 14 December 2018
Viewed by 533 | PDF Full-text (1299 KB) | HTML Full-text | XML Full-text
Abstract
Business innovation is a process that requires creativity and benefits from extensive collaboration. Currently, computational support in creativity processes is low, but modern techniques would allow these processes to be sped up. In this context, we provide such computational support with software for business innovation design that uses computational creativity techniques. Furthermore, the software enables a gamified process to increase user engagement and collaboration; the process mimics evolutionary methods, relying on a voting mechanism. The software includes a business innovation ontology representing the domain knowledge, which is used to generate and select a set of diverse preliminary representations of business ideas. The representations most promising in novelty and potential impact are identified and used to ignite a business innovation game, in which team members collaborate to elaborate new innovation ideas based on those inputs until they converge on a shortlist of business model proposals. The main features of the approach are illustrated by means of a running example concerning innovative services for smart cities.
Open Access Article Tradeoff Analysis between Spectral and Energy Efficiency Based on Sub-Channel Activity Index in Wireless Cognitive Radio Networks
Information 2018, 9(12), 323; https://doi.org/10.3390/info9120323
Received: 17 October 2018 / Revised: 1 December 2018 / Accepted: 4 December 2018 / Published: 14 December 2018
Viewed by 511 | PDF Full-text (2213 KB) | HTML Full-text | XML Full-text
Abstract
In recent years, the rapid evolution of wireless technologies has led to high demand for spectral resources. Meeting this demand requires good spectrum management and, in turn, more efficient use of the spectrum. In this paper, we present a general system that makes a tradeoff between spectral efficiency (SE) and energy efficiency (EE) in cellular cognitive radio networks (CCRN), with their respective limits. We analyze the system taking into account the different types of power used in the CCRN, namely the spectrum detection power (Zs) and the relay power (Zr). An optimal transmission power allocation policy is formulated as an optimization problem in terms of the sub-channel activity index (SAI), maximizing spectrum utilization and minimizing the energy consumption of the secondary system's base station, subject to the constraints of the primary user system. We also evaluate the collaborative sub-channel activity index describing the activity of primary users in the CCRN. The theoretical analyses and simulation results demonstrate that SE and EE in the CCRN are not contradictory, and that an optimal tradeoff between them can be achieved. Compared with a cognitive cellular network in which the secondary base stations adopt an equal power allocation strategy for sub-channels, our proposed scheme shows a significant improvement. The model proposed in this paper therefore offers a better tradeoff between SE and EE.
(This article belongs to the Section Information and Communications Technology)
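A hedged illustration of why an SE-EE tradeoff exists at all, using the standard textbook definitions rather than the paper's full SAI-based model: spectral efficiency grows monotonically with transmit power, while energy efficiency, which also pays a fixed circuit power, peaks at a finite power.

```python
import math

def se(p, g=1.0, n0=0.1):
    """Spectral efficiency (bit/s/Hz) at transmit power p, channel gain g,
    noise power n0 -- the Shannon formula."""
    return math.log2(1 + p * g / n0)

def ee(p, p_c=0.5, g=1.0, n0=0.1):
    """Energy efficiency: SE per watt of transmit plus circuit power p_c."""
    return se(p, g, n0) / (p + p_c)

powers = [0.1 * k for k in range(1, 101)]   # candidate powers, 0.1 .. 10.0 W
best_ee_power = max(powers, key=ee)
best_se_power = max(powers, key=se)
```

SE is maximized at the largest allowed power, while EE peaks much earlier; choosing the operating point between the two optima is the tradeoff the paper studies under primary-user constraints.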
Open Access Article Improved Joint Probabilistic Data Association (JPDA) Filter Using Motion Feature for Multiple Maneuvering Targets in Uncertain Tracking Situations
Information 2018, 9(12), 322; https://doi.org/10.3390/info9120322
Received: 22 November 2018 / Revised: 8 December 2018 / Accepted: 8 December 2018 / Published: 13 December 2018
Viewed by 525 | PDF Full-text (4663 KB) | HTML Full-text | XML Full-text
Abstract
To track multiple maneuvering targets in cluttered environments with uncertain measurement noises and uncertain target dynamic models, an improved joint probabilistic data association-fuzzy recursive least squares filter (IJPDA-FRLSF) is proposed. In the proposed filter, two uncertainty models, of measurements and of observed angles, are first established. Next, these two models are employed to construct an additive fusion strategy, which is then used to calculate generalized joint association probabilities of measurements belonging to different targets. The obtained probabilities replace the joint association probabilities calculated by the standard joint probabilistic data association (JPDA) method. Given the advantage of the fuzzy recursive least squares filter (FRLSF) in tracking a single maneuvering target, namely that it relaxes the restrictive assumptions on measurement noise covariances and target dynamic models, FRLSF is still used to update the state of each target track. Thus, the proposed filter not only retains the advantage of FRLSF but can also adaptively adjust the weights of measurements and observed angles in the generalized joint association probabilities according to their uncertainty. The performance of the proposed filter is evaluated in two experiments with simulated and real data. It is found to be better than that of three other filters in terms of tracking accuracy and average run time.
(This article belongs to the Section Information Processes)
Open Access Article A Mobile Acquisition System and a Method for Hips Sway Fluency Assessment
Information 2018, 9(12), 321; https://doi.org/10.3390/info9120321
Received: 31 October 2018 / Revised: 8 December 2018 / Accepted: 10 December 2018 / Published: 12 December 2018
Cited by 1 | Viewed by 549 | PDF Full-text (664 KB) | HTML Full-text | XML Full-text
Abstract
The present contribution focuses on the estimation of the Cartesian kinematic jerk of the hips' orientation during a full three-dimensional movement, in the context of enabling eHealth applications of advanced mathematical signal analysis. The kinematic jerk index is estimated on the basis of gyroscopic signals acquired offline through a smartphone. A free mobile application is used to acquire the gyroscopic signals and to transmit them to a personal computer through a wireless network. The personal computer processes the acquired data and returns the kinematic jerk index associated with a motor task. A comparison of the kinematic jerk index values on a number of data sets confirms that the index can be used to evaluate the fluency of hip orientation during motion. The present research confirms that the proposed gyroscopic data acquisition/processing setup constitutes an inexpensive and portable solution for motion fluency analysis. The proposed setup may serve as a supporting eHealth technology in clinical biomechanics as well as in sports science.
(This article belongs to the Special Issue eHealth and Artificial Intelligence)
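A sketch of one common movement-smoothness measure, stated as an assumption since the paper's exact index derivation is not reproduced here: the integrated squared jerk of an angular velocity signal, computed by finite differences. Fluent motion yields a smaller index than the same motion with a superimposed tremor.

```python
import math

def jerk_index(omega, dt):
    """omega: uniformly sampled angular velocity; dt: sampling period (s).
    Angular jerk is the second finite difference of omega; the index
    integrates its square over the recording."""
    jerk = [(omega[k + 1] - 2 * omega[k] + omega[k - 1]) / dt**2
            for k in range(1, len(omega) - 1)]
    return sum(j * j for j in jerk) * dt

dt = 0.01
t = [k * dt for k in range(200)]
smooth = [math.sin(x) for x in t]                            # fluent sway
jerky = [math.sin(x) + 0.05 * math.sin(40 * x) for x in t]   # sway + tremor
```

Even though the tremor term is small in amplitude, differentiating twice amplifies its high frequency, so the jerky signal scores a much larger index.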
Open Access Article An Empirical Study of Exhaustive Matching for Improving Motion Field Estimation
Information 2018, 9(12), 320; https://doi.org/10.3390/info9120320
Received: 20 October 2018 / Revised: 6 December 2018 / Accepted: 7 December 2018 / Published: 12 December 2018
Viewed by 587 | PDF Full-text (17968 KB) | HTML Full-text | XML Full-text
Abstract
Optical flow is defined as the motion field of pixels between two consecutive images. Traditionally, the pixel motion field (or optical flow) is estimated by minimizing an energy model composed of (i) a data term and (ii) a regularization term. The data term measures the optical flow error, and the regularization term imposes spatial smoothness. Traditional variational models linearize the data term, and this linearized version fails when the displacement of an object is larger than its own size. Recently, the precision of optical flow methods has been increased by using additional information obtained from correspondences computed between two images by methods such as SIFT, deep matching, and exhaustive search. This work presents an empirical study evaluating different strategies for locating exhaustive correspondences to improve flow estimation. We considered several placements for the matches: random locations, uniform locations, and locations of maximum gradient magnitude. Additionally, we tested the combination of large and medium gradients with uniform locations. We evaluated our methodology on the MPI-Sintel database, a state-of-the-art evaluation database. Our results on MPI-Sintel show that our proposal outperforms classical methods such as Horn-Schunck, TV-L1, and LDOF, and that our method performs similarly to MDP-Flow.
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2018))
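A minimal sketch of the data-plus-regularization energy described above, in a generic formulation rather than the exact model evaluated in the paper: for a constant integer flow on a tiny image, the energy is the warping error plus a weighted smoothness term (zero here, since a constant field has no gradient), and the true shift minimizes it.

```python
def energy(img1, img2, u, v, lam=0.1):
    """Energy of constant flow (u, v): data term = warping error over
    in-bounds pixels; smoothness term vanishes for a constant field."""
    h, w = len(img1), len(img1[0])
    data = 0.0
    for y in range(h):
        for x in range(w):
            xs, ys = x + u, y + v
            if 0 <= xs < w and 0 <= ys < h:
                data += abs(img2[ys][xs] - img1[y][x])
    smooth = 0.0   # |grad u| + |grad v| is zero for a constant flow field
    return data + lam * smooth

img1 = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 1, 2, 3]]
# img2 is img1 shifted one pixel to the right (left column filled with 0).
img2 = [[0, 1, 2, 3],
        [0, 5, 6, 7],
        [0, 9, 1, 2]]
best = min(((u, v) for u in (-1, 0, 1) for v in (-1, 0, 1)),
           key=lambda f: energy(img1, img2, *f))
```

Exhaustive correspondences play the role of this brute-force search over displacements, which is exactly what lets the method handle motions larger than the object.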
Open Access Article A Diabetes Management Information System with Glucose Prediction
Information 2018, 9(12), 319; https://doi.org/10.3390/info9120319
Received: 31 October 2018 / Revised: 6 December 2018 / Accepted: 7 December 2018 / Published: 12 December 2018
Viewed by 557 | PDF Full-text (1034 KB) | HTML Full-text | XML Full-text
Abstract
Diabetes has become a serious health concern. The use and popularization of blood glucose measurement devices have led to a tremendous improvement in the health of diabetics. Tracking and maintaining traceability between glucose measurements, insulin doses, and carbohydrate intake can provide useful information to physicians, health professionals, and patients. This paper presents an information system, called GLUMIS (GLUcose Management Information System), aimed at supporting diabetes management activities. It comprises two modules, one for glucose prediction and one for data visualization, together with a reasoner to aid users in their treatment. Through integration with glucose measurement devices, it is possible to collect historical data on the treatment. In addition, integration with a tool called the REALI System allows GLUMIS to also process data on insulin doses and eating habits. Quantitative and qualitative data were collected through an experimental case study involving 10 participants, which demonstrated that the GLUMIS system is feasible. The system was able to discover rules for predicting future values of blood glucose by processing the past history of measurements. It then presented reports that can help diabetics choose the amount of insulin they should take and the amount of carbohydrate they should consume during the day. Rules found using one patient's measurements were analyzed by a specialist, who found three of them to be useful for improving the patient's treatment. One such rule was "if glucose before breakfast is in [47, 89], then glucose at afternoon break is in [160, 306]". The results obtained through the experimental study, and other verifications of the algorithm created, had a double objective. First, participants, through a questionnaire, rated the visualizations as easy or very easy to understand. Second, the innovative algorithm applied in the GLUMIS system allows the decision maker to achieve much more precision, with less loss of information, than algorithms that require the data to be discretized.
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2018))
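A sketch of how interval rules like the one quoted in the abstract could be applied; the encoding below is hypothetical, since the GLUMIS rule format is not specified here: each rule maps an interval of one measurement to a predicted interval of another.

```python
# Each rule: (condition variable, condition interval, target variable,
# predicted interval). Values are mg/dL, matching the quoted rule.
rules = [
    ("glucose_before_breakfast", (47, 89), "glucose_afternoon", (160, 306)),
]

def predict(measurements, rules):
    """Return {target: predicted interval} for every rule whose condition
    interval contains the corresponding measurement."""
    predictions = {}
    for cond_var, (lo, hi), target, interval in rules:
        value = measurements.get(cond_var)
        if value is not None and lo <= value <= hi:
            predictions[target] = interval
    return predictions

preds = predict({"glucose_before_breakfast": 72}, rules)
```

A report built from `preds` could then warn the patient that an afternoon hyperglycemic range is likely and suggest adjusting insulin or carbohydrate intake.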
Open Access Article A Soft Body Physics Simulator with Computational Offloading to the Cloud
Information 2018, 9(12), 318; https://doi.org/10.3390/info9120318
Received: 26 November 2018 / Revised: 5 December 2018 / Accepted: 7 December 2018 / Published: 11 December 2018
Viewed by 530 | PDF Full-text (3681 KB) | HTML Full-text | XML Full-text
Abstract
We describe the gamification of a soft-body physics simulator. We developed a game, called Jelly Dude, that allows the player to change and modify the game engine by tinkering with various physics parameters, creating custom game levels, and installing scripts. The game engine is capable of simulating soft-body physics and can display the simulation results visually in real time. In order to ensure high-quality graphics in real time, we implemented intelligent computational offloading to the cloud using a Jordan Neural Network (JNN) with a fuzzy logic scheme for short-term prediction of network traffic between a client and a cloud server. The experimental results show that computational offloading allowed us to increase the speed of graphics rendering in terms of frames per second, and to improve the precision of soft-body modeling in terms of the number of particles used to represent a soft body.
(This article belongs to the Special Issue Cloud Gamification)
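A simplified offloading policy, offered as an assumption (the paper drives this decision with a Jordan neural network and fuzzy logic for the traffic prediction, which are not reproduced here): offload a frame's physics computation only when the predicted transfer time plus cloud compute time beats local compute time.

```python
def should_offload(payload_bits, predicted_bandwidth_bps, cloud_ms, local_ms):
    """Offload iff sending the state and computing in the cloud is
    predicted to finish before computing locally."""
    transfer_ms = payload_bits / predicted_bandwidth_bps * 1000
    return transfer_ms + cloud_ms < local_ms

# Hypothetical numbers: 200 kbit of soft-body state, 5 ms cloud compute,
# 40 ms local compute, with two predicted link speeds.
fast_link = should_offload(2e5, 20e6, cloud_ms=5, local_ms=40)
slow_link = should_offload(2e5, 2e6, cloud_ms=5, local_ms=40)
```

The value of a good traffic predictor is exactly here: an over-optimistic bandwidth estimate makes the policy offload frames that then arrive late and drop the frame rate.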
Open Access Article LICIC: Less Important Components for Imbalanced Multiclass Classification
Information 2018, 9(12), 317; https://doi.org/10.3390/info9120317
Received: 22 October 2018 / Revised: 19 November 2018 / Accepted: 6 December 2018 / Published: 9 December 2018
Viewed by 560 | PDF Full-text (3647 KB) | HTML Full-text | XML Full-text
Abstract
Multiclass classification in cancer diagnostics using DNA or gene expression signatures, as well as classification of bacteria species fingerprints in MALDI-TOF mass spectrometry data, is challenging because of imbalanced data and the high number of dimensions relative to the number of instances. In this study, a new oversampling technique called LICIC is presented as a valuable instrument for countering both class imbalance and the well-known "curse of dimensionality" problem. The method preserves non-linearities within the dataset while creating new instances without adding noise. The method is compared with other oversampling methods, such as Random Oversampling, SMOTE, Borderline-SMOTE, and ADASYN. F1 scores show the validity of this new technique when used with imbalanced, multiclass, and high-dimensional datasets.
(This article belongs to the Special Issue eHealth and Artificial Intelligence)
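A sketch of SMOTE-style interpolation, one of the baselines LICIC is compared against (LICIC's own component-based procedure is not reproduced here): each synthetic minority instance is placed on the segment between a minority sample and one of its nearest minority neighbours.

```python
import random

def smote_like(minority, n_new, k=2, seed=0):
    """minority: list of feature tuples from the minority class.
    Returns n_new synthetic points interpolated between neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # k nearest minority neighbours of a (brute force, squared Euclidean).
        neighbours = sorted((p for p in minority if p is not a),
                            key=lambda p: sum((x - y) ** 2
                                              for x, y in zip(a, p)))[:k]
        b = rng.choice(neighbours)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(x + t * (y - x) for x, y in zip(a, b)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
new_points = smote_like(minority, n_new=5)
```

Because every new point is a convex combination of two real samples, this baseline is linear by construction, which is the limitation LICIC's preservation of non-linearities addresses.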
Open Access Article Towards Expert-Based Speed–Precision Control in Early Simulator Training for Novice Surgeons
Information 2018, 9(12), 316; https://doi.org/10.3390/info9120316
Received: 14 October 2018 / Revised: 1 December 2018 / Accepted: 5 December 2018 / Published: 9 December 2018
Viewed by 592 | PDF Full-text (2442 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Simulator training for image-guided surgical interventions would benefit from intelligent systems that detect the evolution of task performance and take control of individual speed–precision strategies by providing effective automatic performance feedback. At the earliest training stages, novices frequently focus on getting faster at the task. This may, as shown here, compromise the evolution of their precision scores, sometimes irreparably, if it is not controlled for as early as possible. Artificial intelligence could help ensure that a trainee reaches her/his optimal individual speed–accuracy tradeoff by monitoring individual performance criteria, detecting critical trends at any given moment in time, and alerting the trainee as early as necessary when to slow down and focus on precision, or when to focus on getting faster. It is suggested that, for effective benchmarking, the individual training statistics of novices be compared with the statistics of an expert surgeon. The speed–accuracy functions of novices trained in a large number of experimental sessions reveal differences in individual speed–precision strategies, and clarify why such strategies should be automatically detected and controlled for before further training on specific surgical task models, or clinical models, may be envisaged. How expert benchmark statistics may be exploited for automatic performance control is explained.
(This article belongs to the Special Issue eHealth and Artificial Intelligence)
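The trend detection described in this abstract can be sketched in a few lines: fit a least-squares slope to per-session completion times and error scores, and alert when speed is improving while precision degrades. The session data, the zero-slope tolerance, and the alert messages below are illustrative assumptions, not the authors' actual monitoring algorithm.

```python
def slope(values):
    # Least-squares slope of a per-session metric over the session index
    # (assumes at least two sessions, so the denominator is nonzero)
    n = len(values)
    mean_x = (n - 1) / 2.0
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

def speed_precision_alert(times, errors, tol=0.0):
    # Alert when the trainee is getting faster (time slope < 0)
    # while precision is degrading (error slope > 0), and vice versa
    t_slope, e_slope = slope(times), slope(errors)
    if t_slope < -tol and e_slope > tol:
        return "slow down, focus on precision"
    if t_slope > tol and e_slope < -tol:
        return "focus on getting faster"
    return "keep going"
```

A trainee whose times fall while errors rise would trigger the first message; an expert benchmark could replace the fixed tolerance with thresholds derived from the expert's statistics.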
Open AccessArticle Empirical Study on the Factors Influencing Process Innovation When Adopting Intelligent Robots at Small- and Medium-Sized Enterprises—The Role of Organizational Supports
Information 2018, 9(12), 315; https://doi.org/10.3390/info9120315
Received: 27 November 2018 / Revised: 6 December 2018 / Accepted: 6 December 2018 / Published: 8 December 2018
Viewed by 570 | PDF Full-text (530 KB) | HTML Full-text | XML Full-text
Abstract
Robot technology at small- and medium-sized enterprises has become a crucial part of current business operations. Beginning with the manufacturing industry, more industries than ever before have recently begun making use of robot technology to increase operational efficiency and productivity. However, prior studies [...] Read more.
Robot technology at small- and medium-sized enterprises has become a crucial part of current business operations. Beginning with the manufacturing industry, more industries than ever before have recently begun making use of robot technology to increase operational efficiency and productivity. However, prior studies regarding innovation related to intelligent robot use have been limited to developing strategies for describing robot technologies in general. Therefore, we developed a research model for investigating process innovation as it relates to intelligent robots. Based on the literature, two variables of technology benefits (direct usefulness and indirect usefulness) and two constructs of environmental pressure (industry and government) were incorporated into the research model as key determinants of a firm’s process innovation. Furthermore, organizational supports were added as moderating variables of the relationship between technology benefits and process innovation. We collected 257 responses from employees in managerial positions at various firms in order to test the proposed hypotheses using structural equation modeling in the statistical software AMOS 22.0. The results revealed that all variables, as well as the moderator, have a significant impact on process innovation. The findings of this study provide theoretical and practical implications for process innovation based on intelligent robot technology. Full article
(This article belongs to the Section Information Systems)
Open AccessArticle A Quick Algorithm for Binary Discernibility Matrix Simplification using Deterministic Finite Automata
Information 2018, 9(12), 314; https://doi.org/10.3390/info9120314
Received: 30 October 2018 / Revised: 3 December 2018 / Accepted: 6 December 2018 / Published: 7 December 2018
Cited by 1 | Viewed by 563 | PDF Full-text (421 KB) | HTML Full-text | XML Full-text
Abstract
The binary discernibility matrix, originally introduced by Felix and Ushio, is a binary matrix representation for storing discernible attributes that can distinguish different objects in decision systems. It is an effective approach for feature selection, knowledge representation and uncertainty reasoning. An original binary [...] Read more.
The binary discernibility matrix, originally introduced by Felix and Ushio, is a binary matrix representation for storing discernible attributes that can distinguish different objects in decision systems. It is an effective approach for feature selection, knowledge representation and uncertainty reasoning. An original binary discernibility matrix usually contains redundant objects and attributes. These redundant objects and attributes may deteriorate the performance of feature selection and knowledge acquisition. To overcome this shortcoming, row relations and column relations in a binary discernibility matrix are defined in this paper. To compare the relationships of different rows (columns) quickly, we construct deterministic finite automata for a binary discernibility matrix. On this basis, a quick algorithm for binary discernibility matrix simplification using deterministic finite automata (BDMSDFA) is proposed. We make a comparison of BDMR (an algorithm of binary discernibility matrix reduction), IBDMR (an improved algorithm of binary discernibility matrix reduction) and BDMSDFA. Finally, theoretical analyses and experimental results indicate that the algorithm of BDMSDFA is effective and efficient. Full article
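The row simplification at the heart of such algorithms rests on the absorption law, which can be illustrated with plain set operations. This sketch omits the deterministic-finite-automata comparison that gives BDMSDFA its speed, and the example matrix in the test is made up.

```python
def simplify_rows(matrix):
    """Drop duplicate rows and rows absorbed by a strictly smaller row.

    Each row of a binary discernibility matrix records which attributes
    discern one object pair. If another pair is discerned by a proper
    subset of those attributes, the larger row adds no constraint
    (absorption law) and can be removed.
    """
    rows = {frozenset(j for j, v in enumerate(r) if v) for r in matrix}
    rows.discard(frozenset())  # empty rows carry no information
    kept = [r for r in rows if not any(o < r for o in rows)]
    return sorted(kept, key=sorted)
```

Comparing every pair of rows this way is quadratic; encoding rows as strings accepted by a DFA, as the paper proposes, is one way to speed up the pairwise subset and equality checks.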
Open AccessArticle Low-Complexity Synchronization Scheme with Low-Resolution ADCs
Information 2018, 9(12), 313; https://doi.org/10.3390/info9120313
Received: 27 October 2018 / Revised: 23 November 2018 / Accepted: 6 December 2018 / Published: 7 December 2018
Viewed by 455 | PDF Full-text (2126 KB) | HTML Full-text | XML Full-text
Abstract
An important aim of next-generation (5G) and beyond mobile communication systems is to provide thousand-fold capacity growth and to support high-speed data transmission up to several megabits per second. However, the research community and industries have to face a dilemma of power [...] Read more.
An important aim of next-generation (5G) and beyond mobile communication systems is to provide thousand-fold capacity growth and to support high-speed data transmission up to several megabits per second. However, the research community and industries have to face a dilemma of power consumption versus hardware design in satisfying the increasing communication requirements. To reduce system cost, power consumption, and implementation complexity, a novel scheme for symbol timing and frequency offset estimation with low-resolution analog-to-digital converters (ADCs), based on an orthogonal frequency division multiplexing ultra-wideband (OFDM-UWB) system, is proposed in this paper. In our work, we first verified that the autocorrelation of pseudo-noise (PN) sequences is not affected by low-resolution quantization. With the help of this property, timing synchronization could be implemented robustly against the influence of low-resolution quantization. Then, the transmitted signal structure and the low-resolution quantization scheme under the synchronization scheme were designed. Finally, a frequency offset estimation model with one-bit timing synchronization was established. Theoretical analysis and simulation results corroborate that the performance of the proposed scheme not only approximates that of the full-resolution synchronization scheme, but also has lower power consumption and computational complexity. Full article
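The PN-autocorrelation property the scheme relies on can be checked with a toy simulation: the sharp correlation peak of an m-sequence survives one-bit (sign-only) quantization, so a correlator can still recover the timing offset. The LFSR, delay, and noise level below are illustrative; this stands in for, rather than reproduces, the paper's OFDM-UWB design.

```python
import random

def mseq7():
    # Primitive degree-3 Fibonacci LFSR: period-7 m-sequence as +/-1 chips
    state = [1, 1, 1]
    chips = []
    for _ in range(7):
        chips.append(1 if state[2] else -1)
        fb = state[1] ^ state[2]
        state = [fb, state[0], state[1]]
    return chips

def circ_corr(a, b):
    # Circular correlation: corr[k] = sum_i a[(i + k) % n] * b[i]
    n = len(a)
    return [sum(a[(i + k) % n] * b[i] for i in range(n)) for k in range(n)]

pn = mseq7()
acf = circ_corr(pn, pn)
# Two-valued m-sequence autocorrelation: 7 at lag 0, -1 at every other lag

# Received signal: delayed, noisy copy of the PN preamble, then a 1-bit ADC
random.seed(1)
delay = 3
rx = [4.0 * pn[(i - delay) % 7] + random.gauss(0.0, 1.0) for i in range(7)]
rx_1bit = [1 if v >= 0 else -1 for v in rx]  # sign only, as in a 1-bit ADC

corr = circ_corr(rx_1bit, pn)
est_delay = max(range(7), key=lambda k: corr[k])
```

Because the correlator only ever multiplies signs, the peak location (and hence the timing estimate) is insensitive to the amplitude information discarded by the low-resolution ADC.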
Open AccessArticle A Fuzzy EWMA Attribute Control Chart to Monitor Process Mean
Information 2018, 9(12), 312; https://doi.org/10.3390/info9120312
Received: 30 September 2018 / Revised: 18 November 2018 / Accepted: 4 December 2018 / Published: 7 December 2018
Viewed by 527 | PDF Full-text (499 KB) | HTML Full-text | XML Full-text
Abstract
Conventional control charts are one of the most important techniques in statistical process control which are used to assess the performance of processes to see whether they are in- or out-of-control. As traditional control charts deal with crisp data, they are not suitable [...] Read more.
Conventional control charts are one of the most important techniques in statistical process control; they are used to assess the performance of processes to see whether they are in or out of control. As traditional control charts deal with crisp data, they are not suitable for studying unclear, vague, and fuzzy data. In many real-world applications, however, the data to be used in a control charting method are not crisp, since they are approximated due to environmental uncertainties and systematic ambiguities involved in the systems under investigation. In these situations, fuzzy numbers and linguistic variables are used to capture such uncertainties. That is why the use of a fuzzy control chart, in which fuzzy data are used, is justified. As an exponentially weighted moving average (EWMA) scheme is usually used to detect small shifts, in this paper a fuzzy EWMA (F-EWMA) control chart is proposed to detect small shifts in the process mean when fuzzy data are available. The application of the newly developed fuzzy control chart is illustrated using real-life data. Full article
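For reference, the classical crisp EWMA recursion the fuzzy chart builds on looks like this; the smoothing constant and limit width are textbook defaults, and the fuzzification step proposed in the paper is not reproduced.

```python
def ewma_chart(data, mu0, sigma, lam=0.2, L=3.0):
    """Classical EWMA chart: z_t = lam*x_t + (1-lam)*z_{t-1}, z_0 = mu0.

    Returns (z_t, out_of_control) for each observation, using the exact
    time-varying variance of z_t for the control limits.
    """
    z = mu0
    points = []
    for t, x in enumerate(data, start=1):
        z = lam * x + (1.0 - lam) * z
        var = (lam / (2.0 - lam)) * (1.0 - (1.0 - lam) ** (2 * t)) * sigma ** 2
        half_width = L * var ** 0.5
        points.append((z, abs(z - mu0) > half_width))
    return points
```

Because each observation gets only weight λ, small sustained shifts accumulate in z_t and eventually cross the limits, which is exactly the small-shift sensitivity that motivates using EWMA here.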
Open AccessArticle Accident Prediction System Based on Hidden Markov Model for Vehicular Ad-Hoc Network in Urban Environments
Information 2018, 9(12), 311; https://doi.org/10.3390/info9120311
Received: 22 October 2018 / Revised: 24 November 2018 / Accepted: 5 December 2018 / Published: 7 December 2018
Viewed by 584 | PDF Full-text (4590 KB) | HTML Full-text | XML Full-text
Abstract
With the emergence of autonomous vehicles and internet of vehicles (IoV), future roads of smart cities will have a combination of autonomous and automated vehicles with regular vehicles that require human operators. To ensure the safety of the road commuters in such a [...] Read more.
With the emergence of autonomous vehicles and the internet of vehicles (IoV), future roads of smart cities will have a combination of autonomous and automated vehicles with regular vehicles that require human operators. To ensure the safety of road commuters in such a network, it is imperative to enhance the performance of Advanced Driver Assistance Systems (ADAS). Real-time driving risk prediction is a fundamental part of an ADAS. Many driving risk prediction systems have been proposed; however, most of them are based only on a vehicle’s velocity, whereas in most accident scenarios other factors are also involved, such as weather conditions or driver fatigue. In this paper, we proposed an accident prediction system for vehicular ad hoc networks (VANETs) in urban environments, in which we considered the crash risk as a latent variable that can be observed through multiple observations such as velocity, weather conditions, risky locations, nearby-vehicle density, and driver fatigue. A hidden Markov model (HMM) was used to model the correlation between these observations and the latent variable. Simulation results showed that the proposed system has better performance in terms of sensitivity and precision compared to state-of-the-art single-factor schemes. Full article
(This article belongs to the Special Issue Vehicular Networks and Applications)
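The latent-risk idea can be sketched with a standard HMM forward pass: the hidden state is the crash risk, and each time step's multi-factor reading is folded into one observation symbol. The two states, the observation alphabet, and every probability below are invented for illustration and are not taken from the paper.

```python
def forward(obs, pi, A, B):
    # alpha[i] = P(observations so far, current hidden state = i)
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * B[i][o]
                 for i in range(n)]
    return alpha

# States: 0 = low risk, 1 = high risk.
# Observations: 0 = normal reading, 1 = hazardous reading (e.g., high speed
# plus bad weather plus driver fatigue, collapsed into one symbol).
pi = [0.9, 0.1]
A = [[0.8, 0.2], [0.3, 0.7]]   # risk state tends to persist
B = [[0.8, 0.2], [0.3, 0.7]]   # hazardous readings are likelier when risky

def risk_posterior(obs):
    alpha = forward(obs, pi, A, B)
    return alpha[1] / sum(alpha)  # P(high risk | observations so far)
```

A run of hazardous readings drives the filtered probability of the high-risk state up, which is the quantity an ADAS would threshold to warn the driver.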
Open AccessArticle Integration of Web APIs and Linked Data Using SPARQL Micro-Services—Application to Biodiversity Use Cases
Information 2018, 9(12), 310; https://doi.org/10.3390/info9120310
Received: 9 November 2018 / Revised: 3 December 2018 / Accepted: 3 December 2018 / Published: 6 December 2018
Cited by 1 | Viewed by 675 | PDF Full-text (1555 KB) | HTML Full-text | XML Full-text
Abstract
In recent years, Web APIs have become a de facto standard for exchanging machine-readable data on the Web. Despite this success, however, they often fail in making resource descriptions interoperable due to the fact that they rely on proprietary vocabularies that lack formal [...] Read more.
In recent years, Web APIs have become a de facto standard for exchanging machine-readable data on the Web. Despite this success, however, they often fail in making resource descriptions interoperable due to the fact that they rely on proprietary vocabularies that lack formal semantics. The Linked Data principles similarly seek the massive publication of data on the Web, yet with the specific goal of ensuring semantic interoperability. Given their complementary goals, it is commonly acknowledged that cross-fertilization could stem from the automatic combination of Linked Data and Web APIs. Towards this goal, in this paper we leverage micro-service architectural principles to define a SPARQL Micro-Service architecture, aimed at querying Web APIs using SPARQL. A SPARQL micro-service is a lightweight SPARQL endpoint that provides access to a small, resource-centric, virtual graph. In this context, we argue that full SPARQL Query expressiveness can be supported efficiently without jeopardizing server availability. Furthermore, we demonstrate how this architecture can be used to dynamically assign dereferenceable URIs to Web API resources that do not have URIs beforehand, thus literally “bringing” Web APIs into the Web of Data. We believe that the emergence of an ecosystem of SPARQL micro-services published by independent providers would enable Linked Data-based applications to easily glean pieces of data from a wealth of distributed, scalable, and reliable services. We describe a working prototype implementation and finally illustrate the use of SPARQL micro-services in the context of two real-life use cases related to the biodiversity domain, developed in collaboration with the French National Museum of Natural History. Full article
(This article belongs to the Special Issue Semantics for Big Data Integration)
Open AccessArticle Pareidolic and Uncomplex Technological Singularity
Information 2018, 9(12), 309; https://doi.org/10.3390/info9120309
Received: 25 October 2018 / Revised: 30 November 2018 / Accepted: 3 December 2018 / Published: 6 December 2018
Viewed by 508 | PDF Full-text (337 KB) | HTML Full-text | XML Full-text
Abstract
“Technological Singularity” (TS), “Accelerated Change” (AC), and Artificial General Intelligence (AGI) are frequent future/foresight studies’ themes. Rejecting the reductionist perspective on the evolution of science and technology, and based on patternicity (“the tendency to find patterns in meaningless noise”), a discussion about the [...] Read more.
“Technological Singularity” (TS), “Accelerated Change” (AC), and Artificial General Intelligence (AGI) are frequent themes of future/foresight studies. Rejecting the reductionist perspective on the evolution of science and technology, and based on patternicity (“the tendency to find patterns in meaningless noise”), a discussion about the perverse power of apophenia (“the tendency to perceive a connection or meaningful pattern between unrelated or random things (such as objects or ideas)”) and pareidolia (“the tendency to perceive a specific, often meaningful image in a random or ambiguous visual pattern”) in those studies is the starting point for two claims: “accelerated change” is a case of future-related apophenia, whereas AGI (and TS) are cases of future-related pareidolia. A short presentation of research-focused social networks working to solve complex problems reveals the superiority of human networked minds over hardware–software systems and suggests the opportunity for a network-based study of TS (and AGI) from a complexity perspective. It could compensate for the weaknesses of approaches deployed from a linear and predictable perspective, in order to try to redesign our intelligent artifacts. Full article
(This article belongs to the Special Issue AI AND THE SINGULARITY: A FALLACY OR A GREAT OPPORTUNITY?)
Open AccessArticle An Image Enhancement Method Based on Non-Subsampled Shearlet Transform and Directional Information Measurement
Information 2018, 9(12), 308; https://doi.org/10.3390/info9120308
Received: 16 October 2018 / Revised: 29 November 2018 / Accepted: 3 December 2018 / Published: 6 December 2018
Viewed by 456 | PDF Full-text (4684 KB) | HTML Full-text | XML Full-text
Abstract
Based on the advantages of a non-subsampled shearlet transform (NSST) in image processing and the characteristics of remote sensing imagery, NSST was applied to enhance blurred images. In the NSST transform domain, directional information measurement can highlight textural features of an image edge [...] Read more.
Based on the advantages of the non-subsampled shearlet transform (NSST) in image processing and the characteristics of remote sensing imagery, NSST was applied to enhance blurred images. In the NSST transform domain, directional information measurement can highlight the textural features of image edges and reduce image noise. Therefore, NSST was applied to the detailed enhancement of high-frequency sub-band coefficients. Based on the characteristics of the low-frequency image, the retinex method was used to enhance low-frequency images. Then, an NSST inverse transformation was performed on the enhanced low- and high-frequency coefficients to obtain an enhanced image. Computer simulation experiments showed that, compared with traditional image enhancement strategies, the method proposed in this paper can enrich the details of the image and enhance its visual effect. Compared with the other algorithms discussed in this paper, the proposed method improves the brightness, contrast, edge strength, and information entropy of the enhanced image. In addition, in experiments on noisy images, various objective evaluation indices show that the method enhances the image while introducing the least noise information, which further indicates that the method can suppress noise while improving image quality, and is both effective and practical. Full article
(This article belongs to the Section Information Processes)
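The retinex step applied to the low-frequency component can be illustrated in one dimension: estimate the slowly varying illumination with a blur, then subtract it in the log domain. The box blur, radius, and test signal are simplifications for illustration; the NSST decomposition itself is not shown.

```python
import math

def box_blur(signal, radius):
    # Crude estimate of the slowly varying illumination component
    n = len(signal)
    return [sum(signal[max(0, i - radius):min(n, i + radius + 1)])
            / (min(n, i + radius + 1) - max(0, i - radius))
            for i in range(n)]

def single_scale_retinex(signal, radius=2):
    # R = log(I) - log(blur(I)): keeps reflectance, divides out illumination
    blur = box_blur(signal, radius)
    return [math.log(s) - math.log(b) for s, b in zip(signal, blur)]
```

On a ramp of intensities with constant reflectance, the output varies far less than the log of the input, showing that the illumination gradient has been largely divided out.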
Open AccessArticle Improving the Accuracy in Sentiment Classification in the Light of Modelling the Latent Semantic Relations
Information 2018, 9(12), 307; https://doi.org/10.3390/info9120307
Received: 15 October 2018 / Revised: 19 November 2018 / Accepted: 28 November 2018 / Published: 4 December 2018
Viewed by 591 | PDF Full-text (908 KB) | HTML Full-text | XML Full-text
Abstract
The research presents the methodology of improving the accuracy in sentiment classification in the light of modelling the latent semantic relations (LSR). The objective of this methodology is to find ways of eliminating the limitations of the discriminant and probabilistic methods for LSR [...] Read more.
The research presents a methodology for improving the accuracy of sentiment classification in the light of modelling latent semantic relations (LSR). The objective of this methodology is to find ways of eliminating the limitations of the discriminant and probabilistic methods of LSR revealing, and of customizing the sentiment classification process (SCP) for more accurate recognition of text tonality. This objective was achieved by providing for the joint usage of the following methods: (1) retrieval and recognition of the hierarchical semantic structure of the text, and (2) development of a hierarchical contextually-oriented sentiment dictionary in order to perform context-sensitive SCP. The main scientific contribution of this research is the set of the following approaches: at the phase of LSR revealing, (1) combination of the discriminant and probabilistic models, applying adjustment rules to obtain the final joint result; at all SCP phases, (2) considering the document as a complex structure of topically complete textual components (paragraphs), and (3) taking into account the features of the persuasive document type. The experimental results demonstrate an enhancement of SCP accuracy, namely a significant increase in the average recall and precision and a guaranteed sufficient level of accuracy. Full article
(This article belongs to the Special Issue Knowledge Engineering and Semantic Web)
Open AccessArticle Social Customer Relationship Management and Organizational Characteristics
Information 2018, 9(12), 306; https://doi.org/10.3390/info9120306
Received: 2 November 2018 / Revised: 23 November 2018 / Accepted: 29 November 2018 / Published: 2 December 2018
Viewed by 643 | PDF Full-text (702 KB) | HTML Full-text | XML Full-text
Abstract
Social customer relationship management (SCRM) is a new philosophy influencing the relationship between customer and organization where the customer gets the opportunity to control the relationship through social media. This paper aims to identify (a) the current level of SCRM and (b) the [...] Read more.
Social customer relationship management (SCRM) is a new philosophy influencing the relationship between customer and organization, where the customer gets the opportunity to control the relationship through social media. This paper aims to identify (a) the current level of SCRM and (b) the influence of basic organizational characteristics on the SCRM level. The data were gathered through a questionnaire distributed to 362 organizations headquartered in the Czech Republic. The questionnaire comprised 54 questions focusing on the significance of marketing and CRM practices, establishing a relationship with the customer, online communities, the use of social media in marketing, and acquiring and managing information. Scaled questions with a typical five-level Likert scale were applied in the questionnaire. The results show that larger firms more often set up their own online communities and manage them strategically; moreover, they are able to manage information better. Contrariwise, small-sized organizations use social networks as a way to establish communication with the customer more than large-sized entities do. The use of social media for marketing purposes is significantly higher in organizations oriented to consumer markets than in those oriented to business markets. Full article
Open AccessEditorial Dark-Web Cyber Threat Intelligence: From Data to Intelligence to Prediction
Information 2018, 9(12), 305; https://doi.org/10.3390/info9120305
Received: 29 November 2018 / Accepted: 29 November 2018 / Published: 1 December 2018
Viewed by 576 | PDF Full-text (132 KB) | HTML Full-text | XML Full-text
Abstract
Scientific work that leverages information about communities on the deep and dark web has opened up new angles in the field of security informatics. [...] Full article
(This article belongs to the Special Issue Darkweb Cyber Threat Intelligence Mining)
Open AccessArticle Towards the Representation of Etymological Data on the Semantic Web
Information 2018, 9(12), 304; https://doi.org/10.3390/info9120304
Received: 15 September 2018 / Revised: 31 October 2018 / Accepted: 12 November 2018 / Published: 30 November 2018
Cited by 1 | Viewed by 540 | PDF Full-text (1880 KB) | HTML Full-text | XML Full-text
Abstract
In this article, we look at the potential for a wide-coverage modelling of etymological information as linked data using the Resource Description Framework (RDF) data model. We begin with a discussion of some of the most typical features of etymological data and the [...] Read more.
In this article, we look at the potential for a wide-coverage modelling of etymological information as linked data using the Resource Description Framework (RDF) data model. We begin with a discussion of some of the most typical features of etymological data and the challenges that these might pose to an RDF-based modelling. We then propose a new vocabulary for representing etymological data, the Ontolex-lemon Etymological Extension (lemonETY), based on the ontolex-lemon model. Each of the main elements of our new model is motivated with reference to the preceding discussion. Full article
(This article belongs to the Special Issue Towards the Multilingual Web of Data)
Open AccessArticle Evaluating User Behaviour in a Cooperative Environment
Information 2018, 9(12), 303; https://doi.org/10.3390/info9120303
Received: 15 October 2018 / Revised: 25 November 2018 / Accepted: 27 November 2018 / Published: 30 November 2018
Cited by 1 | Viewed by 463 | PDF Full-text (597 KB) | HTML Full-text | XML Full-text
Abstract
Big Data, as a new paradigm, has forced both researchers and industries to rethink data management techniques, which have become inadequate in many contexts. Indeed, we deal every day with huge amounts of collected data about user suggestions and searches. These data require new [...] Read more.
Big Data, as a new paradigm, has forced both researchers and industries to rethink data management techniques, which have become inadequate in many contexts. Indeed, we deal every day with huge amounts of collected data about user suggestions and searches. These data require new advanced analysis strategies to be devised in order to profitably leverage this information. Moreover, due to the heterogeneous and fast-changing nature of these data, we need new data storage and management tools to store them effectively. In this paper, we analyze the effect of user searches and suggestions and try to understand how much they influence a user’s social environment. This task is crucial for efficiently identifying the users that are able to spread their influence across the network. Gathering information about user preferences is a key activity in several scenarios, such as tourism promotion, personalized marketing, and entertainment suggestions. We show the application of our approach in a large research project named D-ALL, which stands for Data Alliance. In it, we assessed the reaction of users in a competitive environment when they were invited to judge each other. Our results show that users tend to conform to each other when no tangible rewards are provided, whereas they try to reduce other users’ ratings when doing so affects getting a tangible prize. Full article
(This article belongs to the Special Issue Advanced Learning Methods for Complex Data)
Information EISSN 2078-2489 Published by MDPI AG, Basel, Switzerland