
Table of Contents

Information, Volume 10, Issue 12 (December 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
Cover Story: The agent-based approach is a well-established methodology to model distributed intelligent [...]
Open Access Article
Information Evolution and Organisations
Information 2019, 10(12), 393; https://doi.org/10.3390/info10120393 - 12 Dec 2019
Abstract
In a changing digital world, organisations need to be effective information processing entities, in which people, processes, and technology together gather, process, and deliver the information that the organisation needs. However, like other information processing entities, organisations are subject to the limitations of information evolution. These limitations are caused by the combinatorial challenges associated with information processing, and by the trade-offs and shortcuts driven by selection pressures. This paper applies the principles of information evolution to organisations and uses them to derive principles about organisation design and organisation change. This analysis shows that information evolution can illuminate some of the seemingly intractable difficulties of organisations, including the effects of organisational silos and the difficulty of organisational change. The derived principles align with and connect different strands of current organisational thinking. In addition, they provide a framework for creating analytical tools to create more detailed organisational insights. Full article
(This article belongs to the Section Information Theory and Methodology)
Open Access Article
A Genetic Algorithm-Based Approach for Composite Metamorphic Relations Construction
Information 2019, 10(12), 392; https://doi.org/10.3390/info10120392 - 10 Dec 2019
Abstract
The test oracle problem is widespread in modern complex software testing, and metamorphic testing (MT) has become a promising technique for alleviating it. Inferring effective metamorphic relations (MRs) is the core problem of metamorphic testing. Studies have shown that combining simple metamorphic relations can yield more effective ones. In most previous studies, metamorphic relations have been inferred manually by experts with professional knowledge, which is inefficient and hinders wider application. In this paper, a genetic algorithm-based approach is proposed to construct composite metamorphic relations automatically for the program under test. We use a set of relation sequences to represent a particular class of MRs and turn the problem of inferring composite MRs into one of searching for suitable sequences. We then dynamically execute the program multiple times and use a genetic algorithm to search for the optimal set of relation sequences. We conducted empirical studies to evaluate our approach using scientific functions in the GNU Scientific Library (GSL). The empirical results show that our approach can automatically infer high-quality composite MRs, on average five times more than basic MRs. More importantly, the inferred composite MRs increase fault detection capability by at least 30% compared with the original metamorphic relations. Full article
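
The idea of searching over relation sequences can be illustrated with a minimal sketch: basic MRs for a numeric function are encoded as (input transform, output transform) pairs, candidate composite MRs are sequences of these pairs, and a GA scores each sequence by how often the composed relation holds. All relations, the target function, and GA settings below are illustrative, not the paper's actual setup.

```python
import math, random

# Candidate basic metamorphic relations for math.sin, each as a pair
# (input transform, output transform): valid iff f(t_in(x)) == t_out(f(x)).
# "halve" is deliberately invalid, so the search must learn to avoid it.
BASIC_MRS = {
    "negate":  (lambda x: -x,              lambda y: -y),
    "shift":   (lambda x: x + 2 * math.pi, lambda y: y),
    "reflect": (lambda x: math.pi - x,     lambda y: y),
    "halve":   (lambda x: x / 2,           lambda y: y / 2),
}

def holds(seq, f, trials=200, tol=1e-9):
    """Fraction of random inputs on which the composite MR holds."""
    ok = 0
    for _ in range(trials):
        x = random.uniform(-10, 10)
        xi, yo = x, f(x)
        for name in seq:                     # compose the relation sequence
            t_in, t_out = BASIC_MRS[name]
            xi, yo = t_in(xi), t_out(yo)
        ok += abs(f(xi) - yo) <= tol
    return ok / trials

def ga_search(f, pop_size=30, seq_len=4, gens=20, mut_p=0.2):
    names = list(BASIC_MRS)
    pop = [[random.choice(names) for _ in range(seq_len)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda s: holds(s, f), reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, seq_len)       # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut_p:              # point mutation
                child[random.randrange(seq_len)] = random.choice(names)
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda s: holds(s, f))

print(ga_search(math.sin))   # a sequence avoiding the invalid "halve"
```
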
Open Access Article
Success Factors Importance Based on Software Project Organization Structure
Information 2019, 10(12), 391; https://doi.org/10.3390/info10120391 - 10 Dec 2019
Abstract
The main aim of this paper is to identify critical success factors (CSFs) and investigate whether they are the same across different project organization structures. The organization structures under study are functional, project, and matrix. The study is based on a survey conducted on a large number of software projects in Jordan. To rank success factors (SFs) and identify critical ones, we use the SF importance index, which is calculated from likelihood and impact across the different structures. For deeper analysis, we carry out statistical experiments with an ANOVA test and Spearman’s rank correlation test. The ANOVA results partially indicate that the values of the SF importance index differ slightly across the three organization structures. Moreover, the Spearman’s rank correlation results show a high degree of correlation of the SF importance index between the functional and project organization structures and a low degree of correlation between the functional and matrix organization structures. Full article
(This article belongs to the Section Information Systems)
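
The ranking procedure the abstract describes can be sketched in a few lines: compute an importance index from likelihood and impact, then correlate rankings across structures with Spearman's test. The data and the product-form index below are assumptions for illustration; the paper's exact index may differ.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical survey means: likelihood and impact of three success
# factors (rows) under two organization structures.
likelihood = {"functional": np.array([0.8, 0.5, 0.9]),
              "project":    np.array([0.7, 0.6, 0.8])}
impact     = {"functional": np.array([0.9, 0.4, 0.7]),
              "project":    np.array([0.8, 0.5, 0.9])}

# One common definition: importance = likelihood x impact.
importance = {s: likelihood[s] * impact[s] for s in likelihood}

rho, p = spearmanr(importance["functional"], importance["project"])
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```
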
Open Access Article
Choosing Mutation and Crossover Ratios for Genetic Algorithms—A Review with a New Dynamic Approach
Information 2019, 10(12), 390; https://doi.org/10.3390/info10120390 - 10 Dec 2019
Abstract
The genetic algorithm (GA) is an artificial intelligence search method that draws on evolution and the theory of natural selection, and falls under the umbrella of evolutionary computing. It is an efficient tool for solving optimization problems. The interplay among GA parameters is vital for a successful GA search. Such parameters include the mutation and crossover rates as well as the population size. However, each GA operator has a distinct influence, and the impact of these operators depends on their probabilities; it is difficult to predefine suitable ratios for each parameter, particularly for the mutation and crossover operators. This paper reviews various methods for choosing mutation and crossover ratios in GAs. We then define new deterministic control approaches for the crossover and mutation rates, namely dynamic decreasing of high mutation ratio/dynamic increasing of low crossover ratio (DHM/ILC) and dynamic increasing of low mutation/dynamic decreasing of high crossover (ILM/DHC). The dynamic nature of the proposed methods changes the ratios of both operators linearly during the search: DHM/ILC starts with a 100% mutation ratio and a 0% crossover ratio, after which the mutation ratio decreases and the crossover ratio increases until, by the end of the search, the ratios are 0% for mutation and 100% for crossover. ILM/DHC works the same way but in reverse. The proposed approaches were compared with two predefined parameter-tuning methods, namely fifty-fifty crossover/mutation ratios and the most common approach of static ratios, such as a 0.03 mutation rate and a 0.9 crossover rate. The experiments were conducted on ten Traveling Salesman Problems (TSPs). They showed the effectiveness of the proposed DHM/ILC when dealing with small population sizes, while ILM/DHC was more effective with large population sizes. In fact, both proposed dynamic methods outperformed the predefined methods in most cases tested. Full article
(This article belongs to the Section Artificial Intelligence)
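
As described, both schedules reduce to linear functions of search progress; a minimal sketch (generation count and usage are illustrative):

```python
def dhm_ilc(progress: float) -> tuple[float, float]:
    """DHM/ILC: mutation decreases linearly 1.0 -> 0.0 while crossover
    increases 0.0 -> 1.0 as search progress goes from 0 to 1."""
    return 1.0 - progress, progress

def ilm_dhc(progress: float) -> tuple[float, float]:
    """ILM/DHC: the mirror image; mutation rises, crossover falls."""
    return progress, 1.0 - progress

generations = 100
for g in range(generations):
    mut_rate, cx_rate = dhm_ilc(g / (generations - 1))
    # ... apply mutation with probability mut_rate, crossover with cx_rate ...
```
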
Open Access Article
A Comprehensive Evaluation of the Community Environment Adaptability for Elderly People Based on the Improved TOPSIS
Information 2019, 10(12), 389; https://doi.org/10.3390/info10120389 - 09 Dec 2019
Abstract
As the main way of providing care for elderly people, home-based old-age care places higher requirements on the environmental adaptability of the community. Five communities in Wuhu were selected for a comprehensive assessment of environmental suitability. To ensure a comprehensive and accurate assessment of each community's environmental adaptability, we used the analytic hierarchy process (AHP) to calculate the weight of each indicator and the technique for order preference by similarity to ideal solution (TOPSIS) to evaluate the adaptability of each community, with further analysis on a two-dimensional data-space map. The results show that the Weixing community is the most suitable for elderly residents and their outdoor activities. Full article
(This article belongs to the Section Artificial Intelligence)
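
A compact sketch of the standard AHP-weighted TOPSIS ranking step (the indicator scores and weights below are invented for illustration; the paper's improved variant may differ in detail):

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) on criteria (columns).
    benefit[j] is True if criterion j is better when larger."""
    m = matrix / np.linalg.norm(matrix, axis=0)     # vector normalization
    v = m * weights                                 # AHP-derived weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - worst, axis=1)
    return d_neg / (d_pos + d_neg)                  # closeness coefficient

# Hypothetical scores for five communities on three indicators.
scores = np.array([[7, 8, 6], [6, 9, 7], [8, 6, 8],
                   [5, 7, 9], [7, 7, 7]], dtype=float)
cc = topsis(scores, weights=np.array([0.5, 0.3, 0.2]),
            benefit=np.array([True, True, True]))
print(cc.argsort()[::-1])   # community indices, best first
```
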
Open Access Article
A Fuzzy Technique for On-Line Aggregation of POIs from Social Media: Definition and Comparison with Off-Line Random-Forest Classifiers
Information 2019, 10(12), 388; https://doi.org/10.3390/info10120388 - 07 Dec 2019
Abstract
Social media are an inexhaustible source of user-provided information concerning public places (also called points of interest (POIs)). Several social media own and publish huge, independently built corpora of data about public places that are not linked to each other. An aggregated view of information concerning the same public place could be extremely useful, but social media are not immutable sources, so the off-line approach adopted in all previous research works cannot provide up-to-date information in real time. In this work, we address the problem of on-line aggregation of geo-located descriptors of public places provided by social media. The on-line approach makes it impossible to adopt machine-learning (classification) techniques trained on previously gathered data sets. We overcome this problem with an approach based on fuzzy logic: we define a binary fuzzy relation whose on-line evaluation decides whether two public-place descriptors coming from different social media actually describe the same public place. We tested our technique on three data sets describing public places in Manchester (UK), Genoa (Italy), and Stuttgart (Germany); a comparison with the off-line classification technique known as “random forest” showed that our on-line technique obtains comparable results. Full article
(This article belongs to the Section Information Applications)
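
The shape of such a binary fuzzy relation can be sketched as follows; the membership functions, the product t-norm, and the threshold are assumptions for illustration, not the paper's definitions. Note that, unlike a classifier, nothing here needs training data, which is what makes on-line evaluation possible.

```python
import math
from difflib import SequenceMatcher

def name_sim(a: str, b: str) -> float:
    """Fuzzy name-similarity membership in [0, 1] (a simple stand-in
    for whatever text measure the paper uses)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def geo_sim(dist_m: float, scale: float = 150.0) -> float:
    """Fuzzy 'nearby' membership, decaying with distance in metres."""
    return math.exp(-dist_m / scale)

def same_poi(name_a: str, name_b: str, dist_m: float,
             threshold: float = 0.6) -> bool:
    """Binary fuzzy relation: memberships combined with the product
    t-norm, then defuzzified with a crisp threshold."""
    mu = name_sim(name_a, name_b) * geo_sim(dist_m)
    return mu >= threshold

print(same_poi("Cafe Roma", "Caffe Roma", dist_m=40.0))   # likely True
```
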
Open Access Article
A Robust Morpheme Sequence and Convolutional Neural Network-Based Uyghur and Kazakh Short Text Classification
Information 2019, 10(12), 387; https://doi.org/10.3390/info10120387 - 06 Dec 2019
Abstract
In this paper, based on the multilingual morphological analyzer, we researched the similar low-resource languages, Uyghur and Kazakh, short text classification. Generally, the online linguistic resources of these languages are noisy. So a preprocessing is necessary and can significantly improve the accuracy. Uyghur and Kazakh are the languages with derivational morphology, in which words are coined by stems concatenated with suffixes. Usually, terms are used as the representation of text content while excluding functional parts as stop words in these languages. By extracting stems we can collect necessary terms and exclude stop words. Morpheme segmentation tool can split text into morphemes with 95% high reliability. After preparing both word- and morpheme-based training text corpora, we apply convolutional neural network (CNN) as a feature selection and text classification algorithm to perform text classification tasks. Experimental results show that the morpheme-based approach outperformed the word-based approach. Word embedding technique is frequently used in text representation both in the framework of neural networks and as a value expression, and can map language units into a sequential vector space based on context, and it is a natural way to extract and predict out-of-vocabulary (OOV) from context information. Multilingual morphological analysis has provided a convenient way for processing tasks of low resource languages like Uyghur and Kazakh. Full article
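
The stem-extraction step described above amounts to keeping the first morpheme of each segmented token and filtering stop stems; a toy sketch (the example morphemes and stop list are illustrative only, not output of the paper's segmenter):

```python
# Toy stem extraction for a suffixing language: given morpheme-segmented
# tokens (stem followed by suffixes), keep the stem, drop suffixes and
# stop words.
def to_terms(segmented_tokens, stop_stems=frozenset({"bir"})):
    terms = []
    for morphemes in segmented_tokens:   # e.g. ["kitab", "lar", "im"]
        stem = morphemes[0]              # first morpheme is the stem
        if stem not in stop_stems:
            terms.append(stem)
    return terms

print(to_terms([["kitab", "lar", "im"], ["bir"]]))   # ['kitab']
```
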
Open Access Article
How Do eHMIs Affect Pedestrians’ Crossing Behavior? A Study Using a Head-Mounted Display Combined with a Motion Suit
Information 2019, 10(12), 386; https://doi.org/10.3390/info10120386 - 06 Dec 2019
Abstract
In future traffic, automated vehicles may be equipped with external human-machine interfaces (eHMIs) that can communicate with pedestrians. Previous research suggests that, during first encounters, pedestrians regard text-based eHMIs as clearer than light-based eHMIs. However, in much of the previous research, pedestrians were asked to imagine crossing the road and were unable or not allowed to actually do so. We investigated the effects of eHMIs on participants’ crossing behavior. Twenty-four participants were immersed in a virtual urban environment using a head-mounted display coupled to a motion-tracking suit. We manipulated the approaching vehicle’s behavior (yielding, nonyielding) and eHMI type (None, Text, Front Brake Lights). Participants could cross the road whenever they felt safe enough to do so. The results showed that forward walking velocities, as recorded at the pelvis, were on average higher when an eHMI was present compared to no eHMI when the vehicle yielded. In nonyielding conditions, participants refrained from crossing, as indicated by a slight forward and subsequent backward average pelvic motion. An analysis of participants’ thorax angle indicated rotation towards the approaching vehicles and subsequent rotation towards the crossing path. It is concluded that results obtained via a setup in which participants can actually cross the road are similar to results from survey studies, with eHMIs yielding a higher crossing intention compared to no eHMI. The motion suit makes it possible to investigate pedestrian behaviors related to bodily attention and hesitation. Full article
Open Access Article
A Method for Road Extraction from High-Resolution Remote Sensing Images Based on Multi-Kernel Learning
Information 2019, 10(12), 385; https://doi.org/10.3390/info10120385 - 06 Dec 2019
Abstract
Extracting roads from high-resolution remote sensing (HRRS) images is an economical and effective way to acquire road information, and it has become an important research topic with a wide range of applications. In this paper, we present a novel method for road extraction from HRRS images. Multi-kernel learning is first utilized to integrate the spectral, texture, and linear features of images and classify them into road and non-road groups. A precise extraction method for road elements is then designed, using road-shape indexes to automatically filter out non-road noise. A series of morphological operations is also carried out to smooth and repair the structure and shape of the road elements. Finally, based on prior knowledge and the topological features of roads, a set of penalty factors and a penalty function are constructed to connect road elements into a complete road network. Experiments were carried out with different sensors, resolutions, and scenes to verify the theoretical analysis. Quantitative results prove that the proposed method can optimize the weights of different features, eliminate non-road noise, effectively group road elements, and greatly improve the accuracy of road recognition. Full article
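
The multi-kernel step boils down to a weighted sum of per-feature-group kernels fed to a kernel classifier; a minimal sketch with random stand-in data (the feature dimensions, RBF kernels, and fixed weights are assumptions; the paper learns the weights):

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def combined_kernel(feats_a, feats_b, weights):
    """Weighted sum of per-feature-group RBF kernels (spectral, texture,
    and linear features would each be one group)."""
    return sum(w * rbf_kernel(fa, fb)
               for w, fa, fb in zip(weights, feats_a, feats_b))

# Hypothetical samples: three feature groups of different dimensions.
rng = np.random.default_rng(0)
groups = [rng.normal(size=(60, d)) for d in (4, 8, 2)]
y = rng.integers(0, 2, size=60)          # road / non-road labels

K = combined_kernel(groups, groups, weights=[0.5, 0.3, 0.2])
clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))
```
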
Open Access Article
Drivers of Mobile Payment Acceptance in China: An Empirical Investigation
Information 2019, 10(12), 384; https://doi.org/10.3390/info10120384 - 06 Dec 2019
Abstract
With the rapid development of mobile technologies in contemporary society, China has seen increased usage of the Internet and mobile devices. Thus, mobile payment is constantly being innovated and is highly valued in China. Although there have been many reports on the consumer adoption of mobile payments, there are few studies providing guidelines on examining mobile payment adoption in China. This study intends to explore the impact of the facilitating factors (perceived transaction convenience, compatibility, relative advantage, social influence), environmental factors (government support, additional value), inhibiting factors (perceived risk), and personal factors (absorptive capacity, affinity, personal innovation in IT (PIIT)) on adoption intention in China. A research model that reflects the characteristics of mobile payment services was developed and empirically tested by using structural equation modeling (SEM) on datasets consisting of 257 users through an online survey questionnaire in China. Our findings show that perceived transaction convenience, compatibility, relative advantage, government support, additional value, absorptive capacity, affinity, and PIIT all have a positive impact on adoption intention, while social influence has no significant impact on adoption intention, and perceived risk has a negative impact on adoption intention. In addition, the top three factors that influence adoption intentions are absorptive capacity, perceived transaction convenience, and additional value. Full article
(This article belongs to the Section Information Applications)
Open Access Article
Container Terminal Logistics Generalized Computing Architecture and Green Initiative Computational Pattern Performance Evaluation
Information 2019, 10(12), 383; https://doi.org/10.3390/info10120383 - 05 Dec 2019
Abstract
Container terminals are typical of complex supply chain logistics hubs with multiple compound attributes and coupling constraints, and their operations are strongly characterized by dynamicity, nonlinearity, coupling, and complexity (DNCC). From the perspective of computational logistics, we propose the container terminal logistics generalized computing architecture (CTL-GCA) by migrating, integrating, and fusing the abstraction hierarchy, design philosophy, execution mechanisms, and automation principles of computer organization, computing architecture, and operating systems. The CTL-GCA is intended to provide problem-oriented exploration and exploitation frameworks for the abstraction, automation, and analysis of green production at container terminals, and to help construct, evaluate, and improve solutions to the planning, scheduling, and decision problems at container terminals, all of which are NP-hard. Subsequently, a logistics generalized computational pattern recognition and performance evaluation of a practical container terminal service case study is carried out qualitatively and quantitatively from the sustainability perspective of green production. The case study demonstrates a preliminary application of the CTL-GCA and identifies unsustainable production patterns at the container terminal. We draw two conclusions. First, the CTL-GCA defines an abstract, automatic running architecture of logistics generalized computation for container terminals (LGC-CT), which provides an original framework for designing and implementing control and decision mechanisms and algorithms. Second, the CTL-GCA helps us investigate the roots of DNCC thoroughly and thereby supports efficient and sustainable running-pattern recognition of the LGC-CT; it should provide favorable guidance for defining, designing, and implementing agile, efficient, sustainable, and robust task scheduling and resource allocation for container terminals through computational logistics, at both the strategic and tactical levels. Full article
Open Access Article
Distance-To-Mean Continuous Conditional Random Fields: Case Study in Traffic Congestion
Information 2019, 10(12), 382; https://doi.org/10.3390/info10120382 - 04 Dec 2019
Abstract
Traffic prediction techniques are classified as parametric, non-parametric, or a combination of the two. The extreme learning machine (ELM) is a non-parametric technique commonly used for traffic prediction problems. In this study, a modified probabilistic approach, continuous conditional random fields (CCRF), is proposed, implemented on top of the ELM, and then used to assess highway traffic data. The modification is designed to improve the performance of non-parametric techniques, in this case the ELM. The proposed method is called distance-to-mean continuous conditional random fields (DM-CCRF). The experimental results show that the proposed technique suppresses the prediction error of the model compared to the standard CCRF. We compare ELM as a baseline regressor, the standard CCRF, and the modified CCRF, evaluating the techniques by their mean absolute percentage error (MAPE). DM-CCRF is able to reduce the prediction error to ~17.047%, about twice as good as that of the standard CCRF method. Given the attributes of the dataset, the DM-CCRF method is better suited to highway traffic prediction than the standard CCRF method and the baseline regressor. Full article
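
For reference, the evaluation metric used to compare the three models is straightforward to compute; the traffic volumes below are invented for illustration:

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, as used to compare the baseline
    ELM, the standard CCRF, and DM-CCRF."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Hypothetical hourly traffic volumes and model predictions.
actual    = [820, 940, 1105, 990]
predicted = [790, 1010, 1042, 955]
print(f"MAPE = {mape(actual, predicted):.3f}%")
```
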
Open Access Article
Cooperative Smartphone Relay Selection Based on Fair Power Utilization for Network Coverage Extension
Information 2019, 10(12), 381; https://doi.org/10.3390/info10120381 - 03 Dec 2019
Abstract
This paper presents a relay selection algorithm based on fair battery-power utilization for extending mobile network coverage and capacity through a cooperative communication strategy in which mobile devices serve as relays. Cooperation improves network performance for mobile terminals, either by providing access to out-of-range devices or by facilitating multi-path network access for connected devices. In this work, we assume that all mobile devices can benefit from using other mobile devices as relays and investigate the fairness of relay selection algorithms. We point out that signal-strength-based relay selection inevitably leads to unfair relay selection, and we devise a new algorithm based on fair utilization of the power resources of mobile devices. We call this algorithm credit-based fair relay selection (CF-RS) and show through simulation that it results in fair battery-power utilization while providing data rates similar to traditional approaches. We then extend the solution to demonstrate that adding incentives for relay operation adds clear value for mobile devices when they themselves require relay service. Typically, mobile devices represent self-interested users who are reluctant to cooperate with other network users, mainly because of the cost in terms of power and network capacity. We therefore present an incentive-based solution that provides clear mutual benefit for mobile devices and demonstrate this benefit in simulations of symmetric and asymmetric network topologies. The CF-RS algorithm achieves the same performance in terms of achievable data rate, Jain’s fairness index, and end-device utility in both symmetric and asymmetric network configurations. Full article
(This article belongs to the Special Issue Emerging Topics in Wireless Communications for Future Smart Cities)
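
A minimal sketch of the credit-based selection idea named in the abstract; the battery cutoff and the credit update rule are assumptions for illustration:

```python
def select_relay(candidates):
    """Pick the eligible candidate with the most accumulated credit,
    rather than simply the one with the best signal strength."""
    eligible = [c for c in candidates if c["battery"] > 0.2]
    return max(eligible, key=lambda c: c["credit"]) if eligible else None

def settle(relay, served_time, rate=1.0):
    """Relays earn credit for serving; they spend it when they later
    request relay service themselves (the incentive mechanism)."""
    relay["credit"] += rate * served_time

nodes = [{"id": "A", "credit": 3.0, "battery": 0.9},
         {"id": "B", "credit": 5.0, "battery": 0.6},
         {"id": "C", "credit": 9.0, "battery": 0.1}]   # low battery: skipped
r = select_relay(nodes)
settle(r, served_time=2.0)
print(r["id"], r["credit"])   # B 7.0
```
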
Open Access Article
Boosting Customer E-Loyalty: An Extended Scale of Online Service Quality
Information 2019, 10(12), 380; https://doi.org/10.3390/info10120380 - 03 Dec 2019
Abstract
Customer trust, satisfaction, and loyalty with regard to the provision of e-commerce services are expected to be critical factors in assessing the success of online businesses. Service quality and high-quality product settings are closely linked to these factors. However, despite the rapid advancement of e-commerce applications, especially in the business-to-consumer (B2C) context, prior research has confirmed that e-retailers face difficulties in maintaining customer loyalty. Several e-service quality frameworks have been employed to boost service quality by targeting customer loyalty; among the prominent ones is the scale of online etail quality (eTailQ). This scale has been criticized because it was developed before the emergence of Web 2.0 technologies. Consequently, this paper aims to fill this gap by offering empirically tested and conceptually derived measurement model specifications for an extended eTailQ scale. In addition, it investigates the potential effects of the extended scale on e-trust and e-satisfaction, and subsequently on e-loyalty. The practical and theoretical implications are highlighted to help businesses design effective quality-based strategies for enhanced customer loyalty, and to direct future research in the field of e-commerce. Full article
Open Access Article
A Mapping Approach to Identify Player Types for Game Recommendations
Information 2019, 10(12), 379; https://doi.org/10.3390/info10120379 - 02 Dec 2019
Abstract
As the size of the domestic and international gaming industry gradually grows, various games are undergoing rapid development cycles to compete in the current market. However, selecting and recommending suitable games for users continues to be a challenging problem. Although game recommendation systems based on the prior gaming experience of users exist, they are limited owing to the cold start problem. Unlike existing approaches, the current study addressed existing problems by identifying the personality of the user through a personality diagnostic test and mapping the personality to the player type. In addition, an Android app-based prototype was developed that recommends games by mapping tag information about the user's personality and the game. A set of user experiments were conducted to verify the feasibility of the proposed mapping model and the recommendation prototype. Full article
(This article belongs to the Special Issue Advances in Knowledge Graph and Data Science)
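
The mapping pipeline the abstract describes (personality test result, to player type, to game tags) can be sketched with toy tables; every entry below is illustrative, not the paper's actual mapping:

```python
PERSONALITY_TO_PLAYER = {"INTJ": "achiever", "ESFP": "socializer"}
PLAYER_TO_TAGS = {"achiever": {"puzzle", "strategy"},
                  "socializer": {"party", "co-op"}}
GAMES = {"ChessMaster": {"strategy"}, "PartyPack": {"party", "co-op"}}

def recommend(personality: str):
    """Map personality -> player type -> preferred tags, then rank games
    by tag overlap. Needs no prior play history, sidestepping cold start."""
    tags = PLAYER_TO_TAGS[PERSONALITY_TO_PLAYER[personality]]
    return sorted(GAMES, key=lambda g: len(GAMES[g] & tags), reverse=True)

print(recommend("ESFP"))   # ['PartyPack', 'ChessMaster']
```
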
Open Access Article
A Computational Study on Fairness of the Tendermint Blockchain Protocol
Information 2019, 10(12), 378; https://doi.org/10.3390/info10120378 - 30 Nov 2019
Abstract
Fairness is a crucial property for blockchain systems, since it affects participation: those who find the system fair tend to stay or join, while those who find it unfair tend to leave. While the current literature mainly focuses on fairness for Bitcoin-like blockchains, little has been done to analyze Tendermint. Tendermint is a blockchain technology that uses a committee-based consensus algorithm, which finds agreement among a set of block creators (called validators) even if some are malicious. Validators are regularly selected for the committee based on their investments. When a validator does not have enough assets to invest, it can increase them with the help of participants who delegate their assets to validators (called delegators). In this paper, we implement the default Tendermint model and a Tendermint model for fairness in a multi-agent blockchain simulator, where participants are modeled as rational agents who enter or leave the system based on their utility values. We conducted experiments for both models in which agents have different investment strategies and with various numbers of delegators. In light of our experimental evaluation, we observed that while for both models the fairness decreases and the system shrinks in the absence of delegators, for the second model the fairness increases and the system expands in the presence of delegators. Full article
(This article belongs to the Special Issue Blockchain Technologies for Multi-Agent Systems)
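
A sketch of stake-proportional committee selection, the mechanism at the heart of such simulations; real Tendermint uses a deterministic round-robin weighted by voting power, which weighted sampling only approximates here, and all stakes are illustrative:

```python
import random

validators = {"v1": 40.0, "v2": 35.0, "v3": 25.0}   # own + delegated stake

def select_committee(stakes, k=2, seed=None):
    """Sample k distinct validators with probability proportional to
    their stake (without replacement)."""
    rng = random.Random(seed)
    pool, chosen = dict(stakes), []
    for _ in range(k):
        names, weights = zip(*pool.items())
        pick = rng.choices(names, weights=weights)[0]
        chosen.append(pick)
        del pool[pick]
    return chosen

print(select_committee(validators, k=2, seed=42))
```
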
Open Access Article
Adaptive Inverse Controller Design Based on the Fuzzy C-Regression Model (FCRM) and Back Propagation (BP) Algorithm
Information 2019, 10(12), 377; https://doi.org/10.3390/info10120377 - 29 Nov 2019
Abstract
Establishing an accurate inverse model is a key problem in the design of adaptive inverse controllers. Most real objects have nonlinear characteristics, so a mathematical expression of the inverse model cannot be obtained in most situations. A Takagi–Sugeno (T-S) fuzzy model can approximate real objects with high precision and is often applied in the modeling of nonlinear systems. Since the consequent parameters of T-S fuzzy models are linear expressions, this paper first uses a fuzzy c-regression model (FCRM) clustering algorithm to establish the inverse fuzzy model. Because the least mean square (LMS) algorithm adjusts only the consequent parameters of the T-S fuzzy model, the premise parameters would remain fixed during adjustment. In this paper, the back-propagation (BP) algorithm is therefore applied to adjust the premise and consequent parameters of the T-S fuzzy model simultaneously online. The simulation results show that the error between the system output under the proposed adaptive inverse controller and the desired output is smaller, and that system stability is maintained when the system output is disturbed. Full article
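
The key point, adjusting premise and consequent parameters together by back-propagation, can be shown in a didactic single-input sketch with Gaussian premises and first-order consequents; the rule count, learning rate, and target function are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
R = 3
c, s = rng.normal(0, 1, R), np.ones(R)      # premise centres / widths
a, b = rng.normal(0, 1, R), np.zeros(R)     # consequent slopes / offsets

def forward(x):
    w = np.exp(-((x - c) ** 2) / (2 * s ** 2))   # rule firing strengths
    wn = w / w.sum()                             # normalized strengths
    f = a * x + b                                # rule consequents
    return wn @ f, w, wn, f

def bp_step(x, y_t, lr=0.05):
    """One gradient step on squared error, updating BOTH parameter sets."""
    global a, b, c, s
    y, w, wn, f = forward(x)
    e = y - y_t
    a -= lr * e * wn * x                         # consequent gradients
    b -= lr * e * wn
    dy_dw = (f - y) / w.sum()                    # dy/dw_i for premises
    c -= lr * e * dy_dw * w * (x - c) / s ** 2
    s -= lr * e * dy_dw * w * (x - c) ** 2 / s ** 3

for _ in range(2000):                            # fit y = sin(x) online
    x = rng.uniform(-3, 3)
    bp_step(x, np.sin(x))
print(forward(1.5)[0], np.sin(1.5))
```
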
Open Access Article
A Global Extraction Method of High Repeatability on Discretized Scale-Space Representations
Information 2019, 10(12), 376; https://doi.org/10.3390/info10120376 - 28 Nov 2019
Abstract
This paper presents a novel method to extract local features that computes global maxima in a discretized scale-space representation instead of calculating local extrema. To avoid interpolating scales over few data points and to achieve perfect rotation invariance, the method adopts two essential techniques: increasing kernel widths in whole pixels and using disk-shaped convolution templates. Since convolution templates are of finite size and finite templates introduce computational error into the convolution, we discuss this problem thoroughly and derive an upper bound on the computational error. The upper bound is used in the method to ensure that all extracted features are computed within a given tolerance. In addition, a relative threshold for determining features is adopted to reinforce robustness under changing illumination. Simulations show that the new method attains high repeatability in various situations, including scale change, rotation, blur, JPEG compression, illumination change, and even viewpoint change. Full article
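
The two techniques named in the abstract, integer kernel widths and disk-shaped templates, can be sketched as follows; the disk-mean response and its global argmax per scale stand in for the paper's actual response function:

```python
import numpy as np
from scipy.ndimage import convolve

def disk(radius):
    """Disk-shaped averaging template (rotation-invariant support)."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    mask = (x * x + y * y) <= radius * radius
    return mask / mask.sum()

def global_maxima_per_scale(img, radii=(2, 4, 8, 16)):
    """For each integer kernel width, convolve with a disk template and
    return the location of the global response maximum at that scale."""
    feats = []
    for r in radii:
        resp = convolve(img.astype(float), disk(r), mode="reflect")
        feats.append((r, np.unravel_index(resp.argmax(), resp.shape)))
    return feats

img = np.zeros((64, 64)); img[20:28, 30:38] = 1.0   # a bright blob
print(global_maxima_per_scale(img))
```
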
Open Access Article
Facial Expression Recognition Based on Random Forest and Convolutional Neural Network
Information 2019, 10(12), 375; https://doi.org/10.3390/info10120375 - 28 Nov 2019
Abstract
As an important part of emotion research, facial expression recognition is a key requirement in human–machine interfaces. Generally, a facial expression recognition system includes face detection, feature extraction, and feature classification. Although traditional machine learning methods have achieved great success, most involve complex computation and lack the ability to extract comprehensive and abstract features. Deep learning-based methods can achieve higher recognition rates for facial expressions, but they need large numbers of training samples and tuning parameters, and their hardware requirements are very high. To address these problems, this paper proposes a method that combines features extracted by a convolutional neural network (CNN) with a C4.5 classifier to recognize facial expressions, which can not only address the incompleteness of handcrafted features but also avoid the high hardware requirements of deep learning models. Considering the overfitting and weak generalization ability of a single classifier, a random forest is applied. This paper also makes some improvements to the C4.5 classifier and the traditional random forest in the course of the experiments. A large number of experiments have proved the effectiveness and feasibility of the proposed method. Full article
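
The hybrid idea, a CNN as a fixed feature extractor feeding a forest classifier, can be sketched as below. The paper trains its own CNN and an improved C4.5-based forest; the pretrained ResNet-18, the random face crops, and the labels here are stand-ins for illustration:

```python
import torch
import torchvision.models as models
from sklearn.ensemble import RandomForestClassifier

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()          # drop the classification head
backbone.eval()

def cnn_features(batch):
    """batch: (N, 3, 224, 224) float tensor -> (N, 512) feature matrix."""
    with torch.no_grad():
        return backbone(batch).numpy()

# Hypothetical face crops and expression labels (4 classes).
faces = torch.rand(16, 3, 224, 224)
labels = [i % 4 for i in range(16)]

clf = RandomForestClassifier(n_estimators=100)
clf.fit(cnn_features(faces), labels)
print(clf.score(cnn_features(faces), labels))
```
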
Open Access Article
Text and Data Quality Mining in CRIS
Information 2019, 10(12), 374; https://doi.org/10.3390/info10120374 - 28 Nov 2019
Abstract
Scientific institutions that have comprehensive and well-maintained documentation of their research information in a current research information system (CRIS) have the best prerequisites for implementing text and data mining (TDM) methods. Using TDM helps to better identify and eliminate errors, improve processes, develop the business, and make informed decisions. In addition, TDM increases understanding of the data and its context. This improves not only the quality of the data itself but also the institution’s handling of the data and, consequently, the analyses. This paper deploys TDM in a CRIS to analyze, quantify, and correct unstructured data and its quality issues. Bad data leads to increased costs or wrong decisions. Ensuring high data quality is an essential requirement when setting up a CRIS project, and user acceptance of a CRIS depends, among other things, on data quality: the decisive criterion is not only the objective data quality but also the subjective quality that individual users assign to the data. Full article
(This article belongs to the Special Issue Quality of Open Data)
Open Access Article
Adopting Augmented Reality to Engage Higher Education Students in a Museum University Collection: The Experience at Roma Tre University
Information 2019, 10(12), 373; https://doi.org/10.3390/info10120373 - 28 Nov 2019
Abstract
University museums are powerful resource centres in higher education, and the adoption of digital technologies can support personalised learning experiences within them. The present contribution reports a case study carried out at the Department of Educational Sciences at Roma Tre University with a group of 14 master’s degree students. The students took part in a 2-h workshop in which they were invited to test augmented reality technology through a web app for Android. At the end of the visit, participants were asked to fill in a questionnaire with both open-ended and closed-ended questions investigating their ideas on the exhibition and their level of critical thinking. Students appreciated the exhibition, especially its multimodality, and most of the frequent themes identified in the open-ended answers relate to critical and visual thinking. Despite the positive overall evaluation, there is still room for improvement, both in the technology and in the educational design. Full article
Open Access Feature Paper Article
The Capacity of Private Information Retrieval from Decentralized Uncoded Caching Databases
Information 2019, 10(12), 372; https://doi.org/10.3390/info10120372 - 28 Nov 2019
Abstract
We consider the private information retrieval (PIR) problem from decentralized uncoded caching databases. There are two phases in our problem setting, a caching phase and a retrieval phase. In the caching phase, the system contains a data center holding all K files, each of size L bits, and several databases with a storage constraint of μKL bits. Each database independently chooses μKL bits out of the total KL bits from the data center to cache, through the same probability distribution, in a decentralized manner. In the retrieval phase, a user (retriever) accesses N databases in addition to the data center and wishes to retrieve a desired file privately. We characterize the optimal normalized download cost as
$$D^{*} = \sum_{n=1}^{N+1} \binom{N}{n-1} \mu^{n-1} (1-\mu)^{N+1-n} \left( 1 + \frac{1}{n} + \cdots + \frac{1}{n^{K-1}} \right).$$
We show that the uniform and random caching scheme originally proposed for decentralized coded caching by Maddah-Ali and Niesen, together with the Sun and Jafar retrieval scheme originally proposed for PIR from replicated databases, surprisingly results in the lowest normalized download cost. This is the decentralized counterpart of the recent result of Attia, Kumar, and Tandon for the centralized case. The converse proof contains several ingredients, such as an interference lower bound, an induction lemma, replacing query and answer string random variables with the contents of the distributed databases, the nature of decentralized uncoded caching databases, and bit marginalization of joint caching distributions. Full article
(This article belongs to the Special Issue Private Information Retrieval: Techniques and Applications)
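
The closed-form cost above is a binomial average, over the number n of nodes holding a desired bit, of the replicated-database PIR cost 1 + 1/n + ... + 1/n^(K-1); it evaluates directly:

```python
from math import comb

def pir_download_cost(mu: float, N: int, K: int) -> float:
    """Optimal normalized download cost D* from the abstract's formula."""
    total = 0.0
    for n in range(1, N + 2):
        weight = comb(N, n - 1) * mu ** (n - 1) * (1 - mu) ** (N + 1 - n)
        total += weight * sum(n ** -k for k in range(K))
    return total

print(pir_download_cost(mu=0.5, N=2, K=2))
```
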
Open Access Article
Design Thinking: Challenges for Software Requirements Elicitation
Information 2019, 10(12), 371; https://doi.org/10.3390/info10120371 - 28 Nov 2019
Abstract
Agile methods fit software development teams well in requirements elicitation activities, but they have brought challenges to organizations adopting existing traditional methods as well as new ones. Design Thinking has been used as a requirements elicitation technique and as an immersion in the process areas, bringing the client closer to the software project team and enabling the creation of better projects. Using data triangulation, this paper presents a literature review that collected the challenges of software requirements elicitation under agile methodologies and of the use of Design Thinking. The review led to a case study in a Brazilian public organization project, via a 20-item user workshop questionnaire applied during the study, to identify the practice of Design Thinking in this context. We propose an overview of 13 studied challenges, of which eight presented strong evidence of contribution (stakeholder involvement, requirements definition and validation, schedule, planning, requirement details and prioritization, and interdependence), three presented partial evidence of contribution, and two were not eligible for conclusions (non-functional requirements, use of artifacts, and change of requirements). The main output of this work is an analysis of whether Design Thinking fits properly as a means of addressing the challenges of software requirements elicitation when using agile methods. Full article
Open Access Article
An Optimization Model for Demand-Responsive Feeder Transit Services Based on Ride-Sharing Car
Information 2019, 10(12), 370; https://doi.org/10.3390/info10120370 - 26 Nov 2019
Abstract
Ride-sharing (RS) plays an important role in saving energy and alleviating traffic pressure, yet the vehicles in demand-responsive feeder transit services (DRT) are generally not ride-sharing cars. We therefore propose an optimal DRT model based on ride-sharing cars, which assigns a set of vehicles, starting at origin locations and ending at destination locations within their service time windows, to transport passengers from all demand points to a transportation hub (i.e., railway, metro, airport, etc.). The proposed model integrates pedestrian guidance (from unvisited demand points to visited ones) and transit routing (from visited points to the transportation hub). The objective is to simultaneously minimize weighted passenger walking and riding time. A two-stage heuristic algorithm based on a genetic algorithm (GA) is adopted to solve the problem. The methodology was tested with a case study in Chongqing, China. The results show that the model can select optimal pick-up locations and determine the best pedestrian and route plan. Validation and analysis were also carried out to assess the effect of the maximum walking distance and the number of shared cars on model performance, and the quality gap between the heuristic and the optimal solution was also compared. Full article
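
The objective the GA optimizes can be sketched as a weighted sum of walking and riding times; the weights, times, and assignment below are invented for illustration, and the real model adds routing and time-window constraints:

```python
def total_cost(assignment, walk_t, ride_t, w_walk=2.0, w_ride=1.0):
    """assignment maps each demand point to its chosen pick-up point;
    each passenger walks to the pick-up point, then rides to the hub."""
    walking = sum(walk_t[d][p] for d, p in assignment.items())
    riding = sum(ride_t[p] for p in assignment.values())
    return w_walk * walking + w_ride * riding

walk_t = {"d1": {"p1": 3, "p2": 8}, "d2": {"p1": 6, "p2": 2}}
ride_t = {"p1": 15, "p2": 18}
print(total_cost({"d1": "p1", "d2": "p1"}, walk_t, ride_t))  # 2*9 + 30 = 48
```
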
Open Access Article
Some Similarity Measures for Interval-Valued Picture Fuzzy Sets and Their Applications in Decision Making
Information 2019, 10(12), 369; https://doi.org/10.3390/info10120369 - 25 Nov 2019
Abstract
Similarity measures, distance measures, and entropy measures are common tools applied to interesting real-life phenomena, including pattern recognition, decision making, medical diagnosis, and clustering. Interval-valued picture fuzzy sets (IVPFSs) are effective and useful for describing fuzzy information. This manuscript therefore aims to develop similarity measures for IVPFSs, given the significance of describing the membership grades of a picture fuzzy set in terms of intervals. Several types of cosine similarity measures, cotangent similarity measures, set-theoretic and grey similarity measures, four types of dice similarity measures, and generalized dice similarity measures are developed. All the developed similarity measures are validated, and their properties are demonstrated. Two well-known problems, a mineral-field recognition problem and a multi-attribute decision making problem, are solved using the newly developed similarity measures. The superiority of the developed similarity measures over those for picture fuzzy sets, interval-valued intuitionistic fuzzy sets, and intuitionistic fuzzy sets is demonstrated through comparison and numerical examples. Full article
(This article belongs to the Special Issue Big Data Analytics and Computational Intelligence)
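
To make the data structure concrete: each element of an IVPFS carries interval-valued positive, neutral, and negative membership grades. The cosine-type measure sketched below (element-wise cosine over the six interval bounds, averaged over the universe) is one generic variant from this literature; the paper defines several more, and the sample sets are invented:

```python
import numpy as np

def ivpfs_cosine(A, B):
    """Cosine-type similarity between two IVPFSs. Each element is a 3x2
    array of [lower, upper] bounds for the positive, neutral, and
    negative membership grades."""
    sims = []
    for a, b in zip(A, B):
        a, b = a.ravel(), b.ravel()
        sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return float(np.mean(sims))

A = [np.array([[0.4, 0.5], [0.2, 0.3], [0.1, 0.2]]),
     np.array([[0.6, 0.7], [0.1, 0.2], [0.0, 0.1]])]
B = [np.array([[0.3, 0.5], [0.2, 0.4], [0.1, 0.1]]),
     np.array([[0.5, 0.8], [0.1, 0.1], [0.1, 0.1]])]
print(ivpfs_cosine(A, B))
```
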
Open Access Article
Decision Diagram Algorithms to Extract Minimal Cutsets of Finite Degradation Models
Information 2019, 10(12), 368; https://doi.org/10.3390/info10120368 - 25 Nov 2019
Abstract
In this article, we propose decision diagram algorithms to extract minimal cutsets of finite degradation models. Finite degradation models generalize and unify the combinatorial models used to support probabilistic risk, reliability, and safety analyses (fault trees, attack trees, reliability block diagrams, and so on). They formalize a key idea underlying all risk assessment methods: the states of the models represent levels of degradation of the system under study. Although these states cannot be totally ordered, they have a rich algebraic structure that can be exploited to extract minimal cutsets, which represent the most relevant failure scenarios. The notion of minimal cutsets introduced here generalizes the one defined for fault trees. We show how the algorithms used to calculate minimal cutsets can be lifted to finite degradation models, thanks to a generic decomposition theorem and an extension of binary decision diagram technology. We discuss implementation and performance issues. Finally, we illustrate the interest of the proposed technology by means of a use case from the oil and gas industry. Full article
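
For readers unfamiliar with minimal cutsets, the fault-tree baseline that the paper generalizes can be sketched with plain set manipulation; this direct enumeration is a didactic stand-in for the decision-diagram algorithms, which scale far better:

```python
from itertools import product

def cutsets(node):
    """node is a basic-event name (str) or a gate ('AND'|'OR', children).
    Returns the minimal cutsets as a list of frozensets."""
    if isinstance(node, str):
        return [frozenset([node])]
    op, kids = node
    kid_sets = [cutsets(k) for k in kids]
    if op == "OR":
        combined = {cs for sets in kid_sets for cs in sets}
    else:  # AND: union one cutset from every child
        combined = {frozenset().union(*pick) for pick in product(*kid_sets)}
    # Keep only minimal cutsets: drop proper supersets of another cutset.
    return [c for c in combined if not any(o < c for o in combined)]

tree = ("OR", [("AND", ["pump_fail", "valve_fail"]), "power_loss"])
print(cutsets(tree))   # [{'pump_fail', 'valve_fail'}, {'power_loss'}]
```
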
Open Access Article
Identification of Insider Trading Using Extreme Gradient Boosting and Multi-Objective Optimization
Information 2019, 10(12), 367; https://doi.org/10.3390/info10120367 - 25 Nov 2019
Abstract
Illegal insider trading identification is a challenging task that attracts great interest from researchers because of the serious harm insider trading does to investor confidence and the sustainable development of security markets. In this study, we propose an identification approach that integrates XGBoost (eXtreme Gradient Boosting) and NSGA-II (Non-dominated Sorting Genetic Algorithm II) for insider trading regulation. First, insider trading cases that occurred in the Chinese security market were automatically collected, and their relevant indicators were calculated. The proposed method then trains the XGBoost model and employs NSGA-II to optimize the parameters of XGBoost with multiple objective functions. Finally, the testing samples are identified using XGBoost with the optimized parameters. Performance was empirically measured by both identification accuracy and efficiency over multiple time window lengths. The experimental results show that the proposed approach achieves the best accuracy with a time window of 90 days, demonstrating that relevant features calculated within a 90-day window can be extremely beneficial for insider trading regulation. Additionally, the proposed approach outperformed all benchmark methods in terms of both identification accuracy and efficiency, indicating that it can serve as an alternative approach for insider trading regulation in the Chinese security market. The proposed approach and results are of great significance for market regulators seeking to improve the efficiency and accuracy of their supervision of illegal insider trading. Full article
(This article belongs to the Section Information Applications)
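
The two-objective tuning loop can be sketched compactly. Random candidates plus a Pareto filter stand in here for NSGA-II proper (the paper evolves parameters with the real algorithm), the toy data replaces the indicator features, and the objectives (classification error, training time) are this sketch's reading of accuracy and efficiency:

```python
import time, random
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))
y = rng.integers(0, 2, size=300)

def evaluate(params):
    """Return the two objectives to minimize: error and fit time."""
    t0 = time.perf_counter()
    acc = cross_val_score(XGBClassifier(**params), X, y, cv=3).mean()
    return 1.0 - acc, time.perf_counter() - t0

cands = [{"max_depth": random.randint(2, 8),
          "learning_rate": random.uniform(0.01, 0.3),
          "n_estimators": random.randint(50, 300)} for _ in range(12)]
scored = [(evaluate(p), p) for p in cands]
# Keep the non-dominated candidates (the Pareto front).
front = [(f, p) for f, p in scored
         if not any(g[0] <= f[0] and g[1] <= f[1] and g != f
                    for g, _ in scored)]
print(front)
```
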
Open Access Article
UXmood—A Sentiment Analysis and Information Visualization Tool to Support the Evaluation of Usability and User Experience
Information 2019, 10(12), 366; https://doi.org/10.3390/info10120366 - 25 Nov 2019
Abstract
This paper presents UXmood, a tool that provides quantitative and qualitative information to assist researchers and practitioners in the evaluation of user experience and usability. The tool combines data from video, audio, interaction logs, and eye trackers, presenting them in a configurable web dashboard. UXmood works analogously to a media player, in which evaluators can review the entire user interaction process, fast-forwarding irrelevant sections and rewinding specific interactions to replay them if necessary. In addition, sentiment analysis techniques are applied to video, audio, and transcribed text to obtain insights into the user experience of participants. The main motivations for developing UXmood are to support joint analysis of usability and user experience, to use sentiment analysis to support qualitative analysis, to synchronize different types of data in the same dashboard, and to allow the analysis of user interactions from any device with a web browser. We conducted a user study to assess the communication efficiency of the visualizations, which provided insights on how to improve the dashboard. Full article
Open Access Article
A Novel Low Processing Time System for Criminal Activities Detection Applied to Command and Control Citizen Security Centers
Information 2019, 10(12), 365; https://doi.org/10.3390/info10120365 - 24 Nov 2019
Abstract
This paper presents a novel low-processing-time system for detecting criminal activities through real-time video analysis, applied to command and control citizen security centers. The system was applied to the detection and classification of criminal events in the real-time video surveillance subsystem of the Command and Control Citizen Security Center of the Colombian National Police. It was developed using a novel application of deep learning, specifically a Faster Region-Based Convolutional Network (Faster R-CNN), to detect criminal activities treated as “objects” in real-time video. To maximize system efficiency and reduce the processing time of each video frame, the pretrained CNN (convolutional neural network) model AlexNet was used, and fine-tuning was carried out with a dataset built for this project, consisting of objects commonly used in criminal activities such as short firearms and bladed weapons. In addition, the system was trained for street theft detection. The system can generate alarms when detecting street theft, short firearms, and bladed weapons, improving situational awareness and facilitating strategic decision making in the Command and Control Citizen Security Center of the Colombian National Police. Full article
(This article belongs to the Special Issue Advanced Topics in Systems Safety and Security)
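
The frame-by-frame detection loop can be sketched as follows; the COCO-pretrained ResNet-50 FPN model from torchvision is only a convenient stand-in for the paper's AlexNet-backed network fine-tuned on weapon classes, and the random frame replaces real video input:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def detect(frame, score_thr=0.8):
    """frame: (3, H, W) float tensor in [0, 1]; returns confident
    detections that would feed the alarm logic."""
    with torch.no_grad():
        out = model([frame])[0]
    keep = out["scores"] > score_thr
    return out["boxes"][keep], out["labels"][keep]

boxes, labels = detect(torch.rand(3, 480, 640))
print(len(boxes), "detections above threshold")
```
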
Open Access Article
A Study on Trend Analysis of Applicants Based on Patent Classification Systems
Information 2019, 10(12), 364; https://doi.org/10.3390/info10120364 - 23 Nov 2019
Abstract
In recent times, with the development of science and technology, new technologies have been rapidly emerging, and innovators are making efforts to acquire intellectual property rights to preserve their competitive advantage as well as to enhance innovative competitiveness. As a result, the number of patents being acquired increases exponentially every year, and the social and economic ripple effects of developed technologies are also increasing. Now, innovators are focusing on evaluating existing technologies to develop more valuable ones. However, existing patent analysis studies mainly focus on discovering core technologies amongst the technologies derived from patents or analyzing trend changes for specific techniques; the analysis of innovators who develop such core technologies is insufficient. In this paper, we propose a model for analyzing the technical inventions of applicants based on patent classification systems such as international patent classification (IPC) and cooperative patent classification (CPC). Through the proposed model, the common invention patterns of applicants are extracted and used to analyze their technical inventions. The proposed model shows that patent classification systems can be used to extract the trends in applicants’ technological inventions and to track changes in their innovative patterns. Full article
(This article belongs to the Special Issue Advances in Knowledge Graph and Data Science)
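
The core of such trend analysis is counting classification codes per applicant per year and tracking how the distribution drifts; a minimal sketch with invented records (the paper's model extracts richer common invention patterns from IPC/CPC codes):

```python
from collections import Counter

records = [
    {"applicant": "AcmeCo", "year": 2017, "ipc": "G06F"},
    {"applicant": "AcmeCo", "year": 2017, "ipc": "G06N"},
    {"applicant": "AcmeCo", "year": 2019, "ipc": "G06N"},
    {"applicant": "BetaInc", "year": 2019, "ipc": "H04L"},
]

# Frequency of each IPC subclass per applicant and year.
trend = Counter((r["applicant"], r["year"], r["ipc"]) for r in records)
for (app, year, ipc), n in sorted(trend.items()):
    print(f"{app} {year} {ipc}: {n}")
```
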