Information, Volume 10, Issue 12 (December 2019) – 39 articles

Cover Story: The agent-based approach is a well-established methodology for modelling distributed intelligent systems. Multi-agent systems (MAS) are boosting applications that deal with safety- and information-critical tasks, so the transparency and trustworthiness of the agents must be enforced. Employing reputation-based mechanisms helps to promote trust in the system. Nevertheless, fully guaranteeing MAS the desired accountability and transparency remains an unmet objective. This paper elaborates on the notions of trust, the integration of blockchain technologies (BCT) and MAS, and the ethical implications. In particular, we leverage MAS (based on the Java Agent DEvelopment Framework, JADE) and BCT (based on Hyperledger Fabric), tightly coupled to handle interactions, reputation computation, monitoring, and disagreement-management policies.
Article
Grey Wolf Algorithm and Multi-Objective Model for the Manycast RSA Problem in EONs
Information 2019, 10(12), 398; https://doi.org/10.3390/info10120398 - 17 Dec 2019
Viewed by 1073
Abstract
Manycast routing and spectrum assignment (RSA) in elastic optical networks (EONs) has become a hot research field. In this paper, a mathematical model and a highly efficient algorithm for this challenging problem are investigated. First, a multi-objective optimization model is established that minimizes network power consumption, the total occupied spectrum, and the maximum index of the used frequency spectrum. To handle this multi-objective model, we integrate the three objectives into one using a weighted-sum strategy. To distribute the population uniformly over the search domain, a uniform design method was developed. Based on this, an improved grey wolf optimization method (IGWO), inspired by particle swarm optimization (PSO) and differential evolution (DE), is proposed to solve the model efficiently. To demonstrate the high performance of the designed algorithm, a series of experiments was conducted on several different experimental scenarios. The results indicate that the proposed algorithm obtains better results than the compared algorithms. Full article
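The weighted-sum scalarization described in the abstract can be sketched in a few lines; the objective values and weights below are invented for illustration and are not from the paper.

```python
# Hypothetical sketch of weighted-sum scalarization: three objectives
# (power consumption, total occupied spectrum, max frequency index)
# are combined into a single scalar cost.

def weighted_sum(objectives, weights):
    """Combine multiple objective values into one scalar cost."""
    if len(objectives) != len(weights):
        raise ValueError("objectives and weights must have equal length")
    return sum(o * w for o, w in zip(objectives, weights))

# Illustrative values: power (W), occupied spectrum slots, max slot index.
cost = weighted_sum([120.0, 300.0, 40.0], [0.5, 0.3, 0.2])
```

Normalizing each objective before weighting is usually necessary in practice, since the three quantities have different units and scales.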
(This article belongs to the Special Issue New Frontiers for Optimal Control Applications)

Article
Credit Scoring Using Machine Learning by Combing Social Network Information: Evidence from Peer-to-Peer Lending
Information 2019, 10(12), 397; https://doi.org/10.3390/info10120397 - 17 Dec 2019
Cited by 13 | Viewed by 2837
Abstract
Financial institutions use credit scoring to evaluate potential loan default risks. However, insufficient credit information limits the capacity of peer-to-peer (P2P) lending platforms to build effective credit scoring. In recent years, many types of data have been used for credit scoring to compensate for the lack of credit history data. Whether social network information can be used to strengthen financial institutions’ predictive power has received much attention in industry and academia. The aim of this study is to test the reliability of social network information in predicting loan default. We extract borrowers’ social network information from mobile phones and then use logistic regression to test the relationship between social network information and loan default. Three machine learning algorithms (random forest, AdaBoost, and LightGBM) were constructed to demonstrate the predictive performance of social network information. The logistic regression results show a statistically significant correlation between social network information and loan default. The machine learning results show that social network information can significantly improve loan default prediction performance. The experimental results suggest that social network information is valuable for credit scoring. Full article
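A logistic scoring step like the one the study tests can be sketched as follows; the feature names, weights, and bias are invented placeholders, not estimates from the paper.

```python
# Hypothetical logistic default-risk score over two made-up
# social-network features (e.g. contact-graph density, call regularity).

import math

def default_probability(features, weights, bias):
    """Logistic link: map a weighted feature sum to a probability."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Borrower with invented feature values and placeholder coefficients.
p = default_probability([0.2, 0.9], [-1.5, 2.0], -0.3)
```

In the paper itself, the coefficients would be fitted by logistic regression on labelled loan outcomes; the tree ensembles (random forest, AdaBoost, LightGBM) replace this linear scoring function with learned decision rules.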

Article
Development of an Electrohydraulic Variable Buoyancy System
Information 2019, 10(12), 396; https://doi.org/10.3390/info10120396 - 17 Dec 2019
Viewed by 1087
Abstract
The growing need to explore ocean resources has been pushing the length and complexity of autonomous underwater vehicle (AUV) missions, leading to more stringent energy requirements. A promising approach to reducing the energy consumption of AUVs is to use variable buoyancy systems (VBSs) as a replacement for, or complement to, thruster action, since a VBS consumes energy only during limited periods of time to control the vehicle’s flotation. This paper presents the development of an electrohydraulic VBS to be integrated into an existing AUV for shallow depths of up to 100 m. The device’s preliminary mechanical design is presented, and a mathematical model of its power consumption is developed based on data provided by the manufacturer. Taking a standard mission profile as an example, the energy consumed using thrusters is compared with that consumed by the designed VBS. Full article
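The energy argument behind the VBS can be made concrete with a back-of-the-envelope comparison: thrusters draw power continuously to hold depth, while the VBS pump draws power only while changing buoyancy. All numbers below are invented placeholders, not the paper's data.

```python
# Minimal sketch of the energy comparison, assuming constant power draws.

def thruster_energy(power_w, mission_s):
    """Energy (J) for thrusters drawing power for the whole mission."""
    return power_w * mission_s

def vbs_energy(pump_power_w, pump_time_s):
    """Energy (J) for a VBS pump that runs only while pumping."""
    return pump_power_w * pump_time_s

# Invented figures: 50 W of continuous thrust vs. a 200 W pump run
# for two minutes over a one-hour mission.
saving = thruster_energy(50.0, 3600.0) - vbs_energy(200.0, 120.0)
```

Even with a much higher peak power, the short duty cycle makes the VBS cheaper in total energy under these assumptions.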
(This article belongs to the Special Issue Online Experimentation and the IoE)

Article
Which Are the Most Influential Cited References in Information?
Information 2019, 10(12), 395; https://doi.org/10.3390/info10120395 - 17 Dec 2019
Cited by 4 | Viewed by 1212
Abstract
This bibliometric study presents the most influential cited references for papers published in the journal Information, using reference publication year spectroscopy (RPYS). A total of 30,960 references cited in 996 papers in the journal Information, published between 2012 and 2019, were analyzed. In total, 29 peaks with 48 peak papers are presented and discussed. The most influential cited references are related to set theory and machine learning, which is consistent with the scope of the journal. A single peak paper was published in the journal Information itself. Overall, authors publishing in the journal Information have drawn from many different sources (e.g., journal papers, books, book chapters, and conference proceedings). Full article
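The core RPYS idea (count cited references per publication year and flag years that stand out against their neighbours) can be sketched as below; the peak criterion and the year list are illustrative assumptions, not the exact procedure of the paper.

```python
# Hypothetical RPYS sketch: a year is a "peak" when its citation count
# far exceeds the median count of its neighbouring years.

from collections import Counter

def rpys_peaks(cited_years, window=2, threshold=2.0):
    counts = Counter(cited_years)
    peaks = []
    for y in sorted(counts):
        neighbours = [counts.get(y + d, 0)
                      for d in range(-window, window + 1) if d != 0]
        baseline = sorted(neighbours)[len(neighbours) // 2]  # median
        if counts[y] >= threshold * max(baseline, 1):
            peaks.append(y)
    return peaks

# Invented reference years: one year cited far more than its neighbours.
peaks = rpys_peaks([1998] * 10 + [1996, 1997, 1999, 2000])
```

Real RPYS analyses additionally deviate-correct against a smoothed baseline; this sketch only captures the peak-detection intuition.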
(This article belongs to the Special Issue 10th Anniversary of Information—Emerging Research Challenges)

Article
An LSTM Model for Predicting Cross-Platform Bursts of Social Media Activity
Information 2019, 10(12), 394; https://doi.org/10.3390/info10120394 - 14 Dec 2019
Cited by 3 | Viewed by 1369
Abstract
Burst analysis and prediction is a fundamental problem in social network analysis, since user activities have been shown to have an intrinsically bursty nature. Bursts may also be a signal of topics of growing real-world interest. Since bursts can be caused by exogenous phenomena and are indicative of burgeoning popularity, leveraging cross-platform social media data may be valuable for predicting bursts within a single social media platform. A long short-term memory (LSTM) model is proposed to capture the temporal dependencies and associations based upon activity information. The data used to test the model were collected from Twitter, GitHub, and Reddit. Our results show that the LSTM-based model is able to leverage the complex cross-platform dynamics to predict bursts. In situations where information gathering from the platform of concern is not possible, the learned model can predict whether bursts on another platform can be expected. Full article
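Before any model can predict bursts, the time series has to be labelled. A minimal, hypothetical burst definition is shown below (activity exceeding the running mean by k standard deviations); the paper's actual labelling may differ.

```python
# Hypothetical burst labelling for an activity time series: a step is
# a burst when it exceeds the running mean by k standard deviations.

def label_bursts(series, k=2.0):
    labels = []
    for i, x in enumerate(series):
        history = series[:i]
        if len(history) < 2:
            labels.append(False)  # not enough history to judge
            continue
        mean = sum(history) / len(history)
        var = sum((h - mean) ** 2 for h in history) / len(history)
        labels.append(x > mean + k * var ** 0.5)
    return labels
```

An LSTM would then be trained on windows of multi-platform activity to predict these labels one step ahead.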
(This article belongs to the Special Issue Advances in Social Media Analysis)

Article
Information Evolution and Organisations
Information 2019, 10(12), 393; https://doi.org/10.3390/info10120393 - 12 Dec 2019
Cited by 3 | Viewed by 1569
Abstract
In a changing digital world, organisations need to be effective information processing entities, in which people, processes, and technology together gather, process, and deliver the information that the organisation needs. However, like other information processing entities, organisations are subject to the limitations of information evolution. These limitations are caused by the combinatorial challenges associated with information processing, and by the trade-offs and shortcuts driven by selection pressures. This paper applies the principles of information evolution to organisations and uses them to derive principles about organisation design and organisation change. This analysis shows that information evolution can illuminate some of the seemingly intractable difficulties of organisations, including the effects of organisational silos and the difficulty of organisational change. The derived principles align with and connect different strands of current organisational thinking. In addition, they provide a framework for creating analytical tools to create more detailed organisational insights. Full article
(This article belongs to the Section Information Theory and Methodology)

Article
A Genetic Algorithm-Based Approach for Composite Metamorphic Relations Construction
Information 2019, 10(12), 392; https://doi.org/10.3390/info10120392 - 10 Dec 2019
Cited by 1 | Viewed by 1371
Abstract
The test oracle problem exists widely in modern complex software testing, and metamorphic testing (MT) has become a promising technique for alleviating it. The inference of efficient metamorphic relations (MRs) is the core problem of metamorphic testing. Studies have proven that combining simple metamorphic relations can construct more efficient ones. In most previous studies, metamorphic relations have been inferred manually by experts with professional knowledge, which is inefficient and hinders the application of MT. In this paper, a genetic algorithm-based approach is proposed to construct composite metamorphic relations automatically for the program under test. We use a set of relation sequences to represent a particular class of MRs and turn the problem of inferring composite MRs into one of searching for suitable sequences. We then dynamically perform multiple executions of the program and use a genetic algorithm to search for the optimal set of relation sequences. We conducted empirical studies to evaluate our approach using scientific functions in the GNU Scientific Library (GSL). The empirical results show that our approach can automatically infer high-quality composite MRs, on average five times as many as the basic MRs. More importantly, the inferred composite MRs increase the fault detection capability by at least 30% over the original metamorphic relations. Full article
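The idea of a composite MR as a sequence of input transformations can be illustrated on a textbook identity; the function and relations below are chosen for demonstration and are not the GSL subjects used in the paper.

```python
# Sketch: a composite metamorphic relation is a chain of input
# transforms; the relation holds when the output is unchanged.

import math

def check_composite_mr(func, transforms, x, tol=1e-9):
    """Apply a chain of input transforms and compare the outputs."""
    y = x
    for t in transforms:
        y = t(y)
    return abs(func(x) - func(y)) < tol

# Textbook MR: sin(x) == sin(pi - x); composing it twice returns to x.
mr = [lambda v: math.pi - v]
ok_single = check_composite_mr(math.sin, mr, 0.7)
ok_double = check_composite_mr(math.sin, mr * 2, 0.7)
```

The genetic algorithm in the paper searches over such sequences, scoring candidates by how well they hold on the program under test.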

Article
Success Factors Importance Based on Software Project Organization Structure
Information 2019, 10(12), 391; https://doi.org/10.3390/info10120391 - 10 Dec 2019
Cited by 4 | Viewed by 1947
Abstract
The main aim of this paper is to identify critical success factors (CSFs) and investigate whether they are the same across different project organization structures. The organization structures under study are functional, project, and matrix. The study is based on a survey conducted on a large number of software projects in Jordan. To rank success factors (SFs) and identify critical ones, we use the importance index of SFs, calculated from likelihood and impact across the different structures. For deeper analysis, we carry out statistical experiments with an ANOVA test and Spearman’s rank correlation test. The ANOVA results partially indicate that the values of the SF importance index differ slightly across the three organization structures. Moreover, the Spearman’s rank correlation results show a high degree of correlation of the SF importance index between the functional and project organization structures and a low degree of correlation between the functional and matrix organization structures. Full article
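One common way to compute such an importance index is likelihood times impact; the exact formula used in the paper is not reproduced here, and the factor names and values below are invented for illustration.

```python
# Hypothetical success-factor importance index: likelihood x impact.

def importance_index(likelihood, impact):
    if not 0.0 <= likelihood <= 1.0:
        raise ValueError("likelihood must be a probability")
    return likelihood * impact

# Invented factors, ranked by the index (highest first).
ranked = sorted(
    {"top management support": importance_index(0.8, 5),
     "clear requirements": importance_index(0.9, 4),
     "realistic schedule": importance_index(0.5, 3)}.items(),
    key=lambda kv: kv[1], reverse=True)
```

Computing the index separately per organization structure and comparing the resulting rankings is what the ANOVA and Spearman tests in the paper then formalize.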
(This article belongs to the Section Information Systems)

Article
Choosing Mutation and Crossover Ratios for Genetic Algorithms—A Review with a New Dynamic Approach
Information 2019, 10(12), 390; https://doi.org/10.3390/info10120390 - 10 Dec 2019
Cited by 148 | Viewed by 5139
Abstract
The genetic algorithm (GA) is an artificial intelligence search method that uses the process of evolution and natural selection theory and falls under the umbrella of evolutionary computing algorithms. It is an efficient tool for solving optimization problems. Integration among GA parameters is vital for a successful GA search. Such parameters include the mutation and crossover rates, in addition to the population size, which are important issues in GA. However, each GA operator has a special and different influence, and the impact of these factors depends on their probabilities; it is difficult to predefine specific ratios for each parameter, particularly for the mutation and crossover operators. This paper reviews various methods for choosing mutation and crossover ratios in GAs. Next, we define new deterministic control approaches for crossover and mutation rates, namely dynamic decreasing of high mutation ratio/dynamic increasing of low crossover ratio (DHM/ILC) and dynamic increasing of low mutation/dynamic decreasing of high crossover (ILM/DHC). The dynamic nature of the proposed methods allows the ratios of both operators to change linearly during the search: DHM/ILC starts with a 100% mutation ratio and a 0% crossover ratio, the mutation ratio then decreases while the crossover ratio increases, and by the end of the search the ratios are 0% for mutation and 100% for crossover. ILM/DHC works the same way, but in reverse. The proposed approaches were compared with two predefined parameter-tuning methods, namely fifty-fifty crossover/mutation ratios and the most common approach of static ratios, such as a 0.03 mutation rate and a 0.9 crossover rate. The experiments were conducted on ten Traveling Salesman Problems (TSPs).
The experiments showed the effectiveness of the proposed DHM/ILC when dealing with small population sizes, while ILM/DHC was found to be more effective with large population sizes. In fact, both proposed dynamic methods outperformed the predefined methods in most of the cases tested. Full article
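The linear schedules described above can be sketched directly; the function names are ours, and the generation counts in the example are illustrative.

```python
# Sketch of the linear DHM/ILC schedule: mutation falls from 100% to
# 0% while crossover rises from 0% to 100% over the run.

def dhm_ilc_rates(generation, max_generations):
    """Dynamic decreasing mutation / increasing crossover rates."""
    progress = generation / max_generations
    return 1.0 - progress, progress  # (mutation_rate, crossover_rate)

def ilm_dhc_rates(generation, max_generations):
    """The mirror schedule: increasing mutation, decreasing crossover."""
    m, c = dhm_ilc_rates(generation, max_generations)
    return c, m
```

Inside a GA loop, these rates would simply replace the usual fixed probabilities when deciding whether to mutate or cross over each individual.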
(This article belongs to the Section Artificial Intelligence)

Article
A Comprehensive Evaluation of the Community Environment Adaptability for Elderly People Based on the Improved TOPSIS
Information 2019, 10(12), 389; https://doi.org/10.3390/info10120389 - 09 Dec 2019
Cited by 8 | Viewed by 1294
Abstract
As the main form of care for elderly people, home-based old-age care places higher requirements on the environmental adaptability of the community. Five communities in Wuhu were selected for a comprehensive assessment of environmental suitability. To ensure a comprehensive and accurate assessment of the environmental adaptability of the community, we used the analytic hierarchy process (AHP) to calculate the weight of each indicator and the technique for order preference by similarity to ideal solution (TOPSIS) to evaluate the adaptability of each community, with further analysis using a two-dimensional data-space map. The results show that the Weixing community is the most suitable for the elderly and for outdoor community activities. Full article
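The standard TOPSIS ranking step can be sketched compactly; the decision matrix and weights below are invented, and all criteria are treated as benefit criteria for simplicity.

```python
# Sketch of TOPSIS: normalize, weight, then score each alternative by
# its relative closeness to the ideal solution.

def topsis_scores(matrix, weights):
    cols = list(zip(*matrix))
    norms = [sum(v * v for v in col) ** 0.5 for col in cols]
    weighted = [[w * v / n for v, w, n in zip(row, weights, norms)]
                for row in matrix]
    ideal = [max(col) for col in zip(*weighted)]   # best per criterion
    anti = [min(col) for col in zip(*weighted)]    # worst per criterion
    scores = []
    for row in weighted:
        d_pos = sum((v - i) ** 2 for v, i in zip(row, ideal)) ** 0.5
        d_neg = sum((v - a) ** 2 for v, a in zip(row, anti)) ** 0.5
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```

In the paper, the weights would come from the AHP pairwise-comparison step rather than being set by hand, and cost criteria would be inverted first.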
(This article belongs to the Section Artificial Intelligence)

Article
A Fuzzy Technique for On-Line Aggregation of POIs from Social Media: Definition and Comparison with Off-Line Random-Forest Classifiers
Information 2019, 10(12), 388; https://doi.org/10.3390/info10120388 - 07 Dec 2019
Cited by 2 | Viewed by 1240
Abstract
Social media represent an inexhaustible source of user-provided information concerning public places (also called points of interest (POIs)). Several social media own and publish huge, independently built corpora of data about public places that are not linked to each other. An aggregated view of the information concerning the same public place could be extremely useful, but social media are not immutable sources, so the off-line approach adopted in all previous research works cannot provide up-to-date information in real time. In this work, we address the problem of on-line aggregation of geo-located descriptors of public places provided by social media. The on-line approach makes it impossible to adopt machine-learning (classification) techniques trained on previously gathered data sets. We overcome the problem by adopting an approach based on fuzzy logic: we define a binary fuzzy relation whose on-line evaluation allows us to decide whether two public-place descriptors coming from different social media actually describe the same public place. We tested our technique on three data sets describing public places in Manchester (UK), Genoa (Italy), and Stuttgart (Germany); the comparison with the off-line classification technique called “random forest” proved that our on-line technique obtains comparable results. Full article
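A binary fuzzy relation over POI descriptors can be sketched as below; the membership functions (Jaccard name similarity, linear distance decay, min t-norm) and the threshold are our assumptions, not the definitions used in the paper.

```python
# Hypothetical binary fuzzy relation: two descriptors denote the same
# POI when the min of a name degree and a distance degree is high.

def name_degree(a, b):
    """Jaccard similarity between the token sets of two names."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def distance_degree(metres, cutoff=200.0):
    """Linearly decaying membership: 1 at 0 m, 0 beyond the cutoff."""
    return max(0.0, 1.0 - metres / cutoff)

def same_place(name_a, name_b, metres, alpha=0.4):
    degree = min(name_degree(name_a, name_b), distance_degree(metres))
    return degree >= alpha, degree
```

Because every term is computed on the fly from the two descriptors, this kind of relation needs no training set, which is what makes the on-line setting workable.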
(This article belongs to the Section Information Applications)

Article
A Robust Morpheme Sequence and Convolutional Neural Network-Based Uyghur and Kazakh Short Text Classification
Information 2019, 10(12), 387; https://doi.org/10.3390/info10120387 - 06 Dec 2019
Cited by 6 | Viewed by 1285
Abstract
In this paper, based on a multilingual morphological analyzer, we studied short text classification for the similar low-resource languages Uyghur and Kazakh. Generally, the online linguistic resources of these languages are noisy, so preprocessing is necessary and can significantly improve accuracy. Uyghur and Kazakh are languages with derivational morphology, in which words are coined by stems concatenated with suffixes. Usually, terms are used to represent text content, while functional parts are excluded as stop words in these languages. By extracting stems, we can collect the necessary terms and exclude stop words. The morpheme segmentation tool can split text into morphemes with a high reliability of 95%. After preparing both word- and morpheme-based training corpora, we applied a convolutional neural network (CNN) as the feature selection and text classification algorithm. Experimental results show that the morpheme-based approach outperformed the word-based approach. The word embedding technique is frequently used in text representation, both within neural networks and as a value expression; it can map language units into a sequential vector space based on context and is a natural way to extract and predict out-of-vocabulary (OOV) words from context information. Multilingual morphological analysis provides a convenient way to process tasks for low-resource languages like Uyghur and Kazakh. Full article

Article
How Do eHMIs Affect Pedestrians’ Crossing Behavior? A Study Using a Head-Mounted Display Combined with a Motion Suit
Information 2019, 10(12), 386; https://doi.org/10.3390/info10120386 - 06 Dec 2019
Cited by 24 | Viewed by 2551
Abstract
In future traffic, automated vehicles may be equipped with external human-machine interfaces (eHMIs) that can communicate with pedestrians. Previous research suggests that, during first encounters, pedestrians regard text-based eHMIs as clearer than light-based eHMIs. However, in much of the previous research, pedestrians were asked to imagine crossing the road, and were unable or not allowed to do so. We investigated the effects of eHMIs on participants’ crossing behavior. Twenty-four participants were immersed in a virtual urban environment using a head-mounted display coupled to a motion-tracking suit. We manipulated the approaching vehicles’ behavior (yielding, nonyielding) and eHMI type (None, Text, Front Brake Lights). Participants could cross the road whenever they felt safe enough to do so. The results showed that forward walking velocities, as recorded at the pelvis, were, on average, higher when an eHMI was present compared to no eHMI if the vehicle yielded. In nonyielding conditions, participants showed a slight forward motion and refrained from crossing. An analysis of participants’ thorax angle indicated rotation towards the approaching vehicles and subsequent rotation towards the crossing path. It is concluded that results obtained via a setup in which participants can cross the road are similar to results from survey studies, with eHMIs yielding a higher crossing intention compared to no eHMI. The motion suit allows investigating pedestrian behaviors related to bodily attention and hesitation. Full article

Article
A Method for Road Extraction from High-Resolution Remote Sensing Images Based on Multi-Kernel Learning
Information 2019, 10(12), 385; https://doi.org/10.3390/info10120385 - 06 Dec 2019
Cited by 1 | Viewed by 1124
Abstract
Extracting roads from high-resolution remote sensing (HRRS) images is an economical and effective way to acquire road information, and it has become an important research topic with a wide range of applications. In this paper, we present a novel method for road extraction from HRRS images. Multi-kernel learning is first utilized to integrate the spectral, texture, and linear features of images to classify them into road and non-road groups. A precise extraction method for road elements is then designed by building road shape indexes to automatically filter out the interference of non-road noise. A series of morphological operations is also carried out to smooth and repair the structure and shape of the road elements. Finally, based on prior knowledge and the topological features of the road, a set of penalty factors and a penalty function are constructed to connect road elements into a complete road network. Experiments are carried out with different sensors, resolutions, and scenes to verify the theoretical analysis. Quantitative results prove that the proposed method can optimize the weights of different features, eliminate non-road noise, effectively group road elements, and greatly improve the accuracy of road recognition. Full article
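The multi-kernel step amounts to combining one base kernel per feature family into a single kernel; the base kernels and weights below are illustrative assumptions, not the paper's learned values.

```python
# Sketch of a multi-kernel combination: a convex sum of base kernels,
# one per feature family (e.g. spectral vs. texture features).

import math

def rbf_kernel(x, y, gamma=0.5):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def linear_kernel(x, y):
    return sum(a * b for a, b in zip(x, y))

def multi_kernel(x, y, weights=(0.6, 0.4)):
    """Weighted combination of the two base kernels."""
    return weights[0] * rbf_kernel(x, y) + weights[1] * linear_kernel(x, y)
```

Multi-kernel learning optimizes the weights (here fixed by hand) jointly with the classifier, which is how the method balances spectral, texture, and linear features.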

Article
Drivers of Mobile Payment Acceptance in China: An Empirical Investigation
Information 2019, 10(12), 384; https://doi.org/10.3390/info10120384 - 06 Dec 2019
Cited by 9 | Viewed by 3055
Abstract
With the rapid development of mobile technologies in contemporary society, China has seen increased usage of the Internet and mobile devices. Thus, mobile payment is constantly being innovated and is highly valued in China. Although there have been many reports on the consumer adoption of mobile payments, there are few studies providing guidelines on examining mobile payment adoption in China. This study intends to explore the impact of the facilitating factors (perceived transaction convenience, compatibility, relative advantage, social influence), environmental factors (government support, additional value), inhibiting factors (perceived risk), and personal factors (absorptive capacity, affinity, personal innovation in IT (PIIT)) on adoption intention in China. A research model that reflects the characteristics of mobile payment services was developed and empirically tested by using structural equation modeling (SEM) on datasets consisting of 257 users through an online survey questionnaire in China. Our findings show that perceived transaction convenience, compatibility, relative advantage, government support, additional value, absorptive capacity, affinity, and PIIT all have a positive impact on adoption intention, while social influence has no significant impact on adoption intention, and perceived risk has a negative impact on adoption intention. In addition, the top three factors that influence adoption intentions are absorptive capacity, perceived transaction convenience, and additional value. Full article
(This article belongs to the Section Information Applications)

Article
Container Terminal Logistics Generalized Computing Architecture and Green Initiative Computational Pattern Performance Evaluation
Information 2019, 10(12), 383; https://doi.org/10.3390/info10120383 - 05 Dec 2019
Cited by 3 | Viewed by 1253
Abstract
Container terminals are typical representatives of complex supply chain logistics hubs, with multiple compound attributes and multiple coupling constraints, and their operations exhibit strong dynamicity, nonlinearity, coupling, and complexity (DNCC). From the perspective of computational logistics, we propose the container terminal logistics generalized computing architecture (CTL-GCA) through the migration, integration, and fusion of the abstraction hierarchy, design philosophy, execution mechanisms, and automation principles of computer organization, computing architecture, and operating systems. The CTL-GCA is intended to provide problem-oriented exploration and exploitation frameworks for the abstraction, automation, and analysis of green production at container terminals, and to construct, evaluate, and improve solutions to the planning, scheduling, and decision problems at container terminals, all of which are NP-hard. Subsequently, a logistics generalized computational pattern recognition and performance evaluation of a practical container terminal service case study is carried out with a qualitative and quantitative approach, from the sustainable perspective of green production. The case study demonstrates a preliminary application of the CTL-GCA and identifies unsustainable production patterns at the container terminal. From the above, we draw the following conclusions. First, the CTL-GCA defines an abstract and automatic running architecture of logistics generalized computation for container terminals (LGC-CT), which provides an original framework for the design and implementation of control and decision mechanisms and algorithms.
Second, the CTL-GCA helps us investigate the roots of DNCC thoroughly and supports efficient and sustainable running-pattern recognition for the LGC-CT. It is thus expected to provide favorable guidance for defining, designing, and implementing agile, efficient, sustainable, and robust task scheduling and resource allocation for container terminals through computational logistics, whether at the strategic or the tactical level. Full article

Article
Distance-To-Mean Continuous Conditional Random Fields: Case Study in Traffic Congestion
Information 2019, 10(12), 382; https://doi.org/10.3390/info10120382 - 04 Dec 2019
Viewed by 1092
Abstract
Traffic prediction techniques are classified as parametric, non-parametric, or a combination of the two. The extreme learning machine (ELM) is a non-parametric technique commonly used in traffic prediction problems. In this study, a modified probability approach, continuous conditional random fields (CCRF), is proposed, implemented with the ELM, and then used to assess highway traffic data. The modification is intended to improve the performance of non-parametric techniques, in this case the ELM method. The proposed method is called distance-to-mean continuous conditional random fields (DM-CCRF). The experimental results show that the proposed technique reduces the prediction error of the model compared to the standard CCRF. A comparison between ELM as a baseline regressor, the standard CCRF, and the modified CCRF is presented, with performance evaluated by their mean absolute percentage error (MAPE) values. DM-CCRF suppresses the prediction error to ~17.047%, which is twice as good as that of the standard CCRF method. Based on the attributes of the dataset, the DM-CCRF method is better suited to highway traffic prediction than the standard CCRF method and the baseline regressor. Full article
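The models above are ranked by MAPE. As a reference point, a minimal sketch of the metric as it is commonly defined (the helper name is ours, not from the paper's code):

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    if len(actual) != len(predicted) or not actual:
        raise ValueError("inputs must be non-empty and of equal length")
    # average the per-point relative errors, then scale to percent
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)
```

For example, predictions of 110 and 180 against true values of 100 and 200 are each 10% off, giving a MAPE of 10%.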

Article
Cooperative Smartphone Relay Selection Based on Fair Power Utilization for Network Coverage Extension
Information 2019, 10(12), 381; https://doi.org/10.3390/info10120381 - 03 Dec 2019
Viewed by 1553
Abstract
This paper presents a relay selection algorithm based on fair battery power utilization for extending mobile network coverage and capacity through a cooperative communication strategy in which mobile devices can serve as relays. Cooperation improves network performance for mobile terminals, either by providing access to out-of-range devices or by facilitating multi-path network access for connected devices. In this work, we assume that all mobile devices can benefit from using other mobile devices as relays and investigate the fairness of relay selection algorithms. We show that signal-strength-based relay selection inevitably leads to unfair relay selection, and we devise a new algorithm based on fair utilization of the power resources of mobile devices. We call this algorithm Credit-based Fair Relay Selection (CF-RS) and show through simulation that it achieves fair battery power utilization while providing data rates similar to those of traditional approaches. We then extend the solution to demonstrate that adding incentives for relay operation provides clear value for mobile devices when they themselves require relay service. Typically, mobile devices represent self-interested users who are reluctant to cooperate with other network users, mainly because of the cost in terms of power and network capacity. We therefore present an incentive-based solution that provides clear mutual benefit for mobile devices and demonstrate this benefit in simulations of symmetric and asymmetric network topologies. The CF-RS algorithm achieves the same performance in terms of achievable data rate, Jain's fairness index, and utility of end devices in both symmetric and asymmetric network configurations. Full article
(This article belongs to the Special Issue Emerging Topics in Wireless Communications for Future Smart Cities)
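The credit-based idea behind CF-RS can be sketched as follows; the data shapes, field names, and the one-credit cost are illustrative assumptions, not the paper's actual algorithm:

```python
def select_relay(candidates, min_snr=5.0):
    """Pick the eligible candidate with the most remaining credits.

    candidates: list of dicts with hypothetical keys 'id', 'snr', 'credits'.
    Returns the chosen candidate's id, or None if no candidate is eligible.
    """
    eligible = [c for c in candidates if c["snr"] >= min_snr]
    if not eligible:
        return None
    chosen = max(eligible, key=lambda c: c["credits"])
    chosen["credits"] -= 1  # relaying spends one credit
    return chosen["id"]
```

Because each selection spends a credit, a device with a strong signal but depleted credits eventually yields to others, which is how fair battery utilization emerges instead of always choosing the strongest-signal relay.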

Article
Boosting Customer E-Loyalty: An Extended Scale of Online Service Quality
Information 2019, 10(12), 380; https://doi.org/10.3390/info10120380 - 03 Dec 2019
Cited by 12 | Viewed by 3128
Abstract
Customer trust, satisfaction, and loyalty with regard to the provision of e-commerce services are expected to be critical factors in assessing the success of online businesses. Service quality and high-quality product settings are closely linked to these factors. However, despite the rapid advancement of e-commerce applications, especially in the business-to-consumer (B2C) context, prior research has confirmed that e-retailers face difficulties in maintaining customer loyalty. Several e-service quality frameworks have been employed to boost service quality by targeting customer loyalty. Among the most prominent of these frameworks is the scale of online etail quality (eTailQ). This scale has been criticized because it was developed before the emergence of Web 2.0 technologies. Consequently, this paper aims to fill this gap by offering empirically tested and conceptually derived measurement model specifications for an extended eTailQ scale. In addition, it investigates the potential effects of the extended scale on e-trust and e-satisfaction, and subsequently on e-loyalty. The practical and theoretical implications are highlighted to help businesses design effective quality-based strategies for enhanced customer loyalty, and to direct future research in the field of e-commerce. Full article

Article
A Mapping Approach to Identify Player Types for Game Recommendations
Information 2019, 10(12), 379; https://doi.org/10.3390/info10120379 - 02 Dec 2019
Viewed by 1330
Abstract
As the domestic and international gaming industry gradually grows, various games undergo rapid development cycles to compete in the current market. However, selecting and recommending suitable games for users remains a challenging problem. Although game recommendation systems based on users' prior gaming experience exist, they are limited by the cold-start problem. Unlike existing approaches, this study addresses these problems by identifying a user's personality through a personality diagnostic test and mapping that personality to a player type. In addition, an Android app-based prototype was developed that recommends games by mapping tag information about the user's personality and the game. A set of user experiments was conducted to verify the feasibility of the proposed mapping model and the recommendation prototype. Full article
(This article belongs to the Special Issue Advances in Knowledge Graph and Data Science)

Article
A Computational Study on Fairness of the Tendermint Blockchain Protocol
Information 2019, 10(12), 378; https://doi.org/10.3390/info10120378 - 30 Nov 2019
Cited by 8 | Viewed by 1620
Abstract
Fairness is a crucial property of blockchain systems, since it affects participation: those who find the system fair tend to stay or join, while those who find it unfair tend to leave. While the current literature mainly focuses on fairness for Bitcoin-like blockchains, little has been done to analyze Tendermint. Tendermint is a blockchain technology that uses a committee-based consensus algorithm, which finds an agreement among a set of block creators (called validators), even if some are malicious. Validators are regularly selected for the committee based on their investments. When a validator does not have enough assets to invest, it can increase them with the help of participants who delegate their assets to validators (called delegators). In this paper, we implement the default Tendermint model and a Tendermint model for fairness in a multi-agent blockchain simulator in which participants are modeled as rational agents who enter or leave the system based on their utility values. We conducted experiments for both models in which agents have different investment strategies, with various numbers of delegators. In light of our experimental evaluation, we observed that while fairness decreases and the system shrinks for both models in the absence of delegators, fairness increases and the system expands for the second model in the presence of delegators. Full article
(This article belongs to the Special Issue Blockchain Technologies for Multi-Agent Systems)
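The stake-plus-delegation mechanism described above can be illustrated with a small sketch of how a validator's relative voting power might be computed (names and data shapes are our assumptions, not Tendermint's actual implementation):

```python
def committee_weights(validators, delegations):
    """Relative voting power of each validator: own stake plus delegated stake.

    validators: dict mapping validator name -> own stake.
    delegations: list of (delegated_stake, validator_name) pairs.
    Returns each validator's share of the total stake.
    """
    power = dict(validators)
    for stake, v in delegations:
        power[v] = power.get(v, 0) + stake
    total = sum(power.values())
    return {v: p / total for v, p in power.items()}
```

In this picture, delegators let small validators grow their committee-selection weight without owning more assets themselves, which is the lever the paper's fairness experiments vary.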

Article
Adaptive Inverse Controller Design Based on the Fuzzy C-Regression Model (FCRM) and Back Propagation (BP) Algorithm
Information 2019, 10(12), 377; https://doi.org/10.3390/info10120377 - 29 Nov 2019
Cited by 1 | Viewed by 1089
Abstract
Establishing an accurate inverse model is a key problem in the design of adaptive inverse controllers. Most real objects have nonlinear characteristics, so a mathematical expression of the inverse model cannot be obtained in most situations. A Takagi–Sugeno (T-S) fuzzy model can approximate real objects with high precision and is often applied in the modeling of nonlinear systems. Since the consequent parameters of T-S fuzzy models are linear expressions, this paper first uses a fuzzy c-regression model (FCRM) clustering algorithm to establish the inverse fuzzy model. Because the least mean square (LMS) algorithm adjusts only the consequent parameters of the T-S fuzzy model, the premise parameters remain fixed during adjustment. In this paper, the back propagation (BP) algorithm is applied to adjust the premise and consequent parameters of the T-S fuzzy model simultaneously online. The simulation results show that the error between the desired output and the system output under the proposed adaptive inverse controller is smaller, and that system stability is maintained when the system output is disturbed. Full article
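For readers unfamiliar with T-S models, a minimal first-order T-S inference sketch with Gaussian premise memberships and linear consequents (illustrative only; not the paper's FCRM clustering or BP training):

```python
import math

def ts_fuzzy_output(x, rules):
    """First-order T-S fuzzy model: weighted average of linear consequents.

    rules: list of (center, sigma, a, b) tuples. The premise of each rule is a
    Gaussian membership around `center`; the consequent is a*x + b.
    """
    # firing strength of each rule's premise
    weights = [math.exp(-((x - c) ** 2) / (2 * s ** 2)) for c, s, _, _ in rules]
    total = sum(weights)
    # defuzzified output: normalized weighted sum of the linear consequents
    return sum(w * (a * x + b) for w, (_, _, a, b) in zip(weights, rules)) / total
```

The LMS-vs-BP distinction in the abstract maps onto this sketch: LMS would tune only the consequent parameters (a, b), while BP can also tune the premise parameters (center, sigma).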

Article
A Global Extraction Method of High Repeatability on Discretized Scale-Space Representations
Information 2019, 10(12), 376; https://doi.org/10.3390/info10120376 - 28 Nov 2019
Viewed by 1125
Abstract
This paper presents a novel method to extract local features which, instead of calculating local extrema, computes global maxima in a discretized scale-space representation. To avoid interpolating scales over few data points and to achieve perfect rotation invariance, two essential techniques are adopted: increasing kernel width in whole pixels and using disk-shaped convolution templates. Since convolution templates are finite in size and finite templates introduce computational error into the convolution, we analyze this problem in detail and derive an upper bound on the computational error. The method uses this upper bound to ensure that all extracted features are computed within a given tolerance. In addition, a relative threshold is used to determine features, reinforcing robustness under changing illumination. Simulations show that the new method attains highly repeatable performance in various situations, including scale change, rotation, blur, JPEG compression, illumination change, and even viewpoint change. Full article

Article
Facial Expression Recognition Based on Random Forest and Convolutional Neural Network
Information 2019, 10(12), 375; https://doi.org/10.3390/info10120375 - 28 Nov 2019
Cited by 12 | Viewed by 1878
Abstract
As an important part of emotion research, facial expression recognition is a necessary component of human–machine interfaces. Generally, a facial expression recognition system includes face detection, feature extraction, and feature classification. Although traditional machine learning methods have had great success, most involve complex computation and lack the ability to extract comprehensive, abstract features. Deep learning-based methods can achieve higher recognition rates for facial expressions, but they need large numbers of training samples and tuning parameters, and their hardware requirements are very high. To address these problems, this paper proposes a method that combines features extracted by a convolutional neural network (CNN) with a C4.5 classifier to recognize facial expressions, which both addresses the incompleteness of handcrafted features and avoids the high hardware demands of deep learning models. To counter the overfitting and weak generalization of a single classifier, a random forest is applied. This paper also makes some improvements to the C4.5 classifier and the traditional random forest in the course of the experiments. Extensive experiments have demonstrated the effectiveness and feasibility of the proposed method. Full article

Article
Text and Data Quality Mining in CRIS
Information 2019, 10(12), 374; https://doi.org/10.3390/info10120374 - 28 Nov 2019
Cited by 6 | Viewed by 2125
Abstract
Scientific institutions that maintain comprehensive, well-kept documentation of their research information in a current research information system (CRIS) have the best prerequisites for implementing text and data mining (TDM) methods. TDM helps to better identify and eliminate errors, improve processes, develop the business, and make informed decisions. In addition, TDM increases understanding of the data and its context, which improves not only the quality of the data itself but also the institution's handling of the data and, consequently, its analyses. This paper deploys TDM in a CRIS to analyze, quantify, and correct unstructured data and its quality issues. Bad data leads to increased costs or wrong decisions, so ensuring high data quality is an essential requirement when creating a CRIS project. User acceptance of a CRIS depends, among other things, on data quality: the decisive criterion is not only the objective data quality but also the subjective quality that individual users assign to the data. Full article
(This article belongs to the Special Issue Quality of Open Data)

Article
Adopting Augmented Reality to Engage Higher Education Students in a Museum University Collection: The Experience at Roma Tre University
Information 2019, 10(12), 373; https://doi.org/10.3390/info10120373 - 28 Nov 2019
Cited by 4 | Viewed by 1570
Abstract
University museums are powerful resource centres in higher education. In this context, the adoption of digital technologies can support personalised learning experiences within the university museum. The aim of the present contribution is to present a case study carried out at the Department of Educational Sciences at Roma Tre University with a group of 14 master's degree students. Students took part in a 2-h workshop in which they were invited to test augmented reality technology through a web app for Android. At the end of the visit, participants were asked to fill in a questionnaire with both open-ended and closed-ended questions investigating their views on the exhibition and their level of critical thinking. Students appreciated the exhibition, especially its multimodality, and most of the frequent themes identified in the open-ended answers relate to critical and visual thinking. Despite the positive overall evaluation, there is still room for improvement, both in the technology and in the educational design. Full article

Article
The Capacity of Private Information Retrieval from Decentralized Uncoded Caching Databases
Information 2019, 10(12), 372; https://doi.org/10.3390/info10120372 - 28 Nov 2019
Cited by 15 | Viewed by 1193
Abstract
We consider the private information retrieval (PIR) problem from decentralized uncoded caching databases. Our problem setting has two phases, a caching phase and a retrieval phase. In the caching phase, the system contains a data center holding all K files, each of size L bits, and several databases each with a storage constraint of μKL bits. Each database independently chooses μKL of the KL bits from the data center to cache, through the same probability distribution, in a decentralized manner. In the retrieval phase, a user (retriever) accesses N databases in addition to the data center and wishes to retrieve a desired file privately. We characterize the optimal normalized download cost as D* = Σ_{n=1}^{N+1} (N choose n−1) μ^{n−1} (1−μ)^{N+1−n} (1 + 1/n + ⋯ + 1/n^{K−1}). We show that the uniform and random caching scheme originally proposed for decentralized coded caching by Maddah-Ali and Niesen, combined with the Sun and Jafar retrieval scheme originally proposed for PIR from replicated databases, surprisingly results in the lowest normalized download cost. This is the decentralized counterpart of the recent result of Attia, Kumar, and Tandon for the centralized case. The converse proof contains several ingredients, such as an interference lower bound, an induction lemma, replacing query and answer string random variables with the content of the distributed databases, the nature of decentralized uncoded caching databases, and bit marginalization of joint caching distributions. Full article
(This article belongs to the Special Issue Private Information Retrieval: Techniques and Applications)
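The closed-form download cost in the abstract can be evaluated numerically; a short sketch (the function name is ours, not the paper's):

```python
from math import comb

def download_cost(N, K, mu):
    """Optimal normalized download cost D* for PIR from N decentralized
    uncoded caching databases plus the data center, per the abstract's formula:
    D* = sum_{n=1}^{N+1} C(N, n-1) mu^(n-1) (1-mu)^(N+1-n) (1 + 1/n + ... + 1/n^(K-1)).
    """
    total = 0.0
    for n in range(1, N + 2):
        geometric = sum((1 / n) ** k for k in range(K))  # 1 + 1/n + ... + 1/n^(K-1)
        total += comb(N, n - 1) * mu ** (n - 1) * (1 - mu) ** (N + 1 - n) * geometric
    return total
```

Two sanity checks: at μ = 0 nothing is cached, so the retriever must privately download all K files from the data center (D* = K); at μ = 1 every database holds everything, and the formula reduces to the Sun–Jafar cost for N + 1 replicated databases.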

Article
Design Thinking: Challenges for Software Requirements Elicitation
Information 2019, 10(12), 371; https://doi.org/10.3390/info10120371 - 28 Nov 2019
Cited by 12 | Viewed by 2461
Abstract
Agile methods fit software development teams well in requirements elicitation activities, and their adoption has challenged organizations using existing traditional methods as well as new ones. Design Thinking has been used as a requirements elicitation technique and as immersion in the process areas, bringing the client closer to the software project team and enabling the creation of better projects. Using data triangulation, this paper presents a literature review that collected the challenges of software requirements elicitation in agile methodologies and of the use of Design Thinking. The review led to a case study in a Brazilian public organization project, via a 20-item user workshop questionnaire applied during the study, in order to identify the practice of Design Thinking in this context. We propose an overview of 13 studied challenges, of which eight presented strong evidence of contribution (stakeholder involvement, requirements definition and validation, schedule, planning, requirements detail and prioritization, and interdependence), three presented partial evidence of contribution, and two were not eligible for conclusions (non-functional requirements, use of artifacts, and change of requirements). The main output of this work is an analysis of whether Design Thinking fits properly as a means of addressing the challenges of software requirements elicitation when agile methods are used. Full article

Article
An Optimization Model for Demand-Responsive Feeder Transit Services Based on Ride-Sharing Car
Information 2019, 10(12), 370; https://doi.org/10.3390/info10120370 - 26 Nov 2019
Cited by 6 | Viewed by 1330
Abstract
Ride-sharing (RS) plays an important role in saving energy and alleviating traffic pressure. The vehicles in demand-responsive feeder transit services (DRT) are generally not ride-sharing cars. We therefore propose an optimal DRT model based on ride-sharing cars, which assigns a set of vehicles, starting at origin locations and ending at destination locations within their service time windows, to transport passengers from all demand points to the transportation hub (i.e., railway, metro, airport, etc.). The proposed model offers an integrated operation of pedestrian guidance (from unvisited demand points to visited ones) and transit routing (from visited points to the transportation hub). The objective is to simultaneously minimize weighted passenger walking and riding time. A two-stage heuristic algorithm based on a genetic algorithm (GA) is adopted to solve the problem. The methodology was tested in a case study in Chongqing, China. The results show that the model can select optimal pick-up locations and determine the best pedestrian and route plan. Validation and analysis were also carried out to assess the effect of maximum walking distance and the number of shared cars on model performance, and the gap in quality between the heuristic and the optimal solution was also compared. Full article

Article
Some Similarity Measures for Interval-Valued Picture Fuzzy Sets and Their Applications in Decision Making
Information 2019, 10(12), 369; https://doi.org/10.3390/info10120369 - 25 Nov 2019
Cited by 20 | Viewed by 1724
Abstract
Similarity measures, distance measures, and entropy measures are common tools applied to interesting real-life phenomena, including pattern recognition, decision making, medical diagnosis, and clustering. Further, interval-valued picture fuzzy sets (IVPFSs) are effective and useful for describing fuzzy information. This manuscript therefore aims to develop similarity measures for IVPFSs, given the significance of describing the membership grades of a picture fuzzy set in terms of intervals. Several types of cosine similarity measures, cotangent similarity measures, set-theoretic and grey similarity measures, four types of dice similarity measures, and generalized dice similarity measures are developed. All the developed similarity measures are validated, and their properties are demonstrated. Two well-known problems, mineral field recognition and multi-attribute decision making, are solved using the newly developed similarity measures. The superiority of the developed similarity measures over similarity measures for picture fuzzy sets, interval-valued intuitionistic fuzzy sets, and intuitionistic fuzzy sets is demonstrated through comparison and numerical examples. Full article
(This article belongs to the Special Issue Big Data Analytics and Computational Intelligence)
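As background for the cosine family of measures, here is a sketch of a plain (non-interval) picture fuzzy cosine similarity, a common textbook form that interval-valued measures of this kind generalize; the function name and exact form are our assumptions, not the paper's definitions:

```python
import math

def pfs_cosine_similarity(A, B):
    """Cosine similarity between two picture fuzzy sets.

    A, B: lists of (membership, neutrality, non-membership) triples,
    one per element of the universe, with all grades in [0, 1].
    Averages the per-element cosine of the two grade vectors.
    """
    if len(A) != len(B) or not A:
        raise ValueError("sets must be non-empty and on the same universe")
    total = 0.0
    for (m1, e1, n1), (m2, e2, n2) in zip(A, B):
        dot = m1 * m2 + e1 * e2 + n1 * n2
        norm = math.sqrt(m1**2 + e1**2 + n1**2) * math.sqrt(m2**2 + e2**2 + n2**2)
        total += dot / norm
    return total / len(A)
```

Identical sets score 1.0, and since all grades are nonnegative the measure stays in (0, 1] for nonzero triples; interval-valued variants apply the same idea to the lower and upper bounds of each grade interval.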