
Table of Contents

Information, Volume 11, Issue 1 (January 2020) – 56 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
Cover Story: The blockchain is a universally acclaimed innovation based on distributed ledger technology. It has [...] Read more.
Open Access Article
Mixed-Field Source Localization Based on the Non-Hermitian Matrix
Information 2020, 11(1), 56; https://doi.org/10.3390/info11010056 - 20 Jan 2020
Viewed by 93
Abstract
In this paper, an efficient high-order multiple signal classification (MUSIC)-like method is proposed for mixed-field source localization. First, a non-Hermitian matrix is designed based on a high-order cumulant. One of the steering matrices, which is related only to the directions of arrival (DOA), is proved to be orthogonal to the eigenvectors corresponding to the zero eigenvalues. The other steering matrix, which contains information on both the DOA and the range, is proved to span the same column subspace as the eigenvectors corresponding to the non-zero eigenvalues. By applying Gram–Schmidt orthogonalization, the range estimates can be obtained one by one after substituting each estimated DOA. The analysis shows that the computational complexity of the proposed method is lower than that of other methods, and its effectiveness is demonstrated with simulation results. Full article
(This article belongs to the Section Information and Communications Technology)
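The subspace idea behind MUSIC-style methods can be illustrated with a minimal sketch. The following is a generic narrowband far-field MUSIC example on a half-wavelength uniform linear array, not the authors' high-order-cumulant method for mixed-field sources; all array parameters and signal values are hypothetical.

```python
import numpy as np

# Generic far-field MUSIC sketch (illustrative only; the paper instead builds
# a non-Hermitian high-order-cumulant matrix for mixed near-/far-field sources).
rng = np.random.default_rng(0)
M, N = 8, 200                                   # sensors, snapshots
true_doas = np.deg2rad([-20.0, 30.0])

def steering(theta, M):
    # Half-wavelength ULA steering vector for direction theta (radians).
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

A = np.column_stack([steering(t, M) for t in true_doas])
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = A @ S + noise

R = X @ X.conj().T / N                          # sample covariance
_, V = np.linalg.eigh(R)                        # eigenvalues in ascending order
En = V[:, : M - 2]                              # noise subspace (M minus 2 sources)

# Steering vectors at true DOAs are (nearly) orthogonal to the noise subspace,
# so the pseudospectrum peaks there.
grid = np.deg2rad(np.linspace(-90.0, 90.0, 721))
spectrum = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t, M)) ** 2
                     for t in grid])
peak = float(np.rad2deg(grid[np.argmax(spectrum)]))   # strongest DOA estimate
```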

Open Access Article
Giving Teachers a Voice: A Study of Actual Game Use in the Classroom
Information 2020, 11(1), 55; https://doi.org/10.3390/info11010055 - 19 Jan 2020
Viewed by 167
Abstract
The adoption of games in the classroom has been studied from different angles, such as the readiness of teachers to use games or the barriers encountered. However, actual classroom practices with regard to the use of games have not been examined on a larger scale. With this research, we gave teachers a voice to report on their actual practices. We examined the current practices of a large sample of Estonian teachers (N = 1258, almost 9% of the total Estonian teacher population) in primary and secondary education in 2017. We found that most teachers use games on a regular basis, mainly for motivation and variety, but also to consolidate and teach new skills. While awareness and motivation are high and experimentation with games is widespread, practices appear fragmentary and not widely sustained. As a result of this study, we suggest the creation of an evidence base and better integration of social support structures into teacher education. This is the first large-scale study to look into Estonian teachers' actual practices, and although Estonian teachers have relatively high autonomy and technical skills, we believe that these results and further investigations are applicable in other contexts as well. Full article
(This article belongs to the Special Issue Advances in Mobile Gaming and Games-based Learning)

Open Access Article
Does Information on Automated Driving Functions and the Way of Presenting It before Activation Influence Users’ Behavior and Perception of the System?
Information 2020, 11(1), 54; https://doi.org/10.3390/info11010054 - 18 Jan 2020
Viewed by 146
Abstract
Information on automated driving functions when automation is not activated but is available has not been investigated thus far. As the possibility of conducting non-driving related activities (NDRAs) is one of the most important aspects of the perceived usefulness of automated cars, and many NDRAs are time-dependent, users should know the period for which automation is available, even when it is not activated. This article presents a study (N = 33) investigating the effects of displaying the availability duration before, versus after, activation of the automation on users' activation behavior and on how the system is rated. Furthermore, the effect of addressing users on a more personal level regarding availability, to establish "sympathy" with the system, was examined with regard to acceptance, usability, and workload. Results show that displaying the availability duration before activating the automation reduces the frequency of activations when no NDRA is executable within the automated drive. Moreover, acceptance and usability were higher and workload was reduced as a result of this information being provided. No effects were found with regard to how the user was addressed. Full article
(This article belongs to the Special Issue Automotive User Interfaces and Interactions in Automated Driving)

Open Access Article
Community Detection Based on a Preferential Decision Model
Information 2020, 11(1), 53; https://doi.org/10.3390/info11010053 - 18 Jan 2020
Viewed by 154
Abstract
Research on complex networks is a hot topic in many fields, among which community detection is a complex and meaningful process that plays an important role in studying the characteristics of complex networks. Community structure is a common feature of networks: given a graph, the process of uncovering its community structure is called community detection. Many community detection algorithms have been proposed from different perspectives. Achieving stable and accurate community division is still a non-trivial task due to the difficulty of setting specific parameters, high randomness, and lack of ground-truth information. In this paper, we explore a new decision-making method drawn from real-life communication and propose a preferential decision model based on dynamic relationships in dynamic systems. We apply this model to the label propagation algorithm and present a community detection algorithm based on the preferential decision model, called CDPD. This model aims to reveal the topological structure and the hierarchical structure of networks. By analyzing the structural characteristics of complex networks and mining the tightness between nodes, the priority of neighbor nodes is used to perform the preferential decision, until the information in the system reaches a stable state. In the experiments, we verified the performance of CDPD against eight baseline algorithms on real-world and synthetic networks. The results show that CDPD not only performs better than most recent algorithms on most datasets, but is also more suitable for community networks with ambiguous structure, especially sparse networks. Full article
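CDPD builds on the label propagation algorithm. As a point of reference, here is a minimal, deterministic sketch of plain label propagation on a hypothetical toy graph; the paper's preferential decision rule for choosing among neighbors is not reproduced here.

```python
from collections import Counter

def label_propagation(adj, max_iter=100):
    # Deterministic label propagation sketch (Raghavan-style): every node starts
    # in its own community and repeatedly adopts the label most common among its
    # neighbours, keeping its current label when it is among the modal labels.
    labels = {v: v for v in adj}
    for _ in range(max_iter):
        changed = False
        for v in sorted(adj):
            counts = Counter(labels[u] for u in adj[v])
            best = max(counts.values())
            modal = {l for l, c in counts.items() if c == best}
            if labels[v] not in modal:
                labels[v] = max(modal)   # deterministic tie-break for the sketch
                changed = True
        if not changed:
            break
    return labels

# Two 4-cliques joined by a single bridge edge (3-4): a hypothetical toy graph.
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4},
       4: {3, 5, 6, 7}, 5: {4, 6, 7}, 6: {4, 5, 7}, 7: {4, 5, 6}}
labels = label_propagation(adj)   # converges to two communities
```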

Open Access Article
Blockchain-Based Coordination: Assessing the Expressive Power of Smart Contracts
Information 2020, 11(1), 52; https://doi.org/10.3390/info11010052 - 17 Jan 2020
Viewed by 156
Abstract
A common use case for blockchain smart contracts (SC) is that of governing interaction amongst mutually untrusted parties, by automatically enforcing rules for interaction. However, while many contributions in the literature assess SC computational expressiveness, an evaluation of their power in terms of coordination (i.e., governing interaction) is still missing. This is why in this paper we test mainstream SC implementations by evaluating their expressive power in coordinating both inter-user and inter-SC activities. To do so, we exploit the archetypal Linda coordination model as a benchmark—a common practice in the field of coordination models and languages—by discussing to what extent mainstream blockchain technologies support its implementation. As they reveal some notable limitations (affecting, in particular, coordination between SC), we then show how Tenderfone, a custom blockchain implementation providing a more expressive notion of SC, addresses the aforementioned limitations. Full article
(This article belongs to the Special Issue Blockchain Technologies for Multi-Agent Systems)
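The Linda model used as the benchmark coordinates processes through a shared tuple space with `out` (write), `rd` (read), and `in` (consume) primitives. Below is a minimal single-threaded sketch with `None` as a wildcard; real Linda implementations block on `rd`/`in` until a matching tuple appears, which is omitted here.

```python
class TupleSpace:
    # Minimal Linda-style tuple space sketch: out/rd/in with None as a wildcard.
    def __init__(self):
        self.tuples = []

    def out(self, *t):
        # Write a tuple into the space.
        self.tuples.append(tuple(t))

    def _match(self, template, t):
        return len(template) == len(t) and all(
            p is None or p == v for p, v in zip(template, t))

    def rd(self, *template):
        # Non-destructive read of the first matching tuple (None if absent).
        return next((t for t in self.tuples if self._match(template, t)), None)

    def in_(self, *template):
        # Destructive read ("in" is a reserved word in Python).
        t = self.rd(*template)
        if t is not None:
            self.tuples.remove(t)
        return t

ts = TupleSpace()
ts.out("ticket", 1)
ts.out("ticket", 2)
first = ts.in_("ticket", None)   # consumes ("ticket", 1)
left = ts.rd("ticket", None)     # ("ticket", 2) is still in the space
```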

Open Access Article
MANNWARE: A Malware Classification Approach with a Few Samples Using a Memory Augmented Neural Network
Information 2020, 11(1), 51; https://doi.org/10.3390/info11010051 - 17 Jan 2020
Viewed by 111
Abstract
The ability to stop malware as soon as it starts spreading will always play an important role in defending computer systems. It would be a huge benefit to organizations and society if intelligent defense systems could detect and prevent new types of malware as soon as only a tiny number of samples is revealed. The approach introduced in this paper takes advantage of one-shot/few-shot learning algorithms to solve malware classification problems using a memory augmented neural network in combination with natural language processing techniques such as word2vec and n-grams. We embed the malware's API calls, which are a very valuable source of information for identifying malware behavior, in different feature spaces, and then feed them to the one-shot/few-shot learning models. Evaluating the model on two datasets (FFRI 2017 and APIMDS) shows that models with different parameters can yield high accuracy on malware classification with only a few samples. For example, on the APIMDS dataset, the model classified 78.85% of samples correctly after seeing only nine malware samples, and 89.59% after fine-tuning with a few other samples. The results confirm very good accuracy compared to other traditional methods and point to a new area of malware research. Full article
(This article belongs to the Special Issue Machine Learning for Cyber-Security)
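The n-gram features mentioned above can be sketched by sliding a window over an API-call trace and counting the resulting n-grams. The trace below is hypothetical; real features would come from datasets such as APIMDS.

```python
from collections import Counter

def api_ngrams(calls, n=2):
    # Slide a window of length n over an API-call trace; the resulting n-gram
    # counts can serve as input features for a malware classifier.
    return Counter(tuple(calls[i:i + n]) for i in range(len(calls) - n + 1))

# Hypothetical API-call trace for illustration.
trace = ["CreateFile", "WriteFile", "CreateFile", "WriteFile", "CloseHandle"]
bigrams = api_ngrams(trace, n=2)
```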

Open Access Editorial
Acknowledgement to Reviewers of Information in 2019
Information 2020, 11(1), 50; https://doi.org/10.3390/info11010050 - 16 Jan 2020
Viewed by 159
Open Access Article
Ordered Electric Vehicles Charging Scheduling Algorithm Based on Bidding in Residential Area
Information 2020, 11(1), 49; https://doi.org/10.3390/info11010049 - 16 Jan 2020
Viewed by 110
Abstract
With the rise of electric vehicles, a key question is how to charge them in residential areas and other closed environments. Addressing this problem is extremely important for avoiding adverse effects on the load and stability of the neighboring grids where multi-user centralized charging takes place. Therefore, we propose a dynamic charging scheduling algorithm based on user bidding. First, we determine the user charging priority according to the bids. Then, we design a resource allocation policy based on game theory, which assigns charging slots to users. To handle users leaving and urgent user needs, we introduce an alternation principle that improves the flexibility of charging-slot utilization. Simulation results show that the algorithm can meet the priority needs of users with higher charging prices and respond to requests in a timely manner. Meanwhile, this algorithm can ensure orderly electric vehicle charging, improve power utilization efficiency, and ease pressure on grid loads. Full article
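The bid-based priority idea can be sketched with a simple priority queue: higher bids are served first, limited by the available charging slots. This is an illustration only; the paper's game-theoretic allocation policy and alternation principle are not reproduced, and all user names and bid values are hypothetical.

```python
import heapq

def allocate_slots(requests, slots):
    # requests: list of (user, bid). Serve users in descending bid order,
    # up to the number of available charging slots.
    heap = [(-bid, user) for user, bid in requests]   # max-heap via negation
    heapq.heapify(heap)
    served = []
    while heap and len(served) < slots:
        _, user = heapq.heappop(heap)
        served.append(user)
    return served

requests = [("u1", 0.30), ("u2", 0.55), ("u3", 0.42), ("u4", 0.25)]
served = allocate_slots(requests, slots=2)   # the two highest bidders
```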

Open Access Article
A Revision of the Buechner–Tavani Model of Digital Trust and a Philosophical Problem It Raises for Social Robotics
Information 2020, 11(1), 48; https://doi.org/10.3390/info11010048 - 16 Jan 2020
Viewed by 125
Abstract
In this paper the Buechner–Tavani model of digital trust is revised—new conditions for self-trust are incorporated into the model. These new conditions raise several philosophical problems concerning the idea of a substantial self for social robotics, which are closely examined. I conclude that reductionism about the self is incompatible with, while the idea of a substantial self is compatible with, trust relations between human agents, between human agents and artificial agents, and between artificial agents. Full article
(This article belongs to the Special Issue Advances in Social Robots)
Open Access Article
High-Fidelity Router Emulation Technologies Based on Multi-Scale Virtualization
Information 2020, 11(1), 47; https://doi.org/10.3390/info11010047 - 16 Jan 2020
Viewed by 121
Abstract
Virtualization has the advantages of strong scalability and high fidelity in host node emulation. It can effectively meet the requirements of network emulation, including large scale, high fidelity, and flexible construction. However, for router emulation, virtual routers built with virtualization and routing software use Linux Traffic Control to emulate bandwidth, delay, and packet loss rates, which results in serious distortions in congestion scenarios. Motivated by this deficiency, we propose a novel router emulation method that consists of a virtualization plane, a routing plane, and a traffic control method. We designed and implemented our traffic control module at multiple virtualization scales, covering the kernel space of a KVM-based virtual router and the user space of a Docker-based virtual router. Experiments show not only that the proposed method achieves high-fidelity router emulation, but also that its performance is consistent with that of a physical router in congestion scenarios. These findings provide good support for network research into congestion scenarios on virtualization-based emulation platforms. Full article
(This article belongs to the Section Information and Communications Technology)

Open Access Article
Linguistic Pythagorean Einstein Operators and Their Application to Decision Making
Information 2020, 11(1), 46; https://doi.org/10.3390/info11010046 - 16 Jan 2020
Viewed by 102
Abstract
The linguistic Pythagorean fuzzy (LPF) set is an efficacious technique for comprehensively representing uncertain assessment information by combining Pythagorean fuzzy numbers and linguistic variables. In this paper, we define several novel essential operations of LPF numbers based upon Einstein operations and discuss several relations between these operations. For solving the LPF number fusion problem, several LPF aggregation operators, including the LPF Einstein weighted averaging (LPFEWA) operator, the LPF Einstein weighted geometric (LPFEWG) operator, and LPF Einstein hybrid operators, are propounded; the prominent characteristics of these operators are investigated as well. Furthermore, a multi-attribute group decision making (MAGDM) approach is presented on the basis of the developed operators under an LPF environment. Ultimately, two application cases are utilized to demonstrate the practicality and feasibility of the developed decision approach, and a comparative analysis is provided to manifest its merits. Full article
(This article belongs to the Special Issue Artificial Intelligence and Decision Support Systems)
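On plain scalars in [0, 1], the Einstein operations underlying such aggregation operators reduce to the Einstein t-conorm and t-norm. A minimal sketch of these scalar operations follows; the paper applies them to linguistic Pythagorean fuzzy numbers, not plain scalars.

```python
def einstein_sum(a, b):
    # Einstein t-conorm: bounded addition on [0, 1].
    return (a + b) / (1 + a * b)

def einstein_product(a, b):
    # Einstein t-norm: the dual bounded product on [0, 1].
    return (a * b) / (1 + (1 - a) * (1 - b))

s = einstein_sum(0.5, 0.5)        # 0.8, stays below 1
p = einstein_product(0.5, 0.5)    # 0.2
```

Note that `einstein_sum(1.0, x)` is always 1, so the operation never leaves the unit interval, which is what makes it suitable for aggregating membership degrees.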

Open Access Article
CWPC_BiAtt: Character–Word–Position Combined BiLSTM-Attention for Chinese Named Entity Recognition
Information 2020, 11(1), 45; https://doi.org/10.3390/info11010045 - 15 Jan 2020
Viewed by 185
Abstract
Named Entity Recognition (NER), which usually takes Part-Of-Speech (POS) tags as linguistic features, is a major task in Natural Language Processing (NLP). In this paper, we put forward a new comprehensive embedding that stitches together three aspects, namely character embedding, word embedding, and POS embedding, in the given order, and thus captures their dependencies; based on this, we propose a new Character–Word–Position Combined BiLSTM-Attention (CWPC_BiAtt) model for the Chinese NER task. Passing the comprehensive embedding through a Bidirectional Long Short-Term Memory (BiLSTM) layer captures the connection between historical and future information, and an attention mechanism then captures the connection between the content of the sentence at the current position and that at any other location. Finally, we utilize a Conditional Random Field (CRF) to decode the entire tagging sequence. Experiments show that the proposed CWPC_BiAtt model is well qualified for the NER task on the Microsoft Research Asia (MSRA) dataset and the Weibo NER corpus. High precision and recall were obtained, which verified the stability of the model. The position embedding in the comprehensive embedding can compensate for the attention mechanism's lack of position information for unordered sequences, which shows that the comprehensive embedding is complete. Overall, our proposed CWPC_BiAtt model has three distinct characteristics: completeness, simplicity, and stability. It achieved the highest F-score, attaining state-of-the-art performance on the MSRA dataset and the Weibo NER corpus. Full article
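The comprehensive embedding can be sketched as a per-token concatenation of character-, word-, and position-level vectors before the BiLSTM layer. All dimensions, vocabulary sizes, and IDs below are hypothetical toy values.

```python
import numpy as np

# Toy lookup tables (random values standing in for trained embeddings).
rng = np.random.default_rng(0)
char_emb = rng.standard_normal((100, 8))    # char vocab 100, dim 8
word_emb = rng.standard_normal((500, 16))   # word vocab 500, dim 16
pos_emb = rng.standard_normal((50, 4))      # max sequence length 50, dim 4

def comprehensive_embedding(char_ids, word_ids):
    # One concatenated vector per token: [char ; word ; position].
    positions = np.arange(len(word_ids))
    return np.concatenate([char_emb[char_ids],
                           word_emb[word_ids],
                           pos_emb[positions]], axis=1)

seq = comprehensive_embedding(char_ids=[3, 7, 7, 1], word_ids=[42, 17, 17, 5])
# seq has one 8 + 16 + 4 = 28-dimensional vector per token, ready for a BiLSTM
```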

Open Access Article
Dramatically Reducing Search for High Utility Sequential Patterns by Maintaining Candidate Lists
Information 2020, 11(1), 44; https://doi.org/10.3390/info11010044 - 15 Jan 2020
Viewed by 140
Abstract
A ubiquitous challenge throughout all areas of data mining, particularly in the mining of frequent patterns in large databases, is centered on the necessity to reduce the time and space required to perform the search. The extent of this reduction proportionally facilitates the ability to identify patterns of interest. High utility sequential pattern mining (HUSPM) seeks to identify frequent patterns that are (1) sequential in nature and (2) hold a significant magnitude of utility in a sequence database, by considering the aspect of item value or importance. While traditional sequential pattern mining relies on the downward closure property to significantly reduce the required search space, with HUSPM, this property does not hold. To address this drawback, an approach is proposed that establishes a tight upper bound on the utility of future candidate sequential patterns by maintaining a list of items that are deemed potential candidates for concatenation. Such candidates are provably the only items that are ever needed for any extension of a given sequential pattern or its descendants in the search tree. This list is then exploited to significantly further tighten the upper bound on the utilities of descendent patterns. An extension of this work is then proposed that significantly reduces the computational cost of updating database utilities each time a candidate item is removed from the list, resulting in a massive reduction in the number of candidate sequential patterns that need to be generated in the search. Sequential pattern mining methods implementing these new techniques for bound reduction and further candidate list reduction are demonstrated via the introduction of the CRUSP and CRUSPPivot algorithms, respectively. Validation of the techniques was conducted on six public datasets. 
Tests show that use of the CRUSP algorithm results in a significant reduction in the overall number of candidate sequential patterns that need to be considered, and subsequently a significant reduction in run time, when compared to the current state of the art in bounding techniques. When employing the CRUSPPivot algorithm, the further reduction in the size of the search space was found to be dramatic, with the reduction in run time found to be dramatic to moderate, depending on the dataset. Demonstrating the practical significance of the work, experiments showed that time required for one particularly complex dataset was reduced from many hours to less than one minute. Full article
(This article belongs to the Special Issue Big Data Research, Development, and Applications––Big Data 2018)
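A classic pruning device in HUSPM is the sequence-weighted utilization (SWU) upper bound: an item whose SWU falls below the utility threshold cannot occur in any high-utility pattern. The sketch below illustrates SWU pruning on a hypothetical quantitative sequence database; the candidate lists maintained by CRUSP and CRUSPPivot are much tighter bounds than this baseline.

```python
def swu_prune(database, min_util):
    # database: list of sequences; each sequence is a list of (item, utility).
    # SWU(item) = sum of the total utility of every sequence containing the item.
    seq_utils = [sum(u for _, u in seq) for seq in database]
    swu = {}
    for seq, su in zip(database, seq_utils):
        for item in {i for i, _ in seq}:
            swu[item] = swu.get(item, 0) + su
    # Items with SWU below min_util can be pruned from the search outright.
    return {i for i, s in swu.items() if s >= min_util}, swu

# Hypothetical quantitative sequences with per-occurrence utilities.
db = [[("a", 3), ("b", 1), ("a", 2)],
      [("b", 2), ("c", 4)],
      [("a", 1), ("c", 1)]]
kept, swu = swu_prune(db, min_util=9)   # only "b" survives this threshold
```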

Open Access Review
What Makes a Social Robot Good at Interacting with Humans?
Information 2020, 11(1), 43; https://doi.org/10.3390/info11010043 - 13 Jan 2020
Viewed by 179
Abstract
This paper discusses the nuances of social robots, how and why they are becoming increasingly significant, and what they are currently being used for. It also reflects on the current design of social robots as a means of interaction with humans and reports potential answers to several important questions about the future design of these robots. The specific questions explored are: "Do social robots need to look like living creatures that already exist in the world for humans to interact well with them?"; "Do social robots need to have animated faces for humans to interact well with them?"; "Do social robots need to have the ability to speak a coherent human language for humans to interact well with them?"; and "Do social robots need to have the capability to make physical gestures for humans to interact well with them?". This paper reviews both verbal and nonverbal social and conversational cues that could be incorporated into the design of social robots, and briefly discusses the emotional bonds that may be built between humans and robots. Facets surrounding the acceptance of social robots by humans, as well as ethical and moral concerns, are also discussed. Full article
(This article belongs to the Special Issue Advances in Social Robots)

Open Access Article
Recursive Matrix Calculation Paradigm by the Example of Structured Matrix
Information 2020, 11(1), 42; https://doi.org/10.3390/info11010042 - 13 Jan 2020
Viewed by 151
Abstract
In this paper, we derive recursive algorithms for calculating the determinant and inverse of the generalized Vandermonde matrix. The main advantage of the recursive algorithms is that their computational complexity is better than that of calculating the determinant and the inverse by means of classical methods developed for general matrices. The results of this article do not require any symbolic calculations and, therefore, can be performed by a numerical algorithm implemented in a specialized environment (like Matlab or Mathematica) or a general-purpose programming language (C, C++, Java, Pascal, Fortran, etc.). Full article
(This article belongs to the Special Issue Selected Papers from ESM 2019)
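For the classical (non-generalized) Vandermonde matrix with entries V[i][j] = x_i**j, the recursive idea is easy to state: det V(x_1, ..., x_n) = det V(x_1, ..., x_{n-1}) * prod_{i<n}(x_n - x_i). A sketch of that recursion follows; the paper's generalized Vandermonde case is more involved.

```python
def vandermonde_det(xs):
    # Recursive determinant of the classical Vandermonde matrix built on the
    # nodes xs, using the peel-off-the-last-node recursion. Runs in O(n^2)
    # arithmetic operations instead of the O(n^3) of generic elimination.
    if len(xs) <= 1:
        return 1
    last = xs[-1]
    factor = 1
    for x in xs[:-1]:
        factor *= last - x
    return vandermonde_det(xs[:-1]) * factor

det = vandermonde_det([1, 2, 4])   # (2-1)*(4-1)*(4-2) = 6
```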

Open Access Article
Viability of Neural Networks for Core Technologies for Resource-Scarce Languages
Information 2020, 11(1), 41; https://doi.org/10.3390/info11010041 - 12 Jan 2020
Viewed by 186
Abstract
In this paper, the viability of neural network implementations of core technologies (the focus of this paper is on text technologies) for ten resource-scarce South African languages is evaluated. Neural networks are increasingly being used in place of other machine learning methods for many natural language processing tasks with good results. However, in the South African context, where most languages are resource-scarce, very little research has been done on neural network implementations of core language technologies. In this paper, we address this gap by evaluating neural network implementations of four core technologies for ten South African languages: part-of-speech (POS) tagging, named entity recognition (NER), compound analysis, and lemmatization. Neural architectures that performed well on similar tasks in other settings were implemented for each task, and their performance was assessed in comparison with currently used machine learning implementations of each technology. The neural network models evaluated perform better than the baselines for compound analysis, are viable and comparable to the baselines on most languages for POS tagging and NER, and are viable, but not on par with the baseline, for Afrikaans lemmatization. Full article
(This article belongs to the Special Issue Computational Linguistics for Low-Resource Languages)
Open Access Article
Comparing Web Accessibility Evaluation Tools and Evaluating the Accessibility of Webpages: Proposed Frameworks
Information 2020, 11(1), 40; https://doi.org/10.3390/info11010040 - 12 Jan 2020
Viewed by 207
Abstract
With the growth of e-services in the past two decades, the concept of web accessibility has been given attention to ensure that every individual can benefit from these services without any barriers. Web accessibility is considered one of the main factors that should be taken into consideration while developing webpages. The Web Content Accessibility Guidelines 2.0 (WCAG 2.0) were developed to guide web developers in ensuring that web content is accessible to all users, especially disabled users. Many automatic tools have been developed to check the compliance of websites with accessibility guidelines such as WCAG 2.0 and to help web developers and content creators design webpages without barriers for disabled people. Despite the popularity of accessibility evaluation tools in practice, there is no systematic way to compare their performance. This paper presents two novel frameworks: the first compares the performance of web accessibility evaluation tools in detecting web accessibility issues based on WCAG 2.0, and the second evaluates how well webpages meet these guidelines. Six homepages of Saudi universities were chosen as case studies to substantiate the concept of the proposed frameworks. Furthermore, two popular web accessibility evaluators, WAVE and SiteImprove, were selected for performance comparison. The outcomes of studies conducted using the first proposed framework showed that SiteImprove outperformed WAVE. Based on these outcomes, we conclude that web administrators would benefit from the first framework in selecting an appropriate tool, based on its performance, to evaluate their websites against accessibility criteria and guidelines. Moreover, the findings of the studies conducted using the second proposed framework showed that the homepage of Taibah University is more accessible than the homepages of the other Saudi universities.
Based on the findings of this study, the second framework can be used by web administrators and developers to measure the accessibility of their websites. This paper also discusses the most common accessibility issues reported by WAVE and SiteImprove. Full article
(This article belongs to the Section Information and Communications Technology)
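A single WCAG 2.0 check of the kind such evaluators automate can be sketched in a few lines: flagging `img` elements that lack a non-empty `alt` attribute (success criterion 1.1.1, non-text content). This toy checker is illustrative only; tools like WAVE and SiteImprove run hundreds of such tests.

```python
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    # Minimal WCAG 2.0 sketch: collect <img> tags without a non-empty alt
    # attribute, recording their src so the issue can be reported.
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                self.missing_alt.append(attrs.get("src", "?"))

checker = AltChecker()
checker.feed('<img src="logo.png" alt="University logo"><img src="banner.png">')
issues = checker.missing_alt   # only the second image is flagged
```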

Open Access Article
Artificial Intelligence-Enhanced Decision Support for Informing Global Sustainable Development: A Human-Centric AI-Thinking Approach
Information 2020, 11(1), 39; https://doi.org/10.3390/info11010039 - 11 Jan 2020
Viewed by 323
Abstract
Sustainable development is crucial to humanity, and the analysis of primary socio-environmental data is essential for informing policy makers' decisions about sustainability in development. Artificial intelligence (AI)-based approaches are useful for analyzing such data; however, it is not easy for people who are not trained in computer science to use AI. The significance and novelty of this paper is that it shows how the use of AI can be democratized via a user-friendly, human-centric probabilistic reasoning approach. Using this approach, analysts who are not computer scientists can also use AI to analyze sustainability-related data. Further, this human-centric probabilistic reasoning approach can serve as cognitive scaffolding that educes AI-Thinking in analysts, prompting them to ask more questions and providing decision support to inform policy making in sustainable development. This paper uses the 2018 Environmental Performance Index (EPI) data from 180 countries, which include performance indicators covering environmental health and ecosystem vitality. AI-based predictive modeling techniques are applied to the 2018 EPI data to reveal the hidden tensions between the two fundamental dimensions of sustainable development: (1) environmental health, which improves with economic growth and increasing affluence, and (2) ecosystem vitality, which worsens due to industrialization and urbanization. Full article
(This article belongs to the Special Issue Artificial Intelligence and Decision Support Systems)
Open AccessArticle
Optimal Feature Aggregation and Combination for Two-Dimensional Ensemble Feature Selection
Information 2020, 11(1), 38; https://doi.org/10.3390/info11010038 - 10 Jan 2020
Abstract
Feature selection is a way of reducing the features of data so that, when the classification algorithm runs, it produces better accuracy. In general, conventional feature selection is quite unstable when faced with changing data characteristics, and implementing an individual feature selection method can be inefficient in some cases. Ensemble feature selection exists to overcome this problem; however, alongside its advantages, issues such as stability, thresholding, and feature aggregation still need to be addressed. We propose a new framework to deal with stability and feature aggregation, and we also tested an automatic threshold to see whether it was efficient. The results showed that the proposed method always produces the best performance in both accuracy and feature reduction: it improved accuracy over other methods by 0.5–14% and reduced 50% more features. The stability of the proposed method was also excellent, with an average of 0.9. However, applying the automatic threshold brought no beneficial improvement over the fixed threshold. Overall, the proposed method performed excellently compared to previous work and to standard ReliefF. Full article
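Ensemble feature aggregation of the kind described above is often implemented by combining the rankings produced by several selectors. A minimal sketch of mean-rank aggregation (the scorer outputs below are invented for illustration; the paper's exact aggregation scheme may differ):

```python
def rank_features(scores):
    """Return ranks (0 = best) for a list of feature scores; higher score is better."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ranks = [0] * len(scores)
    for rank, idx in enumerate(order):
        ranks[idx] = rank
    return ranks

def ensemble_select(score_lists, k):
    """Aggregate several rankers by mean rank and keep the top-k features."""
    n = len(score_lists[0])
    rank_lists = [rank_features(s) for s in score_lists]
    mean_rank = [sum(r[i] for r in rank_lists) / len(rank_lists) for i in range(n)]
    return sorted(range(n), key=lambda i: mean_rank[i])[:k]

# Three hypothetical rankers scoring the same five features:
scores = [
    [0.9, 0.1, 0.5, 0.3, 0.2],
    [0.8, 0.2, 0.6, 0.1, 0.3],
    [0.7, 0.3, 0.9, 0.2, 0.1],
]
selected = ensemble_select(scores, k=2)
print(selected)  # [0, 2] — features 0 and 2 have the best mean rank
```

Averaging ranks rather than raw scores makes the aggregation robust to selectors whose scores live on different scales, which is one way such ensembles gain stability.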
Open AccessArticle
Opportunistic Multi-Technology Cooperative Scheme and UAV Relaying for Network Disaster Recovery
Information 2020, 11(1), 37; https://doi.org/10.3390/info11010037 - 10 Jan 2020
Abstract
Disaster scenarios are particularly catastrophic in urban environments, which are very densely populated in many cases. Disasters not only endanger the life of people, but also affect the existing communication infrastructures. In fact, such an infrastructure could be completely destroyed or damaged; even when it continues working, it suffers from high access demand to its limited resources within a short period of time. This work evaluates the performances of smartphones and leverages the ubiquitous presence of mobile devices in urban scenarios to assist search and rescue activities following a disaster. Specifically, it proposes a collaborative protocol that opportunistically organizes mobile devices in multiple tiers by targeting a fair energy consumption in the whole network. Moreover, it introduces a data collection scheme that employs drones to scan the disaster area and to visit mobile devices and collect their data in a short time. Simulation results in realistic settings show that the proposed solution balances the energy consumption in the network by means of efficient drone routes and smart self-organization, thereby effectively assisting search and rescue operations. Full article
(This article belongs to the Special Issue Applications in Opportunistic Networking)
Open AccessArticle
Upgrading Physical Layer of Multi-Carrier OGFDM Waveform for Improving Wireless Channel Capacity of 5G Mobile Networks and Beyond
Information 2020, 11(1), 35; https://doi.org/10.3390/info11010035 - 10 Jan 2020
Abstract
On the brink of sophisticated generations of mobile networks, starting with the fifth generation (5G) and moving on to future mobile technologies, developing the wireless telecommunications waveform is a pressing necessity. The main reason is to support the future digital lifestyle, which principally demands maximizing wireless channel capacity and the number of connected users. In this paper, an upgraded design of the multi-carrier orthogonal generalized frequency division multiplexing (OGFDM) waveform, which aims to enlarge the number of mobile subscribers while sustaining a high transmission capacity for each, is presented, explored, and evaluated. The expanded multi-carrier OGFDM can improve the performance of a future wireless network that targets both broad sharing operation (scalability) and an elevated transmission rate. From a spectrum perspective, the upgraded OGFDM can mitigate the side effect of an increased number of network subscribers on the transmission bit-rate of each frequency subcarrier. This is achieved primarily by utilizing the developed OGFDM features, such as acceleration ability, filter orthogonality, interference avoidance, subcarrier scalability, and flexible bit loading. Consequently, the introduced OGFDM can supply lower latency, better bandwidth efficiency, higher robustness, wider sharing, and more resilient bit loading than the current waveform. To highlight the main advantages of the proposed OGFDM, the system performance is compared with the initial design of the multi-carrier OGFDM as well as with the 5G waveform, generalized frequency division multiplexing (GFDM). The experimental results show that by moving from both the conventional OGFDM and GFDM at 4 GHz to the advanced OGFDM at 6 GHz, the gained channel capacity is improved. Owing to the efficient use of Hilbert filters and an improved sampling acceleration rate, the upgraded system gains about 3 dB and 1.5 dB relative to the OGFDM and GFDM, respectively. This, in turn, maximizes the overall channel capacity of the enhanced OGFDM and raises the bit-rate of each user in the mobile network. In addition, by employing the OGFDM with dual oversampling, the achieved channel capacity in the worst transmission condition is increased to around six and twelve times that of the OGFDM and GFDM with normal oversampling. Furthermore, applying the promoted OGFDM with adaptive modulation maximizes the overall channel capacity by up to around 1.66 dB and 3.32 dB compared to the initial OGFDM and GFDM, respectively. A MATLAB simulation is used to evaluate the transmission performance in terms of channel capacity and bit error rate (BER) in an electrical back-to-back wireless transmission system. Full article
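As background for the capacity comparison above, the Shannon–Hartley theorem bounds channel capacity by bandwidth and signal-to-noise ratio. The sketch below is illustrative only (the SNR value is invented and this is not the paper's MATLAB simulation); it shows how widening the channel from 4 GHz to 6 GHz scales the bound:

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    """Shannon-Hartley upper bound on capacity (bit/s): C = B * log2(1 + SNR)."""
    snr_linear = 10 ** (snr_db / 10)  # convert SNR from dB to a linear ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# At an identical (hypothetical) 20 dB SNR, the bound grows in proportion
# to bandwidth when moving from a 4 GHz to a 6 GHz channel:
c4 = shannon_capacity(4e9, 20.0)
c6 = shannon_capacity(6e9, 20.0)
print(c6 / c4)  # 1.5
```

Real gains also depend on modulation, filtering, and interference, which is why the abstract's measured improvements are not simply proportional to bandwidth.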
(This article belongs to the Special Issue Emerging Topics in Wireless Communications for Future Smart Cities)
Open AccessArticle
Vehicle Routing Optimization of Instant Distribution Routing Based on Customer Satisfaction
Information 2020, 11(1), 36; https://doi.org/10.3390/info11010036 - 09 Jan 2020
Abstract
Since existing distribution route optimization does not sufficiently consider the actual factors of the instant distribution service scenario, a route optimization model of the instant distribution system based on customer time satisfaction is proposed. The actual factors in instant distribution, such as the soft time window, the pay-to-order mechanism, the time for the merchant to prepare goods before delivery, and the deliveryman's order combining, were incorporated in the model. A multi-objective optimization framework based on the total cost function and customer time satisfaction was established. Dual-layer chromosome coding based on the deliveryman-to-node mapping and the access order was conducted, and the nondominated sorting genetic algorithm version II (NSGA-II) was used to solve the problem. According to the numerical results, when customer time satisfaction was considered in the instant distribution routing problem, customer satisfaction increased effectively and a balance between customer satisfaction and delivery cost was obtained by means of Pareto optimization, with a minor increase in the delivery cost, while the number of deliverymen slightly increased to meet the on-time delivery needs of customers. Full article
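The core step of NSGA-II is non-dominated sorting of candidate solutions. A minimal sketch of extracting the first Pareto front, with hypothetical (delivery cost, customer dissatisfaction) pairs standing in for the paper's objectives (both minimized):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated points: the first front of NSGA-II's sorting."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical solutions as (delivery cost, customer dissatisfaction):
solutions = [(10, 0.4), (12, 0.2), (11, 0.5), (15, 0.1), (13, 0.3)]
print(pareto_front(solutions))  # [(10, 0.4), (12, 0.2), (15, 0.1)]
```

Every point on this front trades one objective against the other, which is exactly the cost-versus-satisfaction balance the abstract reports.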
Open AccessArticle
Simulating, Off-Chain and On-Chain: Agent-Based Simulations in Cross-Organizational Business Processes
Information 2020, 11(1), 34; https://doi.org/10.3390/info11010034 - 07 Jan 2020
Abstract
Information systems execute increasingly complex business processes, often across organizations. Blockchain technology has emerged as a potential facilitator of (semi)-autonomous cross-organizational business process execution; in particular, so-called consortium blockchains can be considered as promising enablers in this context, as they do not require the use of cryptocurrency-based blockchain technology, as long as the trusted (authenticated) members of the network are willing to provide computing resources for consensus-finding. However, increased autonomy in the execution of business processes also requires the delegation of business decisions to machines. To support complex decision-making processes by assessing potential future outcomes, agent-based simulations can be considered a useful tool for the autonomous enterprise. In this paper, we explore the intersection of multi-agent simulations and consortium blockchain technology in the context of enterprise applications by devising architectures and technology stacks for both off-chain and on-chain agent-based simulation in the context of blockchain-based business process execution. Full article
(This article belongs to the Special Issue Blockchain Technologies for Multi-Agent Systems)
Open AccessArticle
Role of Personalization in Continuous Use Intention of Mobile News Apps in India: Extending the UTAUT2 Model
Information 2020, 11(1), 33; https://doi.org/10.3390/info11010033 - 07 Jan 2020
Abstract
The aim of this study was to empirically examine the extended unified theory of acceptance and use of technology 2 (UTAUT2) model by adding "personalization" as one of the antecedents, as well as a moderator, to determine the key factors for the continuous use intention of mobile news applications (apps). For this study, an online and manual sample survey of 309 respondents who had used a news app earlier was collected and analyzed using quantitative methods such as exploratory and confirmatory factor analysis, structural equation modeling, and the Hayes PROCESS procedure for finding moderating effects among variables. The findings for the direct effects demonstrated that performance expectancy (PE) has the most influential effect on continuous use intention, followed by habit (HT), hedonic motivation (HM), and facilitating conditions (FC). Furthermore, the tests for the moderating effect of personalization between the UTAUT2 constructs and continuous use intention (CUI) showed that personalization significantly moderates performance expectancy and habit. Therefore, this research establishes the key role of PE, HT, HM, and FC as the main factors that trigger users' continuous use intention of news apps and provides an integrated framework to assess the moderating effect of personalization on technology acceptance. The findings expand the existing literature on news applications and provide a foundation for future research in the area of mobile news apps. Full article
(This article belongs to the Section Information and Communications Technology)
Open AccessArticle
Short-Term Solar Irradiance Forecasting Based on a Hybrid Deep Learning Methodology
Information 2020, 11(1), 32; https://doi.org/10.3390/info11010032 - 06 Jan 2020
Abstract
Accurate prediction of solar irradiance is beneficial in reducing the energy waste associated with photovoltaic power plants, preventing system damage caused by severe fluctuations of solar irradiance, and stabilizing the power output integration between different power grids. Considering the randomness and multidimensional nature of weather data, a hybrid deep learning model that combines a gated recurrent unit (GRU) neural network and an attention mechanism is proposed for forecasting solar irradiance changes in four different seasons. In the first step, an Inception neural network and ResNet are designed to extract features from the original dataset. Secondly, the extracted features are input into the recurrent neural network (RNN) for model training. Experimental results show that the proposed hybrid deep learning model accurately predicts short-term solar irradiance changes. In addition, the forecasting performance of the model is better than that of traditional deep learning models such as long short-term memory (LSTM) and GRU. Full article
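Since the model centers on a GRU, a minimal scalar GRU update illustrates the gating arithmetic. This is a pure-Python toy with invented, untrained weights and an invented input sequence, not the paper's trained network:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, p):
    """One GRU update for a scalar input x and hidden state h."""
    z = sigmoid(p["wz"] * x + p["uz"] * h + p["bz"])               # update gate
    r = sigmoid(p["wr"] * x + p["ur"] * h + p["br"])               # reset gate
    h_cand = math.tanh(p["wh"] * x + p["uh"] * (r * h) + p["bh"])  # candidate state
    return (1 - z) * h + z * h_cand                                # gated interpolation

# Toy weights (illustrative only) and a short irradiance-like sequence:
params = {"wz": 0.5, "uz": 0.1, "bz": 0.0,
          "wr": 0.4, "ur": 0.2, "br": 0.0,
          "wh": 0.9, "uh": 0.3, "bh": 0.0}
h = 0.0
for x in [0.2, 0.5, 0.1]:
    h = gru_step(x, h, params)
print(round(h, 3))
```

The update gate z decides how much of the old state survives each step, which is what lets a GRU track slowly varying signals such as seasonal irradiance trends.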
(This article belongs to the Special Issue Machine Learning on Scientific Data and Information)
Open AccessArticle
Semi-Automatic Corpus Expansion and Extraction of Uyghur-Named Entities and Relations Based on a Hybrid Method
Information 2020, 11(1), 31; https://doi.org/10.3390/info11010031 - 06 Jan 2020
Abstract
Relation extraction is an important task with many applications in natural language processing, such as structured knowledge extraction, knowledge graph construction, and automatic question answering system construction. However, relatively little past work has focused on the construction of a corpus and the extraction of Uyghur named-entity relations, resulting in very limited relation extraction research and a deficiency of annotated relation data. The present article addresses this issue by proposing a hybrid Uyghur named-entity relation extraction method that combines a conditional random field model, which makes annotation suggestions based on extracted relations, with a set of rules applied by human annotators to rapidly increase the size of the Uyghur corpus. We integrate our relation extraction method into an existing annotation tool and, with the help of human correction, implement Uyghur relation extraction and expand the existing corpus. The effectiveness of our proposed approach is demonstrated experimentally on an existing Uyghur corpus, where our method achieves a maximum weighted average of precision and recall of 61.34%. The proposed method achieves state-of-the-art results on entity and relation extraction tasks in Uyghur. Full article
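The "weighted average of precision and recall" reported above is presumably an F-measure. For reference, the weighted harmonic mean can be computed as follows (the precision and recall values below are illustrative, not the paper's):

```python
def f_beta(precision, recall, beta=1.0):
    """Weighted harmonic mean of precision and recall (F-measure).

    beta > 1 weights recall more heavily; beta = 1 gives the familiar F1 score.
    """
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical per-relation scores:
print(round(f_beta(0.65, 0.58), 3))  # 0.613
```

A per-class F-score weighted by class frequency is the usual way to summarize extraction quality over relation types of very different sizes.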
Open AccessArticle
Named-Entity Recognition in Sports Field Based on a Character-Level Graph Convolutional Network
Information 2020, 11(1), 30; https://doi.org/10.3390/info11010030 - 05 Jan 2020
Abstract
Traditional named-entity recognition methods ignore the correlation between named entities and lose the hierarchical structural information between the named entities in a given text. Although traditional named-entity methods are effective for conventional datasets with simple structures, they are not as effective for sports texts. This paper proposes a Chinese sports text named-entity recognition method based on a character-level graph convolutional neural network (Char GCN) with a self-attention mechanism. In this method, each Chinese character in the sports text is regarded as a node, and the edges between nodes are constructed using similar character positions and the character features of the named entities in the sports text. The internal structural information of an entity is extracted using the character graph convolutional neural network, while the hierarchical semantic information of the sports text is captured by the self-attention model to enhance the relationship between the named entities and capture the relevance and dependency between the characters. A conditional random fields classification function then accurately identifies the named entities in the Chinese sports text. The results on four datasets demonstrate that the proposed method significantly improves the F-score to 92.51%, 91.91%, 93.98%, and 95.01%, respectively, in comparison to traditional methods. Full article
(This article belongs to the Section Artificial Intelligence)
Open AccessArticle
Importance Analysis of Components of a Multi-Operational-State Power System Using Fault Tree Models
Information 2020, 11(1), 29; https://doi.org/10.3390/info11010029 - 05 Jan 2020
Abstract
This article describes a case study using a fault tree analysis for a multi-operational-state system (system with several operational states) model with many different technical solutions for the power system of a fishing vessel. We describe the essence of system dependability metamodeling. A vector of external events was used to construct a detailed metamodel, depending on the operational status being modeled. In a fault tree, individual external events modify the structure of a system. The analysis includes the following operational states: sea voyages of a vessel, hauling in and paying out nets, trawling, staying in a port, and heaving to. For each operational state and assumed system configurations, the importance of system components was determined by calculating the Vesely–Fussell measures. The most important components for each operational state of a system were determined, and the critical system components, that is, those that are important in every operational state and system configuration, were identified. Full article
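The Vesely–Fussell importance of a component can be computed, under the rare-event approximation, as the probability that some minimal cut set containing the component fails, divided by the system failure probability. A minimal sketch with a hypothetical two-cut-set tree (component names and failure probabilities are invented for illustration, not taken from the vessel case study):

```python
def cut_prob(cut, p):
    """Probability that every basic event in one minimal cut set occurs."""
    prob = 1.0
    for component in cut:
        prob *= p[component]
    return prob

def vesely_fussell(cut_sets, p):
    """Rare-event approximation of the Vesely-Fussell importance per component."""
    system = sum(cut_prob(c, p) for c in cut_sets)  # approx. system failure prob.
    components = {c for cut in cut_sets for c in cut}
    return {c: sum(cut_prob(cs, p) for cs in cut_sets if c in cs) / system
            for c in components}

# Hypothetical fault tree: the generator alone fails the system, or the
# switch and cable fail together.
p = {"gen": 0.01, "switch": 0.02, "cable": 0.05}
cut_sets = [{"gen"}, {"switch", "cable"}]
importance = vesely_fussell(cut_sets, p)
print(importance)
```

Repeating this calculation per operational state, with the cut sets that state's configuration induces, is what lets the analysis single out components that are critical in every state.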
Open AccessArticle
Execution Plan Control in Dynamic Coalition of Robots with Smart Contracts and Blockchain
Information 2020, 11(1), 28; https://doi.org/10.3390/info11010028 - 04 Jan 2020
Abstract
The paper presents an approach that uses blockchain and smart contracts for dynamic robot coalition creation. A coalition is formed to solve complex industrial tasks that require sequential, coordinated actions from several robots. The main idea is that the process is split into two stages: scheduling and dynamic execution. In the scheduling stage, the coalition is defined based on the correlation between the existing tasks and the robots' equipment, and the execution plan is formed and stored in smart contracts. The second stage is plan execution. During this stage, the smart contract controls how each robot solves its sub-task and whether it does so by the planned moment in time. In case of any deviation from the plan, the smart contracts provide a solution for returning to the plan or for changing the coalition composition with new robots and a new execution plan. A prototype of the execution control system has been developed based on the Hyperledger Fabric platform. Full article
(This article belongs to the Special Issue Blockchain and Smart Contract Technologies)
Open AccessArticle
K-Means Clustering-Based Electrical Equipment Identification for Smart Building Application
Information 2020, 11(1), 27; https://doi.org/10.3390/info11010027 - 01 Jan 2020
Abstract
With the development and popular application of Building Internet of Things (BIoT) systems, numerous types of equipment are connected, and a large volume of equipment data is collected. For convenient equipment management, the equipment should be identified and labeled. Traditionally, this process is performed manually, which is not only time consuming but also causes unavoidable omissions. In this paper, we propose a k-means clustering-based electrical equipment identification method for smart building applications that can automatically identify unknown equipment connected to BIoT systems. First, the load characteristics are analyzed, and electrical features for equipment identification are extracted from the collected data. Second, k-means clustering is applied twice to construct the identification model. The preliminary clustering adopts the traditional k-means algorithm on the total harmonic current distortion data and separates the equipment data into two to three clusters on the basis of their electrical characteristics. The later clustering uses an improved k-means algorithm, which weights the Euclidean distance and uses the elbow method to determine the number of clusters, to analyze the results of the preliminary clustering. The equipment identification model is then constructed by selecting the cluster centroid vector and a distance threshold. Finally, identification results are obtained online from the model outputs using newly collected data. Successful application to a BIoT system verifies the validity of the proposed identification method. Full article
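Since the identification model rests on two rounds of k-means over harmonic-distortion features, here is a compact sketch of plain scalar k-means plus the within-cluster sum of squares that the elbow method compares across candidate k. The data values are invented, and the paper's improved weighted-distance variant is not reproduced:

```python
def kmeans_1d(values, k, iters=20):
    """Plain k-means for scalar features (e.g., total harmonic current distortion).

    Assumes k >= 2; centroids are initialized spread across the sorted data.
    """
    vals = sorted(values)
    centroids = [vals[i * (len(vals) - 1) // (k - 1)] for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each sample joins its nearest centroid.
        labels = [min(range(k), key=lambda j: abs(v - centroids[j])) for v in values]
        # Update step: each centroid moves to the mean of its members.
        for j in range(k):
            members = [v for v, lab in zip(values, labels) if lab == j]
            if members:
                centroids[j] = sum(members) / len(members)
    return centroids, labels

def wcss(values, centroids, labels):
    """Within-cluster sum of squares; the elbow method compares this across k."""
    return sum((v - centroids[lab]) ** 2 for v, lab in zip(values, labels))

# Hypothetical distortion readings from two equipment types:
values = [0.02, 0.03, 0.025, 0.30, 0.35, 0.32]
centroids, labels = kmeans_1d(values, k=2)
print(sorted(round(c, 3) for c in centroids))  # [0.025, 0.323]
```

Running `wcss` for increasing k and picking the point where the decrease levels off is the elbow criterion the later clustering stage relies on.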
(This article belongs to the Section Information Applications)