Table of Contents

Information, Volume 10, Issue 6 (June 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms, with PDF as the official format. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: Online Social Networks (OSNs) have found widespread applications in every area of our life. A large [...]
Open Access Article
Drowsiness Estimation Using Electroencephalogram and Recurrent Support Vector Regression
Information 2019, 10(6), 217; https://doi.org/10.3390/info10060217
Received: 24 May 2019 / Revised: 18 June 2019 / Accepted: 23 June 2019 / Published: 24 June 2019
PDF Full-text (1333 KB) | HTML Full-text | XML Full-text
Abstract
As a cause of accidents, drowsiness can cause economic and physical damage. A range of drowsiness estimation methods have been proposed in previous studies to aid accident prevention and address this problem. However, none of these methods are able to improve their estimation ability as the length of time or number of trials increases. Thus, in this study, we aim to find an effective drowsiness estimation method that is also able to improve its prediction ability as the subject's activity increases. We used electroencephalogram (EEG) data to estimate drowsiness, and the Karolinska sleepiness scale (KSS) for drowsiness evaluation. Five parameters (α, β/α, (θ+α)/β, activity, and mobility) from the O1 electrode site were selected. By combining these parameters and KSS, we demonstrate that a typical support vector regression (SVR) algorithm can estimate drowsiness with a coefficient of determination (R²) of up to 0.64 and a root mean square error (RMSE) as low as 0.56. We propose a "recurrent SVR" (RSVR) method with improved estimation performance, as highlighted by an R² value of up to 0.83 and an RMSE as low as 0.15. These results suggest that in addition to being able to estimate drowsiness based on EEG data, RSVR is able to improve its drowsiness estimation performance. Full article
(This article belongs to the Section Information Processes)
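As a sketch of the baseline step described in the abstract, the snippet below fits a standard SVR to stand-in features and reports the two metrics quoted above. The five feature columns and the KSS-like labels are simulated here (not the study's EEG data), scikit-learn's SVR stands in for the authors' implementation, and the recurrent extension is not shown.

```python
# Minimal SVR baseline on simulated stand-ins for the five O1-channel EEG
# parameters (alpha, beta/alpha, (theta+alpha)/beta, activity, mobility).
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                    # simulated EEG features
y = np.clip(5 + X @ rng.normal(size=5), 1, 9)    # simulated KSS (1-9 scale)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("R2:  ", r2_score(y_te, pred))
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
```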
Open Access Article
Managing Software Security Knowledge in Context: An Ontology Based Approach
Information 2019, 10(6), 216; https://doi.org/10.3390/info10060216
Received: 29 May 2019 / Revised: 16 June 2019 / Accepted: 18 June 2019 / Published: 20 June 2019
Viewed by 176 | PDF Full-text (745 KB)
Abstract
In the setting of software development, knowledge can be both dynamic and situation-specific, and the complexity of knowledge usually exceeds the capacity of individuals to solve problems by themselves. Software developers require knowledge not only about general security concepts but also about the context for which software is being developed. With traditional security knowledge formats, which are usually organized in a security-centric way, it is difficult for knowledge users to retrieve the desired security information to fulfill the requirements of their working context. For security knowledge to operate effectively and become an essential part of practical software development, we argue that it must first incorporate additional features: it must specify which contextual information is to be handled, and it must represent the security knowledge in a format that is understandable and acceptable to the individuals using it. This study introduces a novel ontology approach for modeling security knowledge in a context-sensitive manner, whereby the security knowledge can be retrieved while taking the context of the application at hand into consideration. In this paper, we present our security ontology with the design concepts and the evaluation process. Full article
(This article belongs to the Section Information Systems)
Open Access Article
Comparative Performance Evaluation of an Accuracy-Enhancing Lyapunov Solver
Information 2019, 10(6), 215; https://doi.org/10.3390/info10060215
Received: 13 May 2019 / Revised: 15 June 2019 / Accepted: 16 June 2019 / Published: 19 June 2019
Viewed by 175 | PDF Full-text (916 KB) | HTML Full-text | XML Full-text
Abstract
Lyapunov equations are key mathematical objects in systems theory, analysis and design of control systems, and in many applications, including balanced realization algorithms, procedures for reduced order models, Newton methods for algebraic Riccati equations, and stabilization algorithms. A new iterative accuracy-enhancing solver for both standard and generalized continuous- and discrete-time Lyapunov equations is proposed and investigated in this paper. The underlying algorithm and some technical details are summarized. At each iteration, the computed solution of a reduced Lyapunov equation serves as a correction term to refine the current solution of the initial equation. The best available algorithms for solving Lyapunov equations with dense matrices, employing the real Schur(-triangular) form of the coefficient matrices, are used. The reduction to Schur(-triangular) form has to be done only once, before starting the iterative process. The algorithm converges in very few iterations. The results obtained by solving a series of numerically difficult examples derived from the SLICOT benchmark collections for Lyapunov equations are compared to the solutions returned by the MATLAB and SLICOT solvers. The new solver can be more accurate than these state-of-the-art solvers and requires little additional computational effort. Full article
(This article belongs to the Special Issue ICSTCC 2018: Advances in Control and Computers)
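The refinement loop summarized above, in which the residual of the current solution becomes the right-hand side of a correction equation, can be sketched with SciPy's dense Lyapunov solver. This is an illustration on a randomly generated stable matrix, not the paper's Schur-form/SLICOT implementation.

```python
# Iterative refinement for the continuous-time Lyapunov equation
#   A X + X A^T + Q = 0.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)
n = 50
A = rng.normal(size=(n, n)) - n * np.eye(n)   # shifted to be stable
Q = rng.normal(size=(n, n))
Q = Q @ Q.T                                   # symmetric right-hand side

X = solve_continuous_lyapunov(A, -Q)          # initial solve
for _ in range(3):
    R = A @ X + X @ A.T + Q                   # residual of current solution
    X += solve_continuous_lyapunov(A, -R)     # correction term
    print("residual norm:", np.linalg.norm(R))
```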
Open Access Article
Optimal Control of Virus Spread under Different Conditions of Resources Limitations
Information 2019, 10(6), 214; https://doi.org/10.3390/info10060214
Received: 30 April 2019 / Revised: 13 June 2019 / Accepted: 18 June 2019 / Published: 19 June 2019
Viewed by 175 | PDF Full-text (415 KB) | HTML Full-text | XML Full-text
Abstract
The paper addresses the problem of human virus spread reduction when the resources for the control actions are limited. This kind of problem can be successfully solved in the framework of optimal control theory, where the best solution, which minimizes a cost function while satisfying input constraints, can be provided. The problem is formulated in this context for the case of the HIV/AIDS virus, making use of a model that considers two classes of susceptible subjects, the wise people and the people with incautious behaviours, and three classes of infected, the ones still not aware of their status, the pre-AIDS patients, and the AIDS ones. The control actions are represented by an information campaign, to reduce the category of subjects with unwise behaviour; a test campaign, to reduce the number of subjects not aware of having the virus; and the medication of patients with a positive diagnosis. The cost function considered aims at reducing the number of patients with a positive diagnosis using as few resources as possible. Four different types of resource bounds are considered, divided into two classes: limitations on the instantaneous control and fixed total budgets. The optimal solutions are numerically computed, and the results of the simulations performed are illustrated and compared to highlight the different behaviours of the control actions. Full article
(This article belongs to the Special Issue ICSTCC 2018: Advances in Control and Computers)
Open Access Feature Paper Article
Optimal Resource Allocation to Reduce an Epidemic Spread and Its Complication
Information 2019, 10(6), 213; https://doi.org/10.3390/info10060213
Received: 30 April 2019 / Revised: 7 June 2019 / Accepted: 11 June 2019 / Published: 13 June 2019
Viewed by 280 | PDF Full-text (974 KB) | HTML Full-text | XML Full-text
Abstract
Mathematical modeling represents a useful instrument to describe epidemic spread and to propose useful control actions, such as vaccination scheduling, quarantine, informative campaigns, and therapy, especially under the realistic hypothesis of limited resources. Moreover, the same representation can efficiently describe different epidemic scenarios, involving, for example, computer viruses spreading in a network. In this paper, a new model describing an infectious disease and a possible complication is proposed; after a deep model analysis discussing the role of the reproduction number, an optimal control problem is formulated and solved to reduce the number of dead patients while minimizing the control effort. The results show the reasonableness of the proposed model and the effectiveness of the control action, aiming at an efficient resource allocation; the model also describes the different reactions of a population with respect to an epidemic disease depending on the original economic and social conditions. The optimal control theory applied to the proposed new epidemic model provides a significant reduction in the number of dead patients, also suggesting a suitable scheduling of the vaccination control. Future work will be devoted to the identification of the model parameters for specific epidemic diseases and complications, also taking into account the geographic and social scenario. Full article
(This article belongs to the Special Issue ICSTCC 2018: Advances in Control and Computers)
Open Access Article
Large Scale Linguistic Processing of Tweets to Understand Social Interactions among Speakers of Less Resourced Languages: The Basque Case
Information 2019, 10(6), 212; https://doi.org/10.3390/info10060212
Received: 30 April 2019 / Revised: 4 June 2019 / Accepted: 11 June 2019 / Published: 13 June 2019
Viewed by 287 | PDF Full-text (988 KB) | HTML Full-text | XML Full-text
Abstract
Social networks like Twitter are increasingly important in the creation of new ways of communication. They have also become useful tools for social and linguistic research due to the massive amounts of public textual data available. This is particularly important for less resourced languages, as it allows current natural language processing techniques to be applied to large amounts of unstructured data. In this work, we study the linguistic and social aspects of young and adult people's behaviour based on their tweets' contents and the social relations that arise from them. With this objective in mind, we gathered over 10 million tweets from more than 8000 users. First, we classified each user in terms of their life stage (young/adult) according to the writing style of their tweets. Second, we applied topic modelling techniques to the personal tweets to find the most popular topics according to life stage. Third, we established the relations and communities that emerge based on the retweets. We conclude that using large amounts of unstructured data provided by Twitter facilitates social research using computational techniques such as natural language processing, giving the opportunity both to segment communities based on demographic characteristics and to discover how they interact or relate to each other. Full article
(This article belongs to the Special Issue Natural Language Processing and Text Mining)
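The topic-modelling stage can be sketched with scikit-learn's latent Dirichlet allocation. The toy corpus below is invented for illustration, whereas the study processed over 10 million tweets with its own preprocessing pipeline.

```python
# Toy LDA run standing in for the topic-modelling step.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "match tonight with friends at the stadium",
    "great match and great friends tonight",
    "exam week at the university library",
    "studying for the university exam all week",
    "stadium was full for the match",
    "library open late during exam week",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(tweets)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = topic.argsort()[-5:][::-1]       # most probable words per topic
    print(f"topic {k}:", ", ".join(terms[i] for i in top))
```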
Open Access Article
What Message Characteristics Make Social Engineering Successful on Facebook: The Role of Central Route, Peripheral Route, and Perceived Risk
Information 2019, 10(6), 211; https://doi.org/10.3390/info10060211
Received: 14 February 2019 / Revised: 3 June 2019 / Accepted: 10 June 2019 / Published: 13 June 2019
Viewed by 223 | PDF Full-text (1500 KB) | HTML Full-text | XML Full-text
Abstract
Past research suggests that the human ability to detect social engineering deception is very limited, and it is even more limited in the virtual environment of social networking sites (SNSs) such as Facebook. At the organizational level, research suggests that social engineers can succeed even among those organizations that identify themselves as being aware of social engineering techniques. This may be partly due to the complexity of human behaviours in failing to recognize social engineering tricks in SNSs. Given the vital role that persuasion and perception play in users' decisions to accept or reject social engineering tricks, this paper aims to investigate the impact of message characteristics on users' susceptibility to social engineering victimization on Facebook. In doing so, we investigate the roles of the central route of persuasion, the peripheral route of persuasion, and perceived risk in susceptibility to social engineering on Facebook. In addition, we investigate the mediation effects between the explored factors, and whether there is any relationship between their effectiveness and users' demographics. Full article
(This article belongs to the Special Issue Insider Attacks)
Open Access Feature Paper Article
Electronic Identification for Universities: Building Cross-Border Services Based on the eIDAS Infrastructure
Information 2019, 10(6), 210; https://doi.org/10.3390/info10060210
Received: 8 April 2019 / Revised: 23 May 2019 / Accepted: 30 May 2019 / Published: 12 June 2019
Viewed by 243 | PDF Full-text (1109 KB) | HTML Full-text | XML Full-text
Abstract
The European Union (EU) Regulation 910/2014 on electronic IDentification, Authentication, and trust Services (eIDAS) for electronic transactions in the internal market went into effect on 29 September 2018, meaning that EU Member States are required to recognize the electronic identities issued in the countries that have notified their eID schemes. Technically speaking, a unified interoperability platform—named eIDAS infrastructure—has been set up to connect the EU countries’ national eID schemes to allow a person to authenticate in their home EU country when getting access to services provided by an eIDAS-enabled Service Provider (SP) in another EU country. The eIDAS infrastructure allows the transfer of authentication requests and responses back and forth between its nodes, transporting basic attributes about a person, e.g., name, surname, date of birth, and a so-called eIDAS identifier. However, to build new eIDAS-enabled services in specific domains, additional attributes are needed. We describe our approach to retrieve and transport new attributes through the eIDAS infrastructure, and we detail their exploitation in a selected set of academic services. First, we describe the definition and the support for the additional attributes in the eIDAS nodes. We then present a solution for their retrieval from our university. Finally, we detail the design, implementation, and installation of two eIDAS-enabled academic services at our university: the eRegistration in the Erasmus student exchange program and the Login facility with national eIDs on the university portal. Full article
(This article belongs to the Special Issue ICSTCC 2018: Advances in Control and Computers)
Open Access Article
An Intelligent Spam Detection Model Based on Artificial Immune System
Information 2019, 10(6), 209; https://doi.org/10.3390/info10060209
Received: 31 May 2019 / Revised: 9 June 2019 / Accepted: 9 June 2019 / Published: 12 June 2019
Viewed by 241 | PDF Full-text (2477 KB) | HTML Full-text | XML Full-text
Abstract
Spam emails, also known as non-self, are unsolicited commercial or malicious emails sent to affect either a single individual, a corporation, or a group of people. Besides advertising, these may contain links to phishing or malware-hosting websites set up to steal confidential information. In this paper, a study of the effectiveness of using a Negative Selection Algorithm (NSA) for anomaly detection applied to spam filtering is presented. NSA has a high performance and a low false detection rate. The designed framework intelligently works through three detection phases to finally determine an email's legitimacy based on the knowledge gathered in the training phase. The system operates by elimination through negative selection, similar to the functionality of T-cells in biological systems. It has been observed that with the inclusion of more datasets, the performance continues to improve, resulting in a 6% increase in the True Positive and True Negative detection rates while achieving an actual detection rate of spam and ham of 98.5%. The model has been further compared against similar studies, and the results show that the proposed system improves the correct detection rate of spam and ham by 2 to 15%. Full article
(This article belongs to the Special Issue Machine Learning for Cyber-Security)
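A minimal sketch of real-valued negative selection, the mechanism named in the abstract: random detectors that match any self (ham) sample are discarded during training, and a message matched by any surviving detector is flagged as non-self (spam). All vectors, radii, and set sizes are invented for illustration; the paper's three-phase framework is considerably richer.

```python
import numpy as np

rng = np.random.default_rng(0)
ham = rng.uniform(0.0, 0.5, size=(100, 8))    # self set: ham feature vectors
spam = rng.uniform(0.4, 1.0, size=(40, 8))    # unseen non-self examples
RADIUS = 0.35                                 # matching radius (illustrative)

def train_detectors(self_set, n_detectors=300):
    """Negative selection: keep random detectors matching no self sample."""
    detectors = []
    while len(detectors) < n_detectors:
        d = rng.uniform(0.0, 1.0, size=self_set.shape[1])
        if np.linalg.norm(self_set - d, axis=1).min() > RADIUS:
            detectors.append(d)
    return np.array(detectors)

def is_spam(x, detectors):
    """Flag as non-self (spam) if any detector matches the message."""
    return np.linalg.norm(detectors - x, axis=1).min() <= RADIUS

detectors = train_detectors(ham)
caught = sum(is_spam(x, detectors) for x in spam)
print(f"flagged {caught}/{len(spam)} spam samples")
```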
Open Access Article
Latent Feature Group Learning for High-Dimensional Data Clustering
Information 2019, 10(6), 208; https://doi.org/10.3390/info10060208
Received: 1 April 2019 / Revised: 17 May 2019 / Accepted: 6 June 2019 / Published: 10 June 2019
Viewed by 235 | PDF Full-text (2608 KB)
Abstract
In this paper, we propose a latent feature group learning (LFGL) algorithm to discover the feature grouping structures and subspace clusters for high-dimensional data. The feature grouping structures, which are learned in an analytical way, can enhance the accuracy and efficiency of high-dimensional data clustering. In the LFGL algorithm, a Darwinian evolutionary process is used to explore the optimal feature grouping structures, which are coded as chromosomes in the genetic algorithm. The feature grouping weighting k-means algorithm is used as the fitness function to evaluate the chromosomes, or feature grouping structures, in each generation of the evolution. To better handle the diverse densities of clusters in high-dimensional data, the original feature grouping weighting k-means is revised with the mass-based dissimilarity measure rather than the Euclidean distance measure, and the feature weights are optimized as a nonnegative matrix factorization problem under an orthogonality constraint on the feature weight matrix. The genetic operations of mutation and crossover are used to generate the new chromosomes for the next generation. In comparison with well-known clustering algorithms, the LFGL algorithm produced encouraging experimental results on real-world datasets, demonstrating its better performance when clustering high-dimensional data. Full article
(This article belongs to the Section Artificial Intelligence)
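To make the evolutionary search concrete, here is a heavily simplified toy version: a chromosome assigns each feature to a group, each group is represented by the mean of its features, and a clustering quality score serves as the fitness (standing in for the feature grouping weighting k-means of the paper). Only mutation is shown; crossover and the mass-based dissimilarity are omitted.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = load_iris().data
N_GROUPS, POP, GENS = 2, 8, 10

def fitness(chrom):
    """Cluster on group-mean features; silhouette stands in for the
    feature-grouping weighting k-means objective."""
    cols = [X[:, chrom == g].mean(axis=1)
            for g in range(N_GROUPS) if (chrom == g).any()]
    grouped = np.column_stack(cols)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(grouped)
    return silhouette_score(grouped, labels)

pop = [rng.integers(N_GROUPS, size=X.shape[1]) for _ in range(POP)]
for _ in range(GENS):
    survivors = sorted(pop, key=fitness, reverse=True)[: POP // 2]
    children = []
    for p in survivors:                     # mutation: reassign one feature
        c = p.copy()
        c[rng.integers(len(c))] = rng.integers(N_GROUPS)
        children.append(c)
    pop = survivors + children

best = max(pop, key=fitness)
print("best grouping:", best, "fitness:", round(fitness(best), 3))
```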
Open Access Article
Privacy-Aware MapReduce Based Multi-Party Secure Skyline Computation
Information 2019, 10(6), 207; https://doi.org/10.3390/info10060207
Received: 22 April 2019 / Revised: 4 June 2019 / Accepted: 5 June 2019 / Published: 8 June 2019
Viewed by 360 | PDF Full-text (550 KB) | HTML Full-text | XML Full-text
Abstract
Selecting representative objects from a large-scale dataset is an important task for understanding the dataset. Skyline is a popular technique for selecting representative objects from a large dataset. It is obvious that a skyline computed over the collective databases of multiple organizations is more effective than a skyline computed from the database of a single organization. However, being privacy-aware, every organization is also concerned about the security and privacy of their data. In this regard, we propose an efficient multi-party secure skyline computation method that computes the skyline on encrypted data and preserves the confidentiality of each party's database objects. Although several distributed skyline computing methods have been proposed, very few of them consider data privacy and security issues, and existing privacy-preserving multi-party skyline computing techniques are not efficient enough. Our proposed method presents a secure computation model that is more efficient than existing privacy-preserving multi-party skyline computation models in terms of computation and communication complexity. In our computation model, we also introduce MapReduce as a distributed, scalable, open-source, cost-effective, and reliable framework to handle multi-party data efficiently. Full article
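For readers unfamiliar with the operator itself, a plaintext skyline keeps exactly the non-dominated objects; the paper's contribution is computing this result securely over encrypted, multi-party data. A naive sketch (smaller is better in both dimensions):

```python
def dominates(p, q):
    """p dominates q if p is no worse in every dimension and strictly
    better in at least one (smaller values are better here)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# (price, distance) pairs: dominated options drop out of the skyline.
hotels = [(50, 8), (45, 9), (60, 2), (65, 9), (60, 12)]
print(skyline(hotels))   # -> [(50, 8), (45, 9), (60, 2)]
```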
Open Access Article
Generalized Hamacher Aggregation Operators for Intuitionistic Uncertain Linguistic Sets: Multiple Attribute Group Decision Making Methods
Information 2019, 10(6), 206; https://doi.org/10.3390/info10060206
Received: 9 May 2019 / Revised: 20 May 2019 / Accepted: 4 June 2019 / Published: 8 June 2019
Viewed by 310 | PDF Full-text (861 KB)
Abstract
In this paper, we consider multiple attribute group decision making (MAGDM) problems in which the attribute values take the form of intuitionistic uncertain linguistic variables. Based on Hamacher operations, we developed several Hamacher aggregation operators, which generalize the arithmetic aggregation operators and geometric aggregation operators, and extend the algebraic aggregation operators and Einstein aggregation operators. A number of special cases for the two operators with respect to the parameters are discussed in detail. Also, we developed an intuitionistic uncertain linguistic generalized Hamacher hybrid weighted average operator to reflect the importance degrees of both the given intuitionistic uncertain linguistic variables and their ordered positions. Based on the generalized Hamacher aggregation operator, we propose a method for MAGDM for intuitionistic uncertain linguistic sets. Finally, a numerical example and comparative analysis with related decision making methods are provided to illustrate the practicality and feasibility of the proposed method. Full article
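For reference, the Hamacher t-norm and t-conorm family on which these operators are built can be written as below; γ = 1 recovers the algebraic product and sum, and γ = 2 the Einstein operations, matching the generalizations noted in the abstract.

```latex
T_{\gamma}(a,b) = \frac{ab}{\gamma + (1-\gamma)(a+b-ab)}, \qquad
S_{\gamma}(a,b) = \frac{a+b-ab-(1-\gamma)ab}{1-(1-\gamma)ab}, \qquad \gamma > 0.
```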
Open Access Feature Paper Article
Event Extraction and Representation: A Case Study for the Portuguese Language
Information 2019, 10(6), 205; https://doi.org/10.3390/info10060205
Received: 11 May 2019 / Revised: 4 June 2019 / Accepted: 5 June 2019 / Published: 8 June 2019
Viewed by 410 | PDF Full-text (422 KB) | HTML Full-text | XML Full-text
Abstract
Text information extraction is an important natural language processing (NLP) task, which aims to automatically identify, extract, and represent information from text. In this context, event extraction plays a relevant role, allowing actions, agents, objects, places, and time periods to be identified and represented. The extracted information can be represented by specialized ontologies, supporting knowledge-based reasoning and inference processes. In this work, we describe, in detail, our proposal for event extraction from Portuguese documents. The proposed approach is based on a pipeline of specialized natural language processing tools; namely, a part-of-speech tagger, a named-entity recognizer, a dependency parser, a semantic role labeler, and a knowledge extraction module. The architecture is language-independent, but its modules are language-dependent and can be built using adequate AI (i.e., rule-based or machine learning) methodologies. The developed system was evaluated on a corpus of Portuguese texts, and the obtained results are presented and analysed. The current limitations and future work are discussed in detail. Full article
(This article belongs to the Special Issue Natural Language Processing and Text Mining)
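The first three pipeline stages (part-of-speech tagging, named-entity recognition, dependency parsing) can be reproduced with spaCy's public Portuguese model as a stand-in for the authors' toolchain; their semantic role labeling and knowledge extraction modules are not covered here.

```python
# Requires: pip install spacy && python -m spacy download pt_core_news_sm
import spacy

nlp = spacy.load("pt_core_news_sm")
doc = nlp("O presidente visitou Lisboa na segunda-feira para assinar o acordo.")

for tok in doc:                      # POS tags and dependency relations
    print(tok.text, tok.pos_, tok.dep_, "<-", tok.head.text)

for ent in doc.ents:                 # named entities: agents, places, times
    print(ent.text, ent.label_)
```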
Open Access Article
Machine Vibration Monitoring for Diagnostics through Hypothesis Testing
Information 2019, 10(6), 204; https://doi.org/10.3390/info10060204
Received: 17 April 2019 / Revised: 25 May 2019 / Accepted: 26 May 2019 / Published: 7 June 2019
Viewed by 391 | PDF Full-text (7051 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Nowadays, the subject of machine diagnostics is gathering growing interest in the research field, as switching from a programmed to a preventive maintenance regime based on the real health conditions (i.e., condition-based maintenance) can lead to great advantages both in terms of safety and costs. Nondestructive tests monitoring the state of health are fundamental for this purpose. An effective form of condition monitoring is that based on vibration (vibration monitoring), which exploits inexpensive accelerometers to perform machine diagnostics. In this work, statistics and hypothesis testing are used to build a solid foundation for damage detection by recognition of patterns in a multivariate dataset which collects simple time features extracted from accelerometric measurements. In this regard, data from high-speed aeronautical bearings were analyzed. These were acquired on a test rig built by the Dynamic and Identification Research Group (DIRG) of the Department of Mechanical and Aerospace Engineering at Politecnico di Torino. The proposed strategy was to reduce the multivariate dataset to a single index from which the health conditions can be determined. This dimensionality reduction was initially performed using Principal Component Analysis, which proved to be a lossy compression. Improvement was obtained via Fisher's Linear Discriminant Analysis, which finds the direction with maximum distance between the damaged and healthy indices, but this method is still ineffective in highlighting phenomena that develop in directions orthogonal to the discriminant. Finally, a lossless compression was achieved using Mahalanobis distance-based Novelty Indices, which were also able to compensate for possible latent confounding factors. Furthermore, considerations about confidence, sensitivity, the curse of dimensionality, and the minimum number of samples were also addressed to ensure statistical significance. The results obtained here were very good not only in terms of the low numbers of missed and false alarms, but also considering the speed of the algorithms, their simplicity, and their full independence from human interaction, which make them suitable for real-time implementation and integration in condition-based maintenance (CBM) regimes. Full article
(This article belongs to the Special Issue Fault Diagnosis, Maintenance and Reliability)
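A minimal sketch of a Mahalanobis distance-based novelty index as described above: the healthy-condition mean and covariance define the metric, and a χ² quantile provides a confidence threshold. The feature vectors are simulated, not the DIRG bearing data.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
d = 6                                      # number of time features
healthy = rng.normal(size=(500, d))        # healthy-condition training set

mu = healthy.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(healthy, rowvar=False))

def novelty_index(x):
    """Squared Mahalanobis distance from the healthy distribution."""
    delta = x - mu
    return float(delta @ cov_inv @ delta)

threshold = chi2.ppf(0.999, df=d)          # ~chi-squared under normality
sample = rng.normal(loc=2.0, size=d)       # a shifted, possibly damaged case
print(novelty_index(sample) > threshold)   # True -> raise an alarm
```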
Open Access Article
Asymmetric Residual Neural Network for Accurate Human Activity Recognition
Information 2019, 10(6), 203; https://doi.org/10.3390/info10060203
Received: 21 May 2019 / Revised: 31 May 2019 / Accepted: 5 June 2019 / Published: 6 June 2019
Viewed by 337 | PDF Full-text (977 KB) | HTML Full-text | XML Full-text
Abstract
Human activity recognition (HAR) using deep neural networks has become a hot topic in human–computer interaction. Machines can effectively identify naturalistic human activities by learning from a large collection of sensor data. Activity recognition is not only an interesting research problem but also has many real-world practical applications. Building on the success of residual networks in automatically learning high-level representations, we propose a novel asymmetric residual network, named ARN. ARN is implemented using two identical path frameworks consisting of (1) a short time window, which is used to capture spatial features, and (2) a long time window, which is used to capture fine temporal features. The long-time-window path can be made very lightweight by reducing its channel capacity, while still being able to learn useful temporal representations for activity recognition. In this paper, we mainly focus on proposing a new model to improve the accuracy of HAR. In order to demonstrate the effectiveness of the ARN model, we carried out extensive experiments on benchmark datasets (i.e., OPPORTUNITY, UniMiB-SHAR) and compared the results with some conventional and state-of-the-art learning-based methods. We discuss the influence of network parameters on performance to provide insights about its optimization. Results from our experiments show that ARN is effective in recognizing human activities from wearable sensor datasets. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Sports)
Open Access Article
Spelling Correction of Non-Word Errors in Uyghur–Chinese Machine Translation
Information 2019, 10(6), 202; https://doi.org/10.3390/info10060202
Received: 22 April 2019 / Revised: 28 May 2019 / Accepted: 4 June 2019 / Published: 6 June 2019
Viewed by 305 | PDF Full-text (738 KB) | HTML Full-text | XML Full-text
Abstract
This research was conducted to solve the out-of-vocabulary problem caused by Uyghur spelling errors in Uyghur–Chinese machine translation, so as to improve the quality of Uyghur–Chinese machine translation. This paper assesses three spelling correction methods based on machine translation: 1. Using a Bilingual Evaluation Understudy (BLEU) score; 2. Using a Chinese language model; 3. Using a bilingual language model. The best results were achieved in both the spelling correction task and the machine translation task by using the BLEU score for spelling correction. A maximum F1 score of 0.72 was reached for spelling correction, and the translation result increased the BLEU score by 1.97 points, relative to the baseline system. However, the method of using a BLEU score for spelling correction requires the support of a bilingual parallel corpus, which is a supervised method that can be used in corpus pre-processing. Unsupervised spelling correction can be performed by using either a Chinese language model or a bilingual language model. These two methods can be easily extended to other languages, such as Arabic. Full article
(This article belongs to the Special Issue Natural Language Processing and Text Mining)
Open Access Article
Project Procurement Method Selection Using a Multi-Criteria Decision-Making Method with Interval Neutrosophic Sets
Information 2019, 10(6), 201; https://doi.org/10.3390/info10060201
Received: 12 April 2019 / Revised: 8 May 2019 / Accepted: 28 May 2019 / Published: 5 June 2019
Viewed by 339 | PDF Full-text (1161 KB) | HTML Full-text | XML Full-text
Abstract
Project procurement method (PPM) selection influences the efficiency of project implementation. Owners are presented with different options for project delivery. However, selecting the appropriate PPM poses great challenges to owners, given the existence of ambiguous information. The interval neutrosophic set (INS) is well suited to handling imprecise and ambiguous information. This paper aims to develop a PPM selection model under an interval neutrosophic environment for owners. The main contributions of this paper are as follows: (1) The similarity measure is innovatively introduced with interval neutrosophic information to handle the PPM selection problem. (2) A similarity measure based on minimum and maximum operators is applied to construct a decision-making model for PPM selection, considering the truth, falsity, and indeterminacy memberships simultaneously. (3) This study establishes a PPM selection method with INSs by applying similarity measures that take into account the determinacy, indeterminacy, and hesitation of the decision experts when giving an evaluation value. A case study on PPM selection is presented to show the applicability of the proposed approach. Finally, the results of the proposed method are compared with those of existing methods, demonstrating the superiority of the proposed PPM selection method. Full article
Open Access Article
Optimization and Security in Information Retrieval, Extraction, Processing, and Presentation on a Cloud Platform
Information 2019, 10(6), 200; https://doi.org/10.3390/info10060200
Received: 16 April 2019 / Revised: 24 May 2019 / Accepted: 4 June 2019 / Published: 5 June 2019
Viewed by 348 | PDF Full-text (1094 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents the processing steps needed in order to have a fully functional vertical search engine. Four actions are identified (i.e., retrieval, extraction, presentation, and delivery) and are required to crawl websites, get the product information from the retrieved webpages, process that data, and offer the end-user the possibility of looking for various products. The whole application flow is focused on low resource usage, and especially on the delivery action, which consists of a web application that uses cloud resources and is optimized for cost efficiency. Novel methods for representing the crawl and extraction template, for product index optimizations, and for deploying and storing data in the cloud database are identified and explained. In addition, key aspects are discussed regarding ethics and security in the proposed solution. A practical use-case scenario is also presented, where products are extracted from seven online board and card game retailers. Finally, the potential of the proposed solution is discussed in terms of researching new methods for improving various aspects of the proposed solution in order to increase cost efficiency and scalability. Full article
(This article belongs to the Special Issue ICSTCC 2018: Advances in Control and Computers)
Open Access Article
A Robust Automatic Ultrasound Spectral Envelope Estimation
Information 2019, 10(6), 199; https://doi.org/10.3390/info10060199
Received: 22 April 2019 / Revised: 9 May 2019 / Accepted: 16 May 2019 / Published: 5 June 2019
Viewed by 286 | PDF Full-text (1442 KB) | HTML Full-text | XML Full-text
Abstract
Accurate estimation of the ultrasound Doppler spectrogram envelope is essential for the clinical pathological diagnosis of various cardiovascular diseases. However, due to intrinsic spectral broadening in the power spectrum and the speckle noise existing in ultrasound images, it is difficult to obtain an accurate maximum velocity. Each of the standard existing methods has its own limitations and does not work well in complicated recordings. This paper proposes a robust automatic spectral envelope estimation method that is more accurate in phantom recordings and various in-vivo recordings than the currently used methods. Comparisons were performed on phantom recordings of the carotid artery with varying noise and on additional in-vivo recordings. The accuracy of the proposed method was on average 8% greater than that of the existing methods. The experimental results demonstrate the wide applicability of the proposed algorithm under different blood conditions and its robustness. Full article
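To show the quantity being estimated, the sketch below extracts a crude envelope from a synthetic Doppler-like chirp with a simple per-frame power threshold; the paper proposes a robust replacement for exactly this kind of naive rule.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
# Synthetic Doppler-like signal plus noise (a stand-in for real recordings).
sig = np.sin(2 * np.pi * (500 + 300 * np.sin(2 * np.pi * 1.5 * t)) * t)
sig += 0.3 * np.random.default_rng(0).normal(size=t.size)

f, frames, Sxx = spectrogram(sig, fs=fs, nperseg=256, noverlap=192)

# Naive rule: the envelope is the highest frequency whose power still
# exceeds a fixed fraction of the frame's total power.
envelope = np.empty(frames.size)
for j in range(frames.size):
    above = np.nonzero(Sxx[:, j] >= 0.01 * Sxx[:, j].sum())[0]
    envelope[j] = f[above[-1]] if above.size else 0.0
print(envelope[:10])
```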
Open Access Article
Investigating Users’ Continued Usage Intentions of Online Learning Applications
Information 2019, 10(6), 198; https://doi.org/10.3390/info10060198
Received: 4 May 2019 / Revised: 31 May 2019 / Accepted: 31 May 2019 / Published: 4 June 2019
Viewed by 371 | PDF Full-text (567 KB)
Abstract
Understanding users' continued usage intentions for online learning applications is significant for online education. In this paper, we develop a scale to measure users' usage intentions of online learning applications and empirically investigate the factors that influence users' continued usage intentions based on data from 275 participants. Using the extended Technology Acceptance Model (TAM) and Structural Equation Modelling (SEM), the results show that males or users off campus are more likely to use online learning applications; that system characteristics (SC), social influence (SI), and perceived ease of use (PEOU) positively affect the perceived usefulness (PU), with coefficients of 0.74, 0.23, and 0.04, which imply that SC is the most significant to the PU of online learning applications; that facilitating conditions (FC) and individual differences (ID) positively affect the PEOU, with coefficients of 0.72 and 0.37, which suggest that FC is more important to the PEOU of online learning applications; and that both PEOU and PU positively affect the behavioral intention (BI), with coefficients of 0.83 and 0.51, which indicate that PEOU is more influential than PU on users' continued usage intentions. In particular, output quality, perceived enjoyment, and objective usability are critical to users' continued usage intentions of online learning applications. This study contributes to the technology acceptance research field in the fast-growing market of online learning applications. Our methods and results can benefit both academics and managers with useful suggestions for research directions and user-centered strategies for the design of online learning applications. Full article
(This article belongs to the Section Information Applications)
Open Access Article
Multi-Sensor Activity Monitoring: Combination of Models with Class-Specific Voting
Information 2019, 10(6), 197; https://doi.org/10.3390/info10060197
Received: 20 March 2019 / Revised: 22 April 2019 / Accepted: 8 May 2019 / Published: 4 June 2019
Viewed by 322 | PDF Full-text (3020 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a multi-sensor model combination system with class-specific voting for physical activity (PA) monitoring, which combines multiple classifiers obtained by splicing sensor data from different nodes into new data frames to improve the diversity of model inputs. Data obtained from a wearable multi-sensor wireless integrated measurement system (WIMS), consisting of two accelerometers and one ventilation sensor, have been analysed to identify 10 different activity types of varying intensities performed by 110 voluntary participants. It is noted that each classifier shows better performance on some specific activity classes. Through class-specific weighted majority voting, the recognition accuracy for the 10 PA types has been improved from 86% to 92% compared with the non-combination approach. Furthermore, the combination method has been shown to be effective in reducing the subject-to-subject variability (standard deviation of recognition accuracies across subjects) in activity recognition and performs better in monitoring physical activities of varying intensities than traditional homogeneous classifiers. Full article
(This article belongs to the Special Issue Activity Monitoring by Multiple Distributed Sensing)
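Class-specific weighted majority voting differs from plain voting in that each model's vote counts in proportion to its reliability for the class it votes for (estimated, for example, on validation data). A small sketch with invented votes and weights:

```python
import numpy as np

def class_specific_vote(votes, weights, n_classes):
    """votes[m, i]: class predicted by model m for sample i.
    weights[m, c]: weight of model m's vote when it predicts class c."""
    n_models, n_samples = votes.shape
    scores = np.zeros((n_samples, n_classes))
    for m in range(n_models):
        for i in range(n_samples):
            c = votes[m, i]
            scores[i, c] += weights[m, c]   # weight depends on the voted class
    return scores.argmax(axis=1)

votes = np.array([[0, 1, 2], [0, 2, 2], [1, 1, 2]])
weights = np.array([[0.9, 0.4, 0.5],
                    [0.6, 0.5, 0.9],
                    [0.3, 0.8, 0.7]])
print(class_specific_vote(votes, weights, 3))   # -> [0 1 2]
```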
Open Access Article
A Hierarchical Resource Allocation Scheme Based on Nash Bargaining Game in VANET
Information 2019, 10(6), 196; https://doi.org/10.3390/info10060196
Received: 21 March 2019 / Revised: 10 May 2019 / Accepted: 28 May 2019 / Published: 4 June 2019
Viewed by 384 | PDF Full-text (10408 KB) | HTML Full-text | XML Full-text
Abstract
Due to the selfishness of vehicles and the scarcity of spectrum resources, how to realize fair and effective spectrum resource allocation has become one of the primary tasks in vehicular ad hoc networks (VANETs). In this paper, we propose a hierarchical resource allocation scheme based on the Nash bargaining game. Firstly, we analyze the spectrum resource allocation problem between different Road Side Units (RSUs), which obtain resources from the central cloud. Thereafter, considering the differences between vehicular users (VUEs), we construct a matching degree index between VUEs and RSUs. Then, we deal with the spectrum resource allocation problem between VUEs and RSUs. To reduce the computational overhead, we transform the original problem into two sub-problems, power allocation and slot allocation, according to the time division multiplexing mechanism. The simulation results show that the proposed scheme can fairly and effectively allocate resources in a VANET according to the VUEs' demand. Full article
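As a toy illustration of the bargaining machinery (not the paper's hierarchical RSU/VUE scheme), the Nash bargaining solution for two players sharing a fixed bandwidth budget maximizes the product of utility gains over the disagreement point; the utilities and the budget below are invented:

```python
import numpy as np
from scipy.optimize import minimize

B = 10.0                         # shared bandwidth budget (illustrative)
d = np.array([0.1, 0.1])         # disagreement utilities

def utilities(x):
    """Concave, player-specific utilities of the allocated bandwidth."""
    return np.log1p(np.array([2.0 * x[0], 1.0 * x[1]]))

def neg_log_nash_product(x):
    gains = utilities(x) - d
    if np.any(gains <= 0):       # outside the bargaining set
        return 1e9
    return -np.sum(np.log(gains))

res = minimize(neg_log_nash_product, x0=[B / 2, B / 2],
               bounds=[(1e-6, B), (1e-6, B)],
               constraints=[{"type": "eq", "fun": lambda x: x.sum() - B}])
print("Nash bargaining allocation:", res.x)
```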
Open Access Article
Multi-Regional Online Car-Hailing Order Quantity Forecasting Based on the Convolutional Neural Network
Information 2019, 10(6), 193; https://doi.org/10.3390/info10060193
Received: 23 April 2019 / Revised: 19 May 2019 / Accepted: 28 May 2019 / Published: 4 June 2019
Viewed by 443 | PDF Full-text (4315 KB) | HTML Full-text | XML Full-text
Abstract
With the development of online car-hailing, the demand for travel prediction is increasing, in order to reduce the information asymmetry between passengers and drivers. This paper proposes a travel demand forecasting model named OC-CNN, based on the convolutional neural network, to forecast travel demand. In order to make full use of the spatial characteristics of the travel demand distribution, this paper meshes the prediction area and creates a travel demand dataset with a graphical structure to preserve its spatial properties. Taking advantage of the convolutional neural network's strengths in image feature extraction, the historical demand data of the preceding twenty-five minutes for the entire region are used as the model input to predict the travel demand for the next five minutes. In order to verify the performance of the proposed method, one month of online car-hailing data for the Chengdu Fourth Ring Road area is used. The results show that the model successfully extracts the spatiotemporal features of the data, and the prediction accuracies of the proposed method are superior to those of representative methods, including the Bayesian Ridge Model, Linear Regression, Support Vector Regression, and Long Short-Term Memory networks. Full article
(This article belongs to the Special Issue Machine Learning on Scientific Data and Information)
Open Access Article
Coupled Least Squares Support Vector Ensemble Machines
Information 2019, 10(6), 195; https://doi.org/10.3390/info10060195
Received: 3 April 2019 / Revised: 30 April 2019 / Accepted: 23 May 2019 / Published: 3 June 2019
Viewed by 331 | PDF Full-text (1123 KB) | HTML Full-text | XML Full-text
Abstract
The least squares support vector method is a popular data-driven modeling method which shows good performance and has been successfully applied in a wide range of applications. In this paper, we propose a novel coupled least squares support vector ensemble machine (C-LSSVEM). The proposed coupling ensemble helps improve robustness and produces better classification performance than the single-model approach. The proposed C-LSSVEM can choose appropriate kernel types and kernel parameters in a good coupling strategy, with a set of classifiers being trained simultaneously. The proposed method can further minimize the total loss of the ensemble in kernel space. Thus, we form an ensemble regressor by co-optimizing and weighting the base regressors. Experiments conducted on several datasets, including artificial datasets, UCI classification datasets, UCI regression datasets, handwritten digit datasets, and the NWPU-RESISC45 dataset, indicate that C-LSSVEM performs better in achieving the minimal regression loss and the best classification accuracy relative to selected state-of-the-art regression and classification techniques. Full article
Open Access Article
A Novel Improved Bat Algorithm Based on Hybrid Parallel and Compact for Balancing an Energy Consumption Problem
Information 2019, 10(6), 194; https://doi.org/10.3390/info10060194
Received: 12 April 2019 / Revised: 11 May 2019 / Accepted: 27 May 2019 / Published: 3 June 2019
Viewed by 374 | PDF Full-text (9203 KB) | HTML Full-text | XML Full-text
Abstract
This paper proposes an improved Bat algorithm, based on hybridizing a parallel method with a compact method (namely, pcBA), for a class of optimization problems in which stored variables must be economized. The parallel component enhances solution diversity for exploring the search space and shares the computation load, while the compact component reduces the number of variables stored during optimization. In the experimental section, selected benchmark functions and the energy balance problem in wireless sensor networks (WSNs) are used to evaluate the performance of the proposed method. Comparisons with other methods in the literature demonstrate that the proposed algorithm offers a practical way of reducing both the number of stored memory variables and the running time. Full article
Open Access Article
Call Details Record Analysis: A Spatiotemporal Exploration toward Mobile Traffic Classification and Optimization
Information 2019, 10(6), 192; https://doi.org/10.3390/info10060192
Received: 29 March 2019 / Revised: 27 May 2019 / Accepted: 29 May 2019 / Published: 3 June 2019
Viewed by 369 | PDF Full-text (2374 KB) | HTML Full-text | XML Full-text
Abstract
The information contained within Call Detail Records (CDRs) of mobile networks can be used to study the operational efficacy of cellular networks and the behavioural patterns of mobile subscribers. In this study, we extract actionable insights from CDR data and show that there exists a strong spatiotemporal predictability in real network traffic patterns. This knowledge can be leveraged by mobile operators for effective network planning, such as resource management and optimization. Motivated by this, we perform a spatiotemporal analysis of the CDR data publicly available from Telecom Italia. On the basis of the spatiotemporal insights, we propose a framework for mobile traffic classification. Experimental results show that the proposed model, based on machine learning techniques, is able to accurately model and classify the network traffic patterns. Furthermore, we demonstrate the application of such insights for resource optimisation. Full article
Open Access Article
Computation Offloading Strategy in Mobile Edge Computing
Information 2019, 10(6), 191; https://doi.org/10.3390/info10060191
Received: 25 April 2019 / Revised: 13 May 2019 / Accepted: 29 May 2019 / Published: 2 June 2019
Viewed by 430 | PDF Full-text (1575 KB) | HTML Full-text | XML Full-text
Abstract
Mobile phone applications have been growing rapidly, alongside emerging Internet of Things (IoT) applications in augmented reality, virtual reality, and ultra-high-definition video, due to the development of mobile Internet services over the last three decades. These applications demand intensive computing to support data analysis, real-time video processing, and decision-making for optimizing the user experience. Mobile smart devices play a significant role in our daily life, and this upward trend is continuing. Nevertheless, these devices suffer from limited resources such as CPU, memory, and energy. Computation offloading is a promising technique that can extend the lifetime and improve the performance of smart devices by offloading local computation tasks to edge servers. In light of this situation, we propose a computation offloading strategy for a scenario with multiple users and multiple mobile edge servers that considers the performance of intelligent devices and server resources. The strategy contains three main stages. In the offloading decision-making stage, the basis for offloading decisions is put forward by considering the computing task size, the computing requirement, the computing capacity of the server, and the network bandwidth. In the server selection stage, the candidate servers are evaluated comprehensively by multi-objective decision-making, and appropriate servers are selected for the computation offloading. In the task scheduling stage, a task scheduling model based on an improved auction algorithm is proposed, considering the time requirements of the computing tasks and the computing performance of the mobile edge computing server. Extensive simulations have demonstrated that the proposed computation offloading strategy can effectively reduce the service delay and the energy consumption of intelligent devices, and improve the user experience. Full article
Open Access Article
Multi-PQTable for Approximate Nearest-Neighbor Search
Information 2019, 10(6), 190; https://doi.org/10.3390/info10060190
Received: 26 April 2019 / Revised: 22 May 2019 / Accepted: 28 May 2019 / Published: 1 June 2019
Viewed by 367 | PDF Full-text (943 KB)
Abstract
Image retrieval, or content-based image retrieval (CBIR), can be transformed into the calculation of the distance between image feature vectors: the closer the vectors are, the higher the image similarity will be. In an image retrieval system for a large-scale dataset, approximate nearest-neighbor (ANN) search can quickly obtain the top k images closest to the query image, which is the Top-k problem in the field of information retrieval. With traditional ANN algorithms, such as KD-Tree, R-Tree, and M-Tree, when the dimension of the image feature vector increases, the computing time increases exponentially due to the curse of dimensionality. In order to reduce the calculation time and improve the efficiency of image retrieval, we propose an ANN search algorithm based on the Product Quantization Table (PQTable). After quantizing and compressing the image feature vectors with the product quantization algorithm, we construct the PQTable image index structure, which speeds up image retrieval. We also propose a multi-PQTable query strategy for ANN search. Besides, we generate several nearest-neighbor vectors for each sub-compressed vector of the query vector to reduce the failure rate and improve the recall in image retrieval. Through theoretical analysis and experimental verification, the multi-PQTable query strategy and the generation of several nearest-neighbor vectors are shown to be both correct and efficient. Full article
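The encoding that a PQTable indexes can be sketched directly: train one codebook per sub-space, store the database as short codes, and answer queries through asymmetric distance lookup tables. The hash-table index that gives the PQTable its speed, and the multi-table and neighbor-generation strategies of the paper, are omitted; all sizes are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 32)).astype(np.float32)   # database vectors
M, K = 4, 64                                         # sub-spaces, centroids
subspaces = np.split(np.arange(32), M)

codebooks, code_cols = [], []
for idx in subspaces:                        # one codebook per sub-space
    km = KMeans(n_clusters=K, n_init=4, random_state=0).fit(X[:, idx])
    codebooks.append(km.cluster_centers_)
    code_cols.append(km.labels_)
codes = np.stack(code_cols, axis=1)          # (n, M) compact codes

def adc_search(q, topk=5):
    """Asymmetric distance computation via per-sub-space lookup tables."""
    tables = [((cb - q[idx]) ** 2).sum(axis=1)        # (K,) per sub-space
              for cb, idx in zip(codebooks, subspaces)]
    dist = sum(t[codes[:, m]] for m, t in enumerate(tables))
    return np.argsort(dist)[:topk]

print(adc_search(rng.normal(size=32).astype(np.float32)))
```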
Open Access Article
Evaluation of Sequence-Learning Models for Large-Commercial-Building Load Forecasting
Information 2019, 10(6), 189; https://doi.org/10.3390/info10060189
Received: 15 April 2019 / Revised: 16 May 2019 / Accepted: 30 May 2019 / Published: 1 June 2019
Viewed by 404 | PDF Full-text (1027 KB)
Abstract
Buildings play a critical role in the stability and resilience of modern smart grids, leading to a refocusing of large-scale energy-management strategies from the supply side to the consumer side. When buildings integrate local renewable-energy generation in the form of renewable-energy resources, they become prosumers, and this adds more complexity to the operation of interconnected complex energy systems. A class of methods for modelling the energy-consumption patterns of buildings has recently emerged: black-box input–output approaches with the ability to capture underlying consumption trends. These methods require large quantities of quality data produced by the nondeterministic processes underlying energy consumption. We present an application of a class of neural networks, namely, deep-learning techniques for time-series sequence modelling, with the goal of accurate and reliable building energy-load forecasting. The Recurrent Neural Network implementation uses Long Short-Term Memory layers with increasing node density to quantify prediction accuracy. The case study is illustrated on four university buildings in temperate climates over one year of operation, using a reference benchmarking dataset that allows replicable results. The obtained results are discussed in terms of accuracy metrics and computational and network-architecture aspects, and are considered suitable for further use in future in situ energy management at the building and neighborhood levels. Full article
(This article belongs to the Special Issue ICSTCC 2018: Advances in Control and Computers)
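A minimal Keras sketch of the sequence-learning setup: sliding windows of a synthetic load series feed a single LSTM layer that predicts the next reading. The benchmark buildings, layer densities, and accuracy metrics of the study are not reproduced here.

```python
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
# Synthetic daily-periodic load at a 15-minute resolution (96 steps/day).
load = np.sin(np.arange(3000) * 2 * np.pi / 96) + 0.1 * rng.normal(size=3000)

LOOKBACK = 96
X = np.stack([load[i:i + LOOKBACK] for i in range(len(load) - LOOKBACK)])
y = load[LOOKBACK:]
X = X[..., None]                        # (samples, timesteps, features)

model = keras.Sequential([
    keras.layers.Input(shape=(LOOKBACK, 1)),
    keras.layers.LSTM(64),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:-500], y[:-500], epochs=2, batch_size=64, verbose=0)
print("held-out MSE:", model.evaluate(X[-500:], y[-500:], verbose=0))
```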
Open Access Article
Performance Comparing and Analysis for Slot Allocation Model
Information 2019, 10(6), 188; https://doi.org/10.3390/info10060188
Received: 2 April 2019 / Revised: 5 May 2019 / Accepted: 23 May 2019 / Published: 31 May 2019
Viewed by 399 | PDF Full-text (4230 KB) | HTML Full-text | XML Full-text
Abstract
The purpose of this study is to ascertain whether implementation difficulty can be used in a slot allocation model as a new mechanism for slightly weakening grandfather rights. To this end, a linear integer programming model is designed to compare and analyze displacement, implementation difficulty, and priority under different weights. Test results show that, through appropriate weight settings, implementation difficulty can be significantly reduced without causing excessive displacement or disruption of existing priorities, while the declared capacity is respected. In addition, whether or not the movements are listed in order of descending priority has a great impact on displacement and implementation difficulty within the slot allocation model. Capacity is also a key factor affecting displacement and implementation difficulty. This study contributes a new mechanism for slightly weakening grandfather rights, which can help decision makers to upgrade slot allocation policies. Full article
(This article belongs to the Special Issue Big Data Research, Development, and Applications––Big Data 2018)
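A toy linear integer program in the spirit of the model described above, with invented flights, capacities, and priority weights: the objective minimizes priority-weighted displacement subject to the declared capacity (implementation difficulty is left out). It uses the PuLP modelling library, which is an assumption, not the authors' solver.

```python
# pip install pulp
import pulp

slots = [0, 1, 2, 3]                              # candidate slot times
capacity = {t: 2 for t in slots}                  # declared capacity per slot
requested = {"f1": 0, "f2": 0, "f3": 0, "f4": 1}  # historically requested slots
priority = {"f1": 3, "f2": 2, "f3": 1, "f4": 1}   # grandfather-right weights

prob = pulp.LpProblem("slot_allocation", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (requested, slots), cat="Binary")

# Minimize priority-weighted displacement from the requested slots.
prob += pulp.lpSum(priority[f] * abs(t - requested[f]) * x[f][t]
                   for f in requested for t in slots)
for f in requested:                               # one slot per movement
    prob += pulp.lpSum(x[f][t] for t in slots) == 1
for t in slots:                                   # respect declared capacity
    prob += pulp.lpSum(x[f][t] for f in requested) <= capacity[t]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
alloc = {f: next(t for t in slots if x[f][t].value() == 1) for f in requested}
print(alloc)   # the lowest-priority movement absorbs the displacement
```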
Information EISSN 2078-2489. Published by MDPI AG, Basel, Switzerland.