
Future Internet, Volume 12, Issue 12 (December 2020) – 28 articles

Cover Story: The term Internet of Medical Things (IoMT) describes a group of medical devices enabling the collection and exchange of healthcare data. The combination of IoMT and healthcare can improve quality of life and provide better care services. Unfortunately, it is still difficult today for healthcare institutions to efficiently reconcile the compliance of their systems with new medical solutions. A technology that promises to overcome this issue is Distributed Ledger Technology, through its properties of decentralization, immutability, and transparency. The work from Bigini et al. gives an overview of the state of the art of blockchain-based systems for the IoMT, specifically addressing the challenges of reaching user-centricity, and highlights potential future directions. View this paper.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
Open Access Article
Vulnerability Assessment of Ubiquitous Cities Using the Analytic Hierarchy Process
Future Internet 2020, 12(12), 235; https://doi.org/10.3390/fi12120235 - 21 Dec 2020
Viewed by 629
Abstract
Urbanization is a challenge faced by most countries worldwide and leads to several problems. Given rapid advances in communication capabilities, transforming megacities into ubiquitous cities (U-cities) seems to be a potential way to mitigate the problems caused by urbanization. The extensive reliance of U-cities on information and communication technologies (ICTs), however, brings forth a new set of risks and vulnerabilities for these megacities. This research investigates the vulnerabilities of ICTs against man-made and natural hazards in a systematic way using the Analytic Hierarchy Process. The study identifies the vulnerabilities of different ICTs in U-cities and helps improve the system's resistance to various hazards. The task is performed by evaluating the level of disruption to the different technologies and areas under the identified man-made and natural hazards. The research provides insight into the working mechanisms of the ICTs involved and helps manage U-cities with more secure and sustainable services. The research identified that new ICT-based hazards have emerged and have become among the most influential hazards. It concluded that the vulnerabilities of U-cities differ significantly from those of conventional cities and warrant further study. The research recommends similar vulnerability studies for regional areas as well. Full article
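As a rough illustration of the Analytic Hierarchy Process the abstract mentions, the sketch below derives priority weights from a pairwise comparison matrix using the geometric-mean method. The 3x3 matrix and the hazard ratios in it are hypothetical, not taken from the paper.

```python
from math import prod

def ahp_weights(M):
    """Priority weights from a pairwise comparison matrix (geometric-mean method)."""
    n = len(M)
    gm = [prod(row) ** (1.0 / n) for row in M]  # geometric mean of each row
    s = sum(gm)
    return [g / s for g in gm]                   # normalize so weights sum to 1

# Hypothetical comparison of three hazards: A judged 3x as critical as B,
# 5x as critical as C; B judged 2x as critical as C.
M = [
    [1,     3,   5],
    [1 / 3, 1,   2],
    [1 / 5, 1 / 2, 1],
]
w = ahp_weights(M)
```

In a full AHP study one would also compute a consistency ratio to check that the pairwise judgments do not contradict each other.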
(This article belongs to the Special Issue Sustainable Smart City)

Open Access Article
EmuCD: An Emulator for Content Dissemination Protocols in Vehicular Networks
Future Internet 2020, 12(12), 234; https://doi.org/10.3390/fi12120234 - 21 Dec 2020
Viewed by 352
Abstract
The development of protocols for mobile networks, especially for vehicular ad-hoc networks (VANETs), presents great challenges in terms of testing under real conditions. Using a production network for testing communication protocols may not be feasible, and using small networks does not meet the requirements for mobility and scale found in real networks. The alternative is to use simulators and emulators, but vehicular network simulators do not meet all the requirements for effective testing. Aspects closely linked to the behaviour of the network nodes (mobility, radio communication capabilities, etc.) are particularly important in mobile networks, where delay tolerance is desired. This paper proposes a distributed emulator, EmuCD, where each network node is built in a container that consumes a data trace defining the node's mobility and connectivity in a real network (while also allowing the use of data from simulated networks). The emulated nodes interact directly with the container's operating system, updating the network conditions at each step of the emulation. In this way, our emulator allows protocols to be developed and tested independently of the emulator, so their code is directly portable to any hardware without changes or customization. Using the facilities of our emulator, we tested the InterPlanetary File System (IPFS), Sprinkler, and BitTorrent content dissemination protocols with mobility and connectivity data from a real vehicular network. The tests with a real VANET and with the emulator have shown that, under similar conditions, EmuCD performs close to the real VANET, lacking only the finer details that are extremely hard to emulate, such as varying hardware loads. Full article
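The core trace-driven idea — each emulation step consumes a trace record and updates per-node connectivity — can be sketched as below. The record format and the `car1`/`car2` trace are invented for illustration; EmuCD's actual trace schema may differ.

```python
# Hypothetical trace records: (time step, node, set of reachable neighbours).
trace = [
    (0, "car1", {"car2"}),
    (0, "car2", {"car1"}),
    (1, "car1", set()),   # car1 drives out of radio range
    (1, "car2", set()),
]

def replay(trace):
    """Replay a connectivity trace, yielding the full network state per time step."""
    state, current_t = {}, None
    for t, node, neighbors in trace:
        if current_t is not None and t != current_t:
            yield current_t, dict(state)   # emit snapshot before moving on
        current_t = t
        state[node] = neighbors
    if current_t is not None:
        yield current_t, dict(state)

states = {t: s for t, s in replay(trace)}
```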
(This article belongs to the Special Issue Delay-Tolerant Networking)

Open Access Article
Jointly Optimize the Residual Energy of Multiple Mobile Devices in the MEC–WPT System
Future Internet 2020, 12(12), 233; https://doi.org/10.3390/fi12120233 - 20 Dec 2020
Viewed by 298
Abstract
With the rapidly growing popularity of mobile devices (MDs), mobile edge computing (MEC) networks and wireless power transmission (WPT) will receive more attention. Naturally, by integrating these two technologies, the inherent energy consumption during task execution can be effectively reduced, and the collected energy can be used to charge the MD. In this article, our research focuses on extending the battery life of MDs by maximizing the harvested energy and minimizing the consumed energy in the MEC–WPT system, formulated as a residual energy maximization problem, which is a non-convex optimization problem. Building on the study of maximizing residual energy across multiple users and multiple time blocks, we propose an effective joint optimization method (i.e., jointly optimizing the energy harvesting time, task-offloading time, task-offloading size, and the MDs' CPU frequency), which combines convex optimization and the augmented Lagrangian to solve the residual energy maximization problem. We leverage Time Division Multiple Access (TDMA) to coordinate computation offloading. Simulation results show that our scheme outperforms the benchmark schemes in maximizing residual energy. In particular, our proposed scheme excels at reducing the failure rate across multiple MDs and can adapt to the task size to minimize the failure rate. Full article
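The residual-energy objective can be pictured as harvested energy minus local-computing and offloading energy. The sketch below uses a standard form of these terms (harvesting efficiency times power times time; CPU energy proportional to f²·cycles); all parameter names and values are illustrative, not the paper's exact model.

```python
def residual_energy(t_harvest, t_offload, cycles_local, f_cpu,
                    p_tx=0.1, p_harvest=0.5, eta=0.6, kappa=1e-28):
    """Residual energy = harvested WPT energy minus local CPU and radio energy.
    All default parameter values are hypothetical."""
    e_harvested = eta * p_harvest * t_harvest       # energy collected via WPT
    e_local = kappa * (f_cpu ** 2) * cycles_local   # CPU energy: kappa * f^2 * C
    e_offload = p_tx * t_offload                    # transmit energy for offloading
    return e_harvested - e_local - e_offload

# All else equal, a longer harvesting slot leaves more residual energy.
low = residual_energy(t_harvest=0.2, t_offload=0.1, cycles_local=1e8, f_cpu=1e9)
high = residual_energy(t_harvest=0.4, t_offload=0.1, cycles_local=1e8, f_cpu=1e9)
```

The paper's joint optimization searches over these time and frequency variables under TDMA constraints rather than evaluating them at fixed points as done here.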
(This article belongs to the Special Issue Fog and Mobile Edge Computing)

Open Access Review
Challenges and Possibilities of ICT-Mediated Assessment in Virtual Teaching and Learning Processes
Future Internet 2020, 12(12), 232; https://doi.org/10.3390/fi12120232 - 18 Dec 2020
Viewed by 452
Abstract
The transformations in educational environments due to the immersion of information and communication technologies (ICT) make it necessary to analyze the limits and possibilities of assessment in virtual training processes. This paper presents an analysis of the meanings of ICT-mediated assessment, establishing what kinds of knowledge are suitable for this type of evaluation, and the challenges and possibilities of virtual tools. To this end, we present a systematic review of ICT-mediated evaluation and assessment according to educational paradigms and their implementation. We highlight that contemporary pedagogical models and their implementation in ICT mediation tools show a trend towards quantitative and summative evaluation. Commonly used learning management systems (LMS) include several types of questions oriented to quantitative evaluation, with multiple-choice being the most common. However, new technological approaches like gamification, virtual reality, and mobile learning open new assessment possibilities. ICT educational platforms and new technologies demand new skills from all educational actors, such as digital literacy. Full article
(This article belongs to the Special Issue E-Learning and Technology Enhanced Learning)

Open Access Article
Proposal for a System Model for Offline Seismic Event Detection in Colombia
Future Internet 2020, 12(12), 231; https://doi.org/10.3390/fi12120231 - 18 Dec 2020
Viewed by 377
Abstract
This paper presents an integrated model for seismic event detection in Colombia using machine learning techniques. Machine learning is used to identify P-wave windows in historic records and hence detect seismic events. The proposed model has five modules that group the basic detection system procedures: the seismic data seeking, gathering, and storage module; the seismic record reading module; the seismological station analysis module; the sample selection module; and the classification process module. An explanation of each module is given in conjunction with practical recommendations for its implementation. The resulting model allows understanding the integration of the phases required for the design and development of an offline seismic event detection system. Full article
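For intuition about P-wave window detection, the classic STA/LTA (short-term average over long-term average) trigger is a common baseline: the ratio spikes when a sudden arrival breaks the background noise. This is a textbook technique, not the paper's machine learning model; the synthetic record and threshold are invented for illustration.

```python
def sta_lta(signal, sta_len=5, lta_len=20):
    """Short-term / long-term average ratio; a high ratio flags a sudden onset."""
    ratios = []
    for i in range(lta_len, len(signal)):
        sta = sum(abs(x) for x in signal[i - sta_len:i]) / sta_len
        lta = sum(abs(x) for x in signal[i - lta_len:i]) / lta_len
        ratios.append(sta / lta if lta > 0 else 0.0)
    return ratios

# Synthetic record: quiet noise followed by a sudden arrival at sample 40.
record = [0.1] * 40 + [2.0] * 10
r = sta_lta(record)
# Map the first ratio exceeding the threshold back to a record index.
onset = next(i for i, v in enumerate(r) if v > 3.0) + 20
```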

Open Access Article
IgA Nephropathy Prediction in Children with Machine Learning Algorithms
Future Internet 2020, 12(12), 230; https://doi.org/10.3390/fi12120230 - 17 Dec 2020
Viewed by 263
Abstract
Immunoglobulin A nephropathy (IgAN) is the most common primary glomerular disease worldwide and a major cause of renal failure. IgAN prediction in children with machine learning algorithms has rarely been studied. We retrospectively analyzed electronic medical records from the Nanjing Eastern War Zone Hospital and chose eXtreme Gradient Boosting (XGBoost), random forest (RF), CatBoost, support vector machine (SVM), k-nearest neighbor (KNN), and extreme learning machine (ELM) models to predict the probability that a patient would or would not reach end-stage renal disease (ESRD) within five years. We used the chi-square test to select the 16 most relevant features as the input of the model, and designed a decision-making system (DMS) for IgAN prediction in children based on XGBoost and the Django framework. The receiver operating characteristic (ROC) curve was used to evaluate the performance of the models, and XGBoost performed best in the comparison. The AUC value, accuracy, precision, recall, and F1-score of XGBoost were 85.11%, 78.60%, 75.96%, 76.70%, and 76.33%, respectively. The XGBoost model is useful for physicians and pediatric patients in providing predictions regarding IgAN. As an advantage, a DMS can be designed based on the XGBoost model to assist physicians in effectively treating IgAN in children and preventing deterioration. Full article
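The chi-square feature selection step the abstract describes scores each feature by its association with the label. A minimal sketch for a binary feature and binary label (2x2 contingency table) is below; the toy label and feature vectors are hypothetical, not patient data.

```python
def chi_square(feature, label):
    """Chi-square statistic for a binary feature against a binary label;
    higher scores mean stronger association, so such features are kept."""
    n = len(feature)
    table = {(f, l): 0 for f in (0, 1) for l in (0, 1)}
    for f, l in zip(feature, label):
        table[(f, l)] += 1
    chi2 = 0.0
    for f in (0, 1):
        for l in (0, 1):
            row = table[(f, 0)] + table[(f, 1)]   # marginal count of feature value f
            col = table[(0, l)] + table[(1, l)]   # marginal count of label value l
            expected = row * col / n
            if expected:
                chi2 += (table[(f, l)] - expected) ** 2 / expected
    return chi2

# A feature perfectly aligned with the label scores higher than an unrelated one.
label    = [0, 0, 0, 0, 1, 1, 1, 1]
aligned  = [0, 0, 0, 0, 1, 1, 1, 1]
shuffled = [0, 1, 0, 1, 0, 1, 0, 1]
```

Ranking all features by this score and keeping the top 16 mirrors the selection step described in the abstract.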
(This article belongs to the Special Issue Curative Power of Medical Data 2020)

Open Access Article
Before and after the Quarantine: An Approximate Study on the Psychological Impact of COVID-19 on the Italian Population during the Lockdown Period
Future Internet 2020, 12(12), 229; https://doi.org/10.3390/fi12120229 - 15 Dec 2020
Viewed by 330
Abstract
COVID-19 (Corona-Virus Disease 2019) in Italy and the measures that were adopted to contain its diffusion had a strong impact on people's quality of life and mental health. The objective of the study was to quantify the psychological impact of the lockdown period on the general Italian population during the two weeks when the COVID-19 emergency in Italy was at its peak. The study (1556 adults) was conducted from April 6th to April 12th, 2020. A survey was developed through Google Forms in order to assess different psychological measures (Self-Efficacy, Locus of Control, Social Connectedness, Sense of Virtual Community, Flourishing, Positive and Negative Affect, Life Satisfaction, and Risk Propensity). The results were then compared to reference data. The lockdown period increased arousal mainly for negative emotions, but also for positive emotions, and quality of life seemed to be reduced. From a psychosocial point of view, while social connectedness decreased during lockdown, probably because of isolation and social distancing, the sense of virtual community seemed to increase in the same period. Interestingly, we found that self-efficacy increased during the lockdown period while, at the same time, the locus of control became more externalized and risk propensity decreased. The results are discussed considering previous literature, and a coherent theoretical framework is proposed in order to refine the forecasting model for the psychological impact of the lockdown. Full article
Open Access Article
An Analysis Method for Interpretability of CNN Text Classification Model
Future Internet 2020, 12(12), 228; https://doi.org/10.3390/fi12120228 - 13 Dec 2020
Viewed by 387
Abstract
With the continuous development of artificial intelligence, text classification has gradually changed from knowledge-based methods to methods based on statistics and machine learning. Among them, classifying text with a convolutional neural network (CNN) model is a very important and efficient approach. Text data are a kind of sequence data, but the time sequentiality of general text data is relatively weak, so text classification usually depends less on the sequential structure of the full text. Therefore, CNN-based text classification has gradually become a research hotspot. For machine learning, and especially deep learning, model interpretability has increasingly become the focus of academic research and industrial applications, and a key issue for the further development and application of deep learning technology. We therefore recommend using backtracking analysis to conduct in-depth research on deep learning models. This paper proposes an analysis method for the interpretability of a CNN text classification model. Our method can perform multi-angle analysis of the discriminant results of multi-class and multi-label classification tasks through backtracking analysis of model predictions. Finally, the analysis results can be displayed from multiple dimensions using visualization technology. The method is verified on the representative IMDB (Internet Movie Database) text classification dataset, and the results show that the model can be effectively analyzed using our method. Full article
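The backtracking idea can be illustrated with the simplest case: in a text CNN, max pooling keeps one activation per filter, and tracing that activation back identifies the text window that triggered it. The sketch below uses scalar "embeddings" and one filter; the tokens and numbers are invented, and a real CNN would use embedding vectors and many filters.

```python
def max_activation_window(embeddings, filt):
    """Slide a 1-D conv filter over token embeddings and backtrack the
    max-pooled activation to the window position that produced it."""
    k = len(filt)
    best_score, best_pos = float("-inf"), 0
    for i in range(len(embeddings) - k + 1):
        score = sum(e * w for e, w in zip(embeddings[i:i + k], filt))
        if score > best_score:
            best_score, best_pos = score, i
    return best_pos, best_score

# Toy scalar "embeddings" (hypothetical sentiment scores per token).
tokens = ["the", "movie", "was", "really", "great", "fun"]
emb = [0.1, 0.0, 0.2, 0.6, 0.9, 0.8]
pos, score = max_activation_window(emb, filt=[0.2, 0.3, 0.5])
evidence = tokens[pos:pos + 3]   # the phrase this filter "looked at"
```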
(This article belongs to the Section Smart System Infrastructure and Applications)

Open Access Article
Performance Analysis of Cache Based on Popularity and Class in Named Data Network
Future Internet 2020, 12(12), 227; https://doi.org/10.3390/fi12120227 - 09 Dec 2020
Viewed by 394
Abstract
Communication networks are growing with some unique characteristics: consumers repeatedly request the same content from the server, local demand trends are similar, and requests change dynamically within a specific period. Therefore, a different network paradigm is needed to replace the IP network, namely the Named Data Network (NDN). The content store, a crucial component of NDN nodes, is a limited resource, so a cache mechanism is needed to optimize the router's content store by exploiting the characteristics of the different content services in the network. This paper proposes a new caching algorithm called Cache Based on Popularity and Class (CAPIC) with a dynamic mechanism, and also presents the static method in detail. The goal of Static-CAPIC is to enhance the total cache hit ratio on the network by pre-determining the cache proportion for each content class. However, this technique cannot control the cache hit ratio for the priority class. Therefore, Dynamic-CAPIC provides the flexibility to change the cache proportions based on the frequency of requests in real time; its formula takes consumers' requests into account over time. It yields a higher cache hit ratio for the priority content class. This method outperforms Static-CAPIC and the LCD+sharing scheme on the total network cache hit ratio and channels the gain to the priority class. Full article
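The dynamic part of the idea — splitting cache capacity across content classes in proportion to observed request frequency — can be sketched in a few lines. The request log and class names are hypothetical, and real CAPIC additionally accounts for content popularity within each class.

```python
from collections import Counter

def dynamic_proportions(requests, cache_size):
    """Split cache slots across content classes in proportion to observed
    request frequency (a simplified version of the dynamic idea)."""
    counts = Counter(cls for cls, _ in requests)
    total = sum(counts.values())
    # Each class gets at least one slot, then shares proportional to demand.
    return {cls: max(1, round(cache_size * c / total)) for cls, c in counts.items()}

# Hypothetical request log: (content class, content id) pairs.
log = [("video", 1), ("video", 2), ("video", 1), ("news", 9)]
slots = dynamic_proportions(log, cache_size=8)
```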
(This article belongs to the Section Internet of Things)

Open Access Article
Role of Artificial Intelligence in Shaping Consumer Demand in E-Commerce
Future Internet 2020, 12(12), 226; https://doi.org/10.3390/fi12120226 - 08 Dec 2020
Viewed by 392
Abstract
The advent and incorporation of technology in businesses have reformed operations across industries. Notably, major technical shifts in e-commerce aim to influence customer behavior in favor of some products and brands. Artificial intelligence (AI) comes on board as an essential innovative tool for personalization and for customizing products to meet specific demands. This research finds that, despite the contribution of AI systems to e-commerce, their ethical soundness is a contentious issue, especially regarding the concept of explainability. The study adopted word cloud analysis, voyance analysis, and concordance analysis to gain a detailed understanding of how researchers have used the idea of explainability in the context of AI. Motivated by a corpus analysis, this research lays the groundwork for a uniform front, thus contributing to a scientific breakthrough that seeks to formulate Explainable Artificial Intelligence (XAI) models. XAI is a machine learning field that inspects and tries to understand the models and steps involved in how the black-box decisions of AI systems are made; it provides insights into the decision points, variables, and data used to make a recommendation. This study suggests that, to deploy XAI systems, ML models should be improved to make them interpretable and comprehensible. Full article
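Concordance analysis, as used in the study, is typically a keyword-in-context (KWIC) listing: every occurrence of a term shown with a few words of surrounding context. A minimal sketch follows; the one-line corpus is invented for illustration.

```python
def concordance(corpus, keyword, window=2):
    """Keyword-in-context lines: each occurrence of `keyword` with
    `window` words of context on either side."""
    words = corpus.lower().split()
    lines = []
    for i, w in enumerate(words):
        if w == keyword:
            left = " ".join(words[max(0, i - window):i])
            right = " ".join(words[i + 1:i + 1 + window])
            lines.append(f"{left} [{keyword}] {right}")
    return lines

text = "models need explainability because explainability builds user trust"
hits = concordance(text, "explainability")
```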
(This article belongs to the Special Issue Data Science and Knowledge Discovery)

Open Access Article
Smart Site Diversity for a High Throughput Satellite System with Software-Defined Networking and a Virtual Network Function
Future Internet 2020, 12(12), 225; https://doi.org/10.3390/fi12120225 - 07 Dec 2020
Viewed by 367
Abstract
High Throughput Satellite (HTS) systems aim to push data rates to the order of Terabit/s, making use of Extremely High Frequencies (EHF) or free-space optical (FSO) links in the feeder links. However, one challenge that needs to be addressed is that the use of such high frequencies makes the feeder links vulnerable to atmospheric conditions, which can effectively disable channels at times or temporarily increase the bit error rates. One way to cope with the problem is to introduce site diversity and forward the data through gateways not affected, or at least less constrained, by adverse conditions. In this paper, a virtual network function (VNF) built on reinforcement learning defines a smart routing service for an HTS system. Experiments were conducted on an emulated ground–satellite system in CloudLab, testing a VNF implementation of the approach with software-defined networking virtual switches; the results indicate the expected performance of the proposed method. Full article
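A simple reinforcement learning scheme for gateway selection is an epsilon-greedy bandit: mostly forward through the gateway with the best observed throughput, occasionally probe the others. This is a generic sketch of that idea, not the paper's algorithm; the gateway names and reward values are hypothetical.

```python
import random

def pick_gateway(q_values, epsilon=0.1, rng=random):
    """Epsilon-greedy choice: explore a random gateway with prob. epsilon,
    otherwise exploit the one with the best observed throughput estimate."""
    if rng.random() < epsilon:
        return rng.choice(list(q_values))
    return max(q_values, key=q_values.get)

def update(q_values, gw, reward, alpha=0.5):
    """Running-average update of the throughput estimate for one gateway."""
    q_values[gw] += alpha * (reward - q_values[gw])

# Two hypothetical feeder-link gateways; gw_b is degraded by rain fade.
q = {"gw_a": 0.0, "gw_b": 0.0}
random.seed(1)
for _ in range(200):
    gw = pick_gateway(q)
    reward = 0.9 if gw == "gw_a" else 0.2   # simulated link quality
    update(q, gw, reward)
```

After the loop, the learner has converged on routing most traffic through the healthy gateway.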

Open Access Article
Predictive Maintenance (PdM) Structure Using Internet of Things (IoT) for Mechanical Equipment Used into Hospitals in Rwanda
Future Internet 2020, 12(12), 224; https://doi.org/10.3390/fi12120224 - 07 Dec 2020
Viewed by 540
Abstract
The success of all industries relates to attaining client satisfaction through a high level of service and productivity, and a main factor in that success is how well equipment is maintained. To date, Rwandan hospitals, which always have long queues of patients waiting for service, perform repair after failure as their common maintenance practice, which may involve unplanned resources, cost, and time, and may completely or partially interrupt the remaining hospital activities. Aiming to reduce unplanned equipment downtime and increase reliability, this paper proposes a Predictive Maintenance (PdM) structure using the Internet of Things (IoT) to predict failures early, before they happen, for mechanical equipment used in Rwandan hospitals. Because prediction relies on data, the structure design includes a simple real-time data collector prototype developed to gather data for predictive model construction and equipment health status classification. Real-time data in the form of time series were collected from selected equipment components in King Faisal Hospital and later used to build the proposed predictive time series model employed in the structure. A Long Short-Term Memory (LSTM) neural network model is used to learn from the data and achieves accuracies of 90% and 96% on two different selected components. Full article
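Before a time series model such as an LSTM can classify equipment health, the raw sensor stream is usually windowed into features. The sketch below computes rolling mean and peak-to-peak range per window, two simple health indicators; the vibration readings and window width are invented for illustration.

```python
def window_features(series, width=4):
    """Rolling mean and peak-to-peak range per non-overlapping window;
    simple health indicators a predictive-maintenance model could learn from."""
    feats = []
    for i in range(0, len(series) - width + 1, width):
        w = series[i:i + width]
        feats.append({"mean": sum(w) / width, "range": max(w) - min(w)})
    return feats

# Hypothetical vibration readings: stable at first, then increasingly erratic.
vibration = [1.0, 1.1, 0.9, 1.0, 1.0, 2.5, 0.4, 3.0]
f = window_features(vibration)
```

A widening range from one window to the next is the kind of early degradation signal a PdM model would be trained to flag.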
(This article belongs to the Special Issue Internet of Things (IoT) for Industry 4.0)

Open Access Article
A Model for Creating Interactive eBooks for eLearning
Future Internet 2020, 12(12), 223; https://doi.org/10.3390/fi12120223 - 07 Dec 2020
Viewed by 368
Abstract
In recent decades, electronic books have revolutionized the publishing world. One area of application is education, where electronic books can be used as educational resources to implement learning strategies about content and in eLearning environments. For this, it is necessary to introduce interactive elements into electronic books that turn the reader into an active actor in the reading process. However, ebooks have a limitation regarding their creation process: the tools can be user-oriented or programmer-oriented. The former are intuitive to use and have user-friendly interfaces, but they offer a reduced number of functionalities to add to books. The latter are aimed at programmers and allow implementing any functionality, but they limit the number of content creators who can use them. The main motivation of this work is to propose an intermediate solution that offers a wide range of functionalities while not requiring deep programming knowledge. The solution in this article is novel in that it proposes the use of extensible markup language (XML) documents to specify the structure of the electronic book in such a way that processing them produces the electronic book. Full article
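The XML-driven approach can be sketched as: a declarative spec describes chapters and interactive elements, and a processor walks the tree to generate the book. The `<ebook>`/`<chapter>`/`<quiz>` schema below is entirely hypothetical; the article's actual schema may differ.

```python
import xml.etree.ElementTree as ET

# Hypothetical eBook specification (not the article's real schema).
SPEC = """
<ebook title="Intro to eLearning">
  <chapter title="Basics">
    <text>What is eLearning?</text>
    <quiz question="Is an eBook interactive?" answer="yes"/>
  </chapter>
</ebook>
"""

def render(spec):
    """Process the XML spec into a flat list of (element kind, content) pairs,
    standing in for the generated interactive eBook."""
    root = ET.fromstring(spec)
    pages = [("title", root.get("title"))]
    for chapter in root.iter("chapter"):
        pages.append(("chapter", chapter.get("title")))
        for el in chapter:
            pages.append((el.tag, el.get("question") or el.text))
    return pages

book = render(SPEC)
```

A content creator only edits the XML; the processing step, written once by programmers, supplies the interactivity.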
(This article belongs to the Special Issue E-Learning and Technology Enhanced Learning)

Open Access Article
Comparing Blockchain Standards and Recommendations
Future Internet 2020, 12(12), 222; https://doi.org/10.3390/fi12120222 - 07 Dec 2020
Viewed by 411
Abstract
Since the introduction of Bitcoin, the term "blockchain" has attracted many start-ups and companies over the years, especially in the financial sector. However, the technology is evolving faster than standardization frameworks. This has left the industry in the position of having to use this emerging technology without the backing of any international standards organization, either for the technology itself or for a blockchain-specific information security framework. In times of the General Data Protection Regulation and growing international trade conflicts, protecting information is more relevant than ever. Standardizing blockchains is a call to raise the development of information technologies to the next level. Therefore, this paper provides an overview of standardization organizations' publications on blockchains/distributed ledger technologies, a set of comparison criteria for future work, and a comparison of the existing standards work itself. With that information, aligning with existing standardization efforts becomes easier, and it might even be possible to create frameworks where none exist at the moment. Full article
(This article belongs to the Special Issue Information and Future Internet Security, Trust and Privacy)
Open Access Article
A Bayesian Approach Based on Bayes Minimum Risk Decision for Reliability Assessment of Web Service Composition
Future Internet 2020, 12(12), 221; https://doi.org/10.3390/fi12120221 - 04 Dec 2020
Viewed by 344
Abstract
Web service composition is the process of combining and reusing existing web services to create new business processes that satisfy specific user requirements. Reliability plays an important role in ensuring the quality of web service composition. However, owing to the flexibility and complexity of such architectures, it is difficult to estimate reliability sufficiently. In this paper, the authors propose a method to estimate the reliability of web service compositions based on Bayes reliability assessment, treating it as a decision-making problem; this improves the testing efficiency and accuracy of such methods. To this end, the authors focus on fully utilizing prior information about web services to increase the accuracy of the prior distributions, and construct a Markov model in terms of the reliabilities of the web composition and each web service to integrate the limited test data. The authors further propose a method of minimum risk (MMR) to calculate the initial values of hyperparameters satisfying the constraint of minimal risk of a wrong decision. Experiments demonstrate that the proposed method efficiently utilizes prior module-level failure information: compared with the Bayesian Monte Carlo method (BMCM) and the expert scoring method (ESM), when the number of failures increased from 0 to 5, it reduced the required number of test cases by 19.8% to 28.9% and 6.1% to 14.1%, respectively, improving the reliability assessment of web service compositions and reducing the expenses incurred by system-level reliability testing and demonstration. Full article
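The Bayes-minimum-risk idea can be illustrated in its simplest form: a Beta prior over reliability is updated with test outcomes, and the accept/reject decision minimizes expected loss. The prior parameters, loss values, and test counts below are hypothetical, and the paper's Markov model over composed services is much richer than this single-service sketch.

```python
def posterior_reliability(successes, failures, a=1.0, b=1.0):
    """Mean of the Beta posterior over reliability given test outcomes.
    (a, b) is the Beta prior; in the paper's setting it would be set from
    prior web-service failure information."""
    return (a + successes) / (a + b + successes + failures)

def min_risk_decision(p_reliable, loss_accept_bad=10.0, loss_reject_good=1.0):
    """Accept the composition iff the expected loss of accepting is lower
    than the expected loss of rejecting (Bayes minimum risk rule)."""
    risk_accept = (1 - p_reliable) * loss_accept_bad
    risk_reject = p_reliable * loss_reject_good
    return "accept" if risk_accept < risk_reject else "reject"

p = posterior_reliability(successes=48, failures=2)
decision = min_risk_decision(p)
```

Because accepting a bad composition is costlier than rejecting a good one, the rule only accepts once the posterior reliability is high enough to tip the expected-loss balance.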
(This article belongs to the Section Big Data and Augmented Intelligence)

Open Access Article
Empirically Investigating Extraneous Influences on the “APCO” Model—Childhood Brand Nostalgia and the Positivity Bias
Future Internet 2020, 12(12), 220; https://doi.org/10.3390/fi12120220 - 02 Dec 2020
Viewed by 361
Abstract
Pokémon Go is one of the most successful mobile games of all time. Millions played and still play this mobile augmented reality (AR) application, although severe privacy issues are pervasive in the app due to its use of several sensors, such as location and camera. In general, individuals regularly use online services and mobile apps even though they might know that such use is associated with high privacy risks. This seemingly contradictory behavior of users is analyzed from a variety of different perspectives in the information systems domain. One of these perspectives evaluates privacy-related decision-making processes based on concepts from behavioral economics. We follow this line of work by empirically testing one exemplary extraneous factor within the "enhanced APCO model" (antecedents–privacy concerns–outcomes). Specific empirical tests of such biases are rare in the literature, which is why we propose and empirically analyze the extraneous influence of a positivity bias. In our case, we hypothesize that the bias is induced by childhood brand nostalgia towards the Pokémon franchise. We analyze our proposition in the context of an online survey with 418 active players of the game. Our results indicate that childhood brand nostalgia influences the privacy calculus by exerting a large effect on the benefits within the trade-off and, therefore, causing a higher use frequency. Our work shows two important implications. First, the behavioral economics perspective on privacy provides additional insights relative to previous research; however, the effects of several other biases and heuristics have to be tested in future work. Second, relying on nostalgia represents an important, but also double-edged, instrument for practitioners to market new services and applications. Full article
Open Access Article
Malware Classification Based on Shallow Neural Network
Future Internet 2020, 12(12), 219; https://doi.org/10.3390/fi12120219 - 02 Dec 2020
Viewed by 368
Abstract
The emergence of a large amount of new malicious code poses a serious threat to network security, and most of it consists of derivative versions of existing malicious code. Classifying malicious code helps to analyze the evolutionary trends of malicious code families and to trace the sources of cybercrime. Existing malware classification methods emphasize the depth of the neural network, which leads to long training times and a large computational cost. In this work, we propose the shallow neural network-based malware classifier (SNNMAC), a malware classification model based on shallow neural networks and static analysis. Our approach bridges the gap between precise but slow methods and fast but less precise methods in existing work. For each sample, we first generate n-grams from the opcode sequence extracted from the binary file with a decompiler. An improved n-gram algorithm based on control transfer instructions is designed to reduce the size of the n-gram dataset. The SNNMAC then exploits a shallow neural network, replacing the fully connected layer and softmax with an average pooling layer and hierarchical softmax, to learn from the dataset and perform classification. We perform experiments on the Microsoft malware dataset. The evaluation results show that the SNNMAC outperforms most related works with 99.21% classification precision and reduces the training time by more than half compared with methods using deep neural networks (DNNs). Full article
(This article belongs to the Section Cybersecurity)
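The opcode n-gram step described in the SNNMAC abstract can be illustrated with a minimal sketch. This shows plain n-gram counting only; the paper's improved reduction based on control transfer instructions is not reproduced here, and the opcode sequence below is a hypothetical example, not data from the Microsoft malware dataset:

```python
from collections import Counter

def opcode_ngrams(opcodes, n=3):
    """Slide a window of size n over an opcode sequence and count each n-gram."""
    return Counter(tuple(opcodes[i:i + n]) for i in range(len(opcodes) - n + 1))

# Hypothetical opcode sequence, as would be extracted from a binary by a decompiler
seq = ["push", "mov", "call", "mov", "call", "ret"]
grams = opcode_ngrams(seq, n=2)
```

The resulting n-gram counts would then serve as the input features for the shallow classifier.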
Open Access Article
Pat-in-the-Loop: Declarative Knowledge for Controlling Neural Networks
Future Internet 2020, 12(12), 218; https://doi.org/10.3390/fi12120218 - 02 Dec 2020
Viewed by 379
Abstract
The dazzling success of neural networks over natural language processing systems is imposing an urgent need to control their behavior with simpler, more direct declarative rules. In this paper, we propose Pat-in-the-Loop as a model to control a specific class of syntax-oriented neural networks by adding declarative rules. In Pat-in-the-Loop, distributed tree encoders make it possible to exploit parse trees in neural networks, heat parse trees visualize their activation, and parse subtrees are used as declarative rules in the neural network. Hence, Pat-in-the-Loop is a model for including human control in specific natural language processing (NLP)-neural network (NN) systems that exploit syntactic information; we generically call this human Pat. A pilot study on question classification showed that declarative rules representing human knowledge, injected by Pat, can be effectively used in these neural networks to ensure correctness, relevance, and cost-effectiveness. Full article
(This article belongs to the Special Issue Natural Language Engineering: Methods, Tasks and Applications)
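The idea of using parse subtrees as declarative rules can be sketched as a subtree-containment check. The label-first tuple encoding and the example sentence are illustrative assumptions, not the paper's actual distributed tree representation:

```python
def contains_subtree(tree, pattern):
    """Return True if `pattern` occurs as a subtree of `tree`.
    A tree is a (label, child, child, ...) tuple; leaves are plain strings."""
    if tree == pattern:
        return True
    if isinstance(tree, tuple):
        # Recurse into the children (everything after the node label)
        return any(contains_subtree(child, pattern) for child in tree[1:])
    return False

# Hypothetical parse tree for "what is the capital"
tree = ("S", ("WHNP", "what"),
             ("VP", ("V", "is"), ("NP", ("DT", "the"), ("N", "capital"))))
rule_subtree = ("WHNP", "what")   # a declarative rule keyed on this subtree could fire here
```

A rule injected by Pat would associate such a subtree with a target class or a correction to the network's output.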
Open Access Article
Interactions between Virtual Spaces and Schools: A Collective Case Study
Future Internet 2020, 12(12), 217; https://doi.org/10.3390/fi12120217 - 02 Dec 2020
Viewed by 351
Abstract
Information and Communication Technologies (ICT) play an important role in the current educational panorama. Their presence in educational centers has increased in the last decade due to the constant development of educational policies focused on their inclusion in the classroom. However, the integration of ICT into educational spaces and practices has still not reached the levels desired by international organizations such as the OECD and UNESCO. The aim of this study is to understand the nature of the interaction between virtual spaces and school spaces, and how this interaction affects the development of students’ learning processes. To this end, we follow an interpretive research methodology using a collective case study with four students from Spain. The results highlight the scarce integration of ICT into educational centers and teaching practices. However, virtual spaces play an important role in the development of other learning processes, such as informed learning, informal learning, and learning outside the school context. In this work, we also provide a brief overview of changes in the current educational situation due to the COVID-19 pandemic. Full article
Open Access Article
An Extended Car-Following Model Considering Generalized Preceding Vehicles in V2X Environment
Future Internet 2020, 12(12), 216; https://doi.org/10.3390/fi12120216 - 28 Nov 2020
Viewed by 411
Abstract
Vehicle-to-everything (V2X) technology will significantly enhance drivers’ information perception ability and assist them in optimizing car-following behavior. Using V2X technology, drivers can obtain motion state information about the front vehicle, non-neighboring front vehicles, and front vehicles in the adjacent lanes (collectively referred to as generalized preceding vehicles in this research). However, understanding of the impact of such information on car-following behavior and traffic flow is limited. In this paper, a car-following model considering the average velocity of generalized preceding vehicles (GPV) is proposed to explore this impact and is calibrated with the next generation simulation (NGSIM) data using a genetic algorithm. The neutral stability condition of the model is derived via linear stability analysis. Numerical simulations of the starting, braking, and disturbance propagation processes are implemented to further study the features of the established model and traffic flow stability. The results suggest that the fitting accuracy of the GPV model is 40.497% higher than that of the full velocity difference (FVD) model. Good agreement between the theoretical analysis and the numerical simulations reveals that motion state information about GPV can stabilize the traffic flow of following vehicles and thus alleviate traffic congestion. Full article
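The abstract does not give the model's exact equations, but the general shape of an FVD-style update extended with a relaxation toward the average GPV velocity can be sketched as follows. The optimal-velocity function and all coefficients here are illustrative assumptions, not the paper's calibrated values:

```python
import math

def optimal_velocity(headway, v_max=30.0, h_c=25.0):
    # Bando-style optimal velocity function (a common choice, not necessarily the paper's)
    return v_max / 2.0 * (math.tanh(headway / h_c - 1.0) + math.tanh(1.0))

def gpv_acceleration(v, headway, dv_front, v_gpv_avg,
                     alpha=0.5, lam=0.3, gamma=0.2):
    """FVD-style acceleration extended with a term pulling the follower
    toward the average velocity of the generalized preceding vehicles (GPV)."""
    return (alpha * (optimal_velocity(headway) - v)   # relax toward optimal velocity
            + lam * dv_front                          # velocity difference with the leader
            + gamma * (v_gpv_avg - v))                # extra term: average GPV velocity

a = gpv_acceleration(v=10.0, headway=30.0, dv_front=1.0, v_gpv_avg=12.0)
```

When the GPV average velocity exceeds the follower's speed, the extra term adds acceleration, which is the stabilizing mechanism the abstract describes.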
Open Access Article
Keeping Models Consistent between Pretraining and Translation for Low-Resource Neural Machine Translation
Future Internet 2020, 12(12), 215; https://doi.org/10.3390/fi12120215 - 27 Nov 2020
Viewed by 369
Abstract
Recently, the pretraining of models has been successfully applied to unsupervised and semi-supervised neural machine translation. A cross-lingual language model uses a pretrained masked language model to initialize the encoder and decoder of the translation model, which greatly improves the translation quality. However, because of a mismatch in the number of layers, the pretrained model can only initialize part of the decoder’s parameters. In this paper, we use a layer-wise coordination transformer and a consistent pretraining translation transformer instead of a vanilla transformer as the translation model. The former has only an encoder, and the latter has an encoder and a decoder, but the encoder and decoder have exactly the same parameters. Both models can guarantee that all parameters in the translation model can be initialized by the pretrained model. Experiments on the Chinese–English and English–German datasets show that compared with the vanilla transformer baseline, our models achieve better performance with fewer parameters when the parallel corpus is small. Full article
(This article belongs to the Special Issue Semantic Aspects in Natural Language Processing)
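The key property claimed for both models, that every parameter of the translation model is covered by pretrained initialization, can be mimicked by tying the encoder and decoder layers to the same underlying weight objects. A toy sketch of the tying mechanism only, not the actual transformer code:

```python
class SharedLayer:
    """Stand-in for a transformer layer; encoder and decoder layers
    constructed from the same weight dict share parameters by reference."""
    def __init__(self, weights):
        self.weights = weights

shared = [{"W": [0.0] * 4} for _ in range(6)]   # one (tiny) weight set per layer
encoder = [SharedLayer(w) for w in shared]
decoder = [SharedLayer(w) for w in shared]      # decoder tied to the same weights

# "Loading pretrained weights" into the encoder initializes the decoder too
encoder[0].weights["W"][0] = 1.5
```

Because the decoder holds references to the same objects, a pretrained encoder-only checkpoint leaves no decoder parameter uninitialized, which is the mismatch the paper sets out to avoid.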
Open Access Article
Predicting Activities of Daily Living with Spatio-Temporal Information
Future Internet 2020, 12(12), 214; https://doi.org/10.3390/fi12120214 - 27 Nov 2020
Viewed by 314
Abstract
The smart home has begun playing an important role in supporting independent living by monitoring the activities of daily living, typically for the elderly who live alone. Activity recognition in smart homes has been studied by many researchers with much effort spent on modeling user activities to predict behaviors. Most people, when performing their daily activities, interact with multiple objects both in space and through time. The interactions between user and objects in the home can provide rich contextual information in interpreting human activity. This paper shows the importance of spatial and temporal information for reasoning in smart homes and demonstrates how such information is represented for activity recognition. Evaluation was conducted on three publicly available smart-home datasets. Our method achieved an average recognition accuracy of more than 81% when predicting user activities given the spatial and temporal information. Full article
(This article belongs to the Special Issue Data Science and Knowledge Discovery)
Open Access Article
Complex Cognitive Systems and Their Unconscious. Related Inspired Conjectures for Artificial Intelligence
Future Internet 2020, 12(12), 213; https://doi.org/10.3390/fi12120213 - 27 Nov 2020
Viewed by 311
Abstract
The aim of this article is to propose a conceptual framework, constructs, and conjectures to guide future research aimed at designing and implementing versions of Artificial Intelligence that encompass an artificially simulated unconscious suitable for human-like artificial cognitive processing. The article considers the concept of the unconscious in psychoanalysis, whose interdisciplinary understanding is considered an unavoidable property of sufficiently complex cognitive processing. We elaborate on the possibility of an artificial unconscious able both to self-acquire properties through usage and to self-profile through a supposed implicit, parasitic usage of explicit cognitive processing. Memory activities are considered integrated into cognitive processing: memory is no longer only storage, and remembering is no longer only finding. We elaborate on the artificial unconscious as an implicit, usage-dependent, self-profiling, and emergent process. Conceptual characteristics of the research project include the implementation of weighted, networked, fuzzified memorizations; self-generated networks of links between inter-relationships as nodes; self-variation of the intensity of the links according to use; and the activation of internal self-processes, such as the introduction of fictitious links intended as variations and combinations of the current ones. Application examples suitable for experimental implementation are also discussed with reference to chatbot technology extended with the features of an artificial unconscious; thus, we introduce the concept of the AU-chatbot. The main purpose is to allow artificial cognitive processing to acquire suitable human-like attitudes in representing, interfacing, and learning, potentially important in supporting and complementing human-centered activities. Examples of expected features are the ability to combine current and unconscious links to perform cognitive processing such as representing, deciding, memorizing, and solving equivalencies, as well as learning meta-profiles, for instance in supporting doctor–patient interactions and educational activities. We also discuss possible technologies suitable for implementing experiments on the artificial unconscious. Full article
(This article belongs to the Section Smart System Infrastructure and Applications)
Open Access Technical Note
About Rule-Based Systems: Single Database Queries for Decision Making
Future Internet 2020, 12(12), 212; https://doi.org/10.3390/fi12120212 - 27 Nov 2020
Viewed by 410
Abstract
One of the developmental directions of Future Internet technologies is the implementation of artificial intelligence systems for manipulating data and the surrounding world in a more complex way. Rule-based systems, very accessible for people’s decision-making, play an important role in the family of computational intelligence methods. The use of decision-making rules, along with decision trees, is one of the simplest forms of presenting complex decision-making processes. Decision support systems, according to the cross-industry standard process for data mining (CRISP-DM) framework, require the final embedding of the learned model in a given computer infrastructure, integrated circuits, etc. In this work, we deal with placing a learned rule-based decision support model in a database environment, specifically in SQL database tables. Our main goal is to place the previously trained model in the database and apply it by means of single queries. In our work, we assume that the decision-making rules applied are mutually consistent, and additionally the Minimal Description Length (MDL) rule is introduced. We propose a universal solution for any IF-THEN rule induction algorithm. Full article
(This article belongs to the Special Issue Data Science and Knowledge Discovery)
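A minimal sketch of the idea, storing IF-THEN rules as rows of attribute bounds and applying the trained model with a single SELECT, using SQLite. The table schema, attributes, and rules below are hypothetical illustrations, not the paper's:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Each row encodes one IF-THEN rule as attribute bounds plus a decision
cur.execute("""CREATE TABLE rules (
    temp_min REAL, temp_max REAL, hum_min REAL, hum_max REAL, decision TEXT)""")
cur.executemany("INSERT INTO rules VALUES (?,?,?,?,?)", [
    (30.0, 100.0, 0.0, 50.0, "alert"),   # IF temp >= 30 AND hum < 50 THEN alert
    (0.0, 30.0, 0.0, 100.0, "normal"),   # IF temp < 30 THEN normal
])
conn.commit()

def decide(temp, hum):
    """Apply the stored rule model with a single SELECT query."""
    row = cur.execute(
        "SELECT decision FROM rules WHERE ? >= temp_min AND ? < temp_max "
        "AND ? >= hum_min AND ? < hum_max LIMIT 1",
        (temp, temp, hum, hum)).fetchone()
    return row[0] if row else None
```

Because the rules are assumed mutually consistent, at most one rule fires for a given observation, so a single query suffices for classification.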
Open Access Article
Risk Perceptions on Social Media Use in Norway
Future Internet 2020, 12(12), 211; https://doi.org/10.3390/fi12120211 - 26 Nov 2020
Viewed by 352
Abstract
Social media are becoming more and more ingrained in everybody’s lives. With people’s more substantial presence on social media, threat actors exploit the platforms and the information people share there to deploy and execute various types of attacks. This paper focuses on the Norwegian population, exploring how people perceive the risks arising from the use of social media, focusing on the analysis of specific indicators such as age, sex, and differences among users of distinct social media platforms. For data collection, a questionnaire was structured and deployed to the users of multiple social media platforms (total n = 329). The analysis compares the risk perceptions of users of the social media platforms Facebook (n = 288), Twitter (n = 134), Reddit (n = 189), and Snapchat (n = 267). Furthermore, the paper analyses the differences between the sexes and between digital natives and non-natives. Our sample also includes victims of ID theft (n = 50), and we analyse how account compromise occurs and how suffering ID theft changes behaviour and perception. The results show significant discrepancies in risk perception among social media platform users across the examined indicators, as well as explicit variations in how these affect the associated usage patterns. Based on the results, we propose a generic risk ranking of social media platforms, activities, and sharing, and a threat model for SoMe users. The results show the lack of a unified perception of risk on social media, indicating the need for targeted security awareness enhancement mechanisms focusing on this topic. Full article
(This article belongs to the Special Issue Security and Privacy in Social Networks and Solutions)
Open Access Article
Citizen Science on Twitter: Using Data Analytics to Understand Conversations and Networks
Future Internet 2020, 12(12), 210; https://doi.org/10.3390/fi12120210 - 26 Nov 2020
Viewed by 524
Abstract
This paper presents a long-term study on how the public engage with discussions around citizen science and crowdsourcing topics. With progress in sensor technologies and IoT, our cities and neighbourhoods are increasingly sensed, measured and observed. While such data are often used to inform citizen science projects, it is still difficult to understand how citizens and communities discuss citizen science activities and engage with citizen science projects. Understanding these engagements in greater depth will provide citizen scientists, project owners, practitioners and the general public with insights into how social media can be used to share citizen-science-related topics, particularly to help increase visibility, influence change and raise awareness in general. To the authors’ knowledge, this is the first large-scale study of how such information is discussed on Twitter, particularly outside the scope of individual projects. The paper reports on the wide variety of topics (e.g., politics, news, ecological observations) being discussed on social media, the wide variety of network types, and the varied roles played by users in sharing information on Twitter. Based on these findings, the paper highlights recommendations for stakeholders engaging with citizen science topics. Full article
(This article belongs to the Special Issue AI and IoT technologies in Smart Cities)
Open Access Article
User Acceptance of Information Feed Advertising: A Hybrid Method Based on SEM and QCA
Future Internet 2020, 12(12), 209; https://doi.org/10.3390/fi12120209 - 26 Nov 2020
Viewed by 381
Abstract
It is of great significance for enterprises’ development to effectively use the mobile Internet to carry out information feed advertising. This paper studies the factors influencing users’ acceptance intention of information feed advertising, and their effects, through empirical analysis, in order to provide references for further optimizing information feed advertising strategies. Traditional quantitative analysis methods, such as the Structural Equation Model (SEM), can only measure a single factor’s influence from an individual perspective. Therefore, we introduce Qualitative Comparative Analysis (QCA) and present a two-stage hybrid method based on SEM and QCA. In the first stage, we analyze the influence of a single variable on the acceptance intention of information feed advertising with SEM. Then, in the second stage, we analyze the impact of different variable combinations with QCA. Based on actual questionnaire data, we define six independent variables and use AMOS, SPSS, and fsQCA to carry out the SEM analysis and the fuzzy-set-based QCA analysis, respectively. The SEM results show that four factors (consistency, informativeness, sociability, and advertising reward) have a significant positive impact on user acceptance of information feed advertising, whereas perceived advertising clustering has a significant negative impact, and accuracy has no considerable effect. The QCA analysis reveals that seven configurations of the six variables can all significantly enhance the acceptance intention of information feed ads. Full article
(This article belongs to the Special Issue Network Economics and Utility Maximization)
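The fuzzy-set QCA stage rests on set-theoretic consistency. A minimal sketch of Ragin's sufficiency consistency measure, computed over hypothetical membership scores rather than the paper's survey data:

```python
def fsqca_consistency(x, y):
    """Consistency of 'configuration X is sufficient for outcome Y',
    where x and y are fuzzy-set memberships in [0, 1] per respondent:
    sum(min(x_i, y_i)) / sum(x_i)."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

# Hypothetical scores: configuration membership vs. acceptance intention
x = [0.9, 0.7, 0.4, 0.8]
y = [0.8, 0.9, 0.3, 0.9]
c = fsqca_consistency(x, y)
```

Configurations whose consistency exceeds a chosen threshold (0.8 is a conventional cutoff) are treated as sufficient for the outcome.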
Open Access Review
A Review on Blockchain for the Internet of Medical Things: Definitions, Challenges, Applications, and Vision
Future Internet 2020, 12(12), 208; https://doi.org/10.3390/fi12120208 - 25 Nov 2020
Viewed by 612
Abstract
Nowadays, many new mobile devices have the potential to assist healthcare professionals in their work and to help increase people’s well-being. These devices comprise the Internet of Medical Things, but it is generally difficult for healthcare institutions to efficiently bring their systems into compliance with new medical solutions. A technology that promises the sharing of data in a trust-less scenario is Distributed Ledger Technology, through its properties of decentralization, immutability, and transparency. Blockchain and the Internet of Medical Things can be considered to be at an early stage, and implementations that successfully apply the technology are still few. Some aspects covered by these implementations are data sharing, interoperability of systems, security of devices, the opportunity for data monetization, and data ownership, which is the focus of this review. This work aims at giving an overview of the current state of the art of Blockchain-based systems for the Internet of Medical Things, specifically addressing the challenges of reaching user-centricity for these combined systems, and thus highlighting potential future directions to follow toward full ownership of data by users. Full article
(This article belongs to the Special Issue The Future Internet of Medical Things)