
Table of Contents

Future Internet, Volume 11, Issue 5 (May 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: The architectural semantics of information-centric networking bring in interesting features with regard to mobility management (see "Guidelines towards Information-Driven Mobility Management" in this issue).
Displaying articles 1-19
Open Access Article
Characteristics of Cyberstalking Behavior, Consequences, and Coping Strategies: A Cross-Sectional Study in a Sample of Italian University Students
Future Internet 2019, 11(5), 120; https://doi.org/10.3390/fi11050120
Received: 8 April 2019 / Revised: 15 May 2019 / Accepted: 22 May 2019 / Published: 22 May 2019
Viewed by 383 | PDF Full-text (269 KB) | HTML Full-text | XML Full-text
Abstract
Aims: The aim of this study was to compare victims of one type of cyberstalking (OneType) with victims of more than one type of cyberstalking (MoreType) regarding (1) the impact of cyberstalking and (2) attitudes related to telling someone about the experience of cyberstalking and the coping strategies used by victims. Methods: A self-administered questionnaire was distributed to over 250 students at the University of Torino. Results: About half of the participants experienced at least one incident of cyberstalking. Among them, more than half experienced more than one type of cyberstalking. Victims suffered from depression more than those who had never experienced cyberstalking. No statistically significant difference emerged for anxiety. The coping strategies used by MoreType were more varied than those used by OneType victims of cyberstalking. Moreover, MoreType victims told someone about their victimization more than OneType victims. Conclusion: The work presented suggests implications for health care professionals, police officers, and government. For example, our suggestion is to pay attention to cyberstalking victims and provide flyers in schools, universities, and cafeterias that explain the risk of certain online behaviors and their consequences in physical and emotional spheres. Full article
(This article belongs to the Section Techno-Social Smart Systems)
Open Access Article
The Next Generation Platform as a Service: Composition and Deployment of Platforms and Services
Future Internet 2019, 11(5), 119; https://doi.org/10.3390/fi11050119
Received: 9 April 2019 / Revised: 15 May 2019 / Accepted: 17 May 2019 / Published: 21 May 2019
Viewed by 595 | PDF Full-text (4160 KB) | HTML Full-text | XML Full-text
Abstract
The emergence of widespread cloudification and virtualisation promises increased flexibility, scalability, and programmability for the deployment of services by Vertical Service Providers (VSPs). This cloudification also improves service and network management, reducing Capital and Operational Expenses (CAPEX, OPEX). A truly cloud-native approach is essential, since 5G will provide a diverse range of services, many requiring stringent performance guarantees, while maximising flexibility and agility despite the technological diversity. This paper proposes a workflow based on the principles of build-to-order, Build-Ship-Run, and automation, following the Next Generation Platform as a Service (NGPaaS) vision. Through the concept of Reusable Functional Blocks (RFBs), an enhancement to Virtual Network Functions, this methodology allows a VSP to deploy and manage platforms and services, agnostic to the underlying technologies, protocols, and APIs. To validate the proposed workflow, a use case is also presented herein, which illustrates both the deployment of the underlying platform by the Telco operator and of the services that run on top of it. In this use case, the NGPaaS operator enables a VSP to provide Virtual Network Function as a Service (VNFaaS) capabilities for its end customers. Full article
Open Access Article
Intelligent Dynamic Data Offloading in a Competitive Mobile Edge Computing Market
Future Internet 2019, 11(5), 118; https://doi.org/10.3390/fi11050118
Received: 12 April 2019 / Revised: 10 May 2019 / Accepted: 13 May 2019 / Published: 21 May 2019
Viewed by 440 | PDF Full-text (1442 KB) | HTML Full-text | XML Full-text
Abstract
Software Defined Networks (SDN) and Mobile Edge Computing (MEC), capable of dynamically managing and satisfying end-users' computing demands, have emerged as key enabling technologies of 5G networks. In this paper, the joint problem of MEC server selection by the end-users, their optimal data offloading, and the optimal price setting by the MEC servers is studied in an environment with multiple MEC servers and multiple end-users. The flexibility and programmability offered by the SDN technology enable the realistic implementation of the proposed framework. Initially, an SDN controller executes a reinforcement learning framework based on the theory of stochastic learning automata to enable the end-users to select an MEC server to offload their data. The discount offered by the MEC server, its congestion and its penetration in terms of serving end-users' computing tasks, and its announced pricing for its computing services are considered in the overall MEC selection process. To determine the end-users' data offloading portion to the selected MEC server, a non-cooperative game among the end-users of each server is formulated, and the existence and uniqueness of the corresponding Nash Equilibrium are shown. An optimization problem of maximizing the MEC servers' profit is formulated and solved to determine the MEC servers' optimal pricing with respect to their offered computing services and the received offloaded data. To realize the proposed framework, an iterative and low-complexity algorithm is introduced and designed. The performance of the proposed approach was evaluated through modeling and simulation under several scenarios, with both homogeneous and heterogeneous end-users. Full article
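The server-selection step described in the abstract rests on stochastic learning automata. As a rough illustration only, a Linear Reward-Inaction (L_RI) automaton can be sketched as below; the reward values, step size, and server count are invented for the example and do not come from the paper:

```python
import random

def lri_select(rewards, b=0.1, rounds=2000, seed=42):
    """Linear Reward-Inaction (L_RI) learning automaton: keep a probability
    vector over MEC servers and, whenever the chosen server yields a
    (normalized) reward, shift probability mass toward it."""
    rng = random.Random(seed)
    n = len(rewards)
    p = [1.0 / n] * n                              # start uniform
    for _ in range(rounds):
        i = rng.choices(range(n), weights=p)[0]    # sample a server
        r = rewards[i]                             # reward in [0, 1]
        for j in range(n):                         # L_RI update (sum stays 1)
            p[j] = p[j] + b * r * (1.0 - p[j]) if j == i else p[j] - b * r * p[j]
    return p

# Invented normalized rewards for three servers; over many rounds the
# automaton concentrates its selection probability on a single server.
probs = lri_select(rewards=[0.2, 0.9, 0.4])
```

Rewards combining discount, congestion, penetration, and price, as the abstract lists, would replace the fixed `rewards` vector in a fuller model.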
Open Access Article
Enhancing IoT Data Dependability through a Blockchain Mirror Model
Future Internet 2019, 11(5), 117; https://doi.org/10.3390/fi11050117
Received: 28 March 2019 / Revised: 14 May 2019 / Accepted: 15 May 2019 / Published: 21 May 2019
Viewed by 495 | PDF Full-text (2487 KB) | HTML Full-text | XML Full-text
Abstract
The Internet of Things (IoT) is a remarkable data producer, and these data may be used to prevent or detect security vulnerabilities and increase productivity through the adoption of statistical and Artificial Intelligence (AI) techniques. However, these desirable benefits are gained only if data from IoT networks are dependable, and this is where blockchain comes into play. In fact, through blockchain, critical IoT data may be trusted, i.e., considered valid for any subsequent processing. A simple formal model named "the Mirror Model" is proposed to connect IoT data organized in traditional models to assets of trust in a blockchain. The Mirror Model sets some formal conditions to produce trusted data that remain trusted over time. A possible practical implementation of an application programming interface (API) is proposed, which keeps the data and the trust model in sync. Finally, it is noted that the Mirror Model enforces a top-down approach from reality to implementation, instead of going the opposite way, as is now the practice when referring to blockchain and the IoT. Full article
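An API that keeps data and a trust model in sync, as the abstract describes, could plausibly be sketched as follows. The class and method names are ours, not the paper's, and a Python dict and list stand in for the traditional data model and the blockchain:

```python
import hashlib
import json

class BlockchainMirror:
    """Toy sketch of a mirror-style API: every IoT record written to
    conventional storage is also anchored as a digest on an append-only
    ledger, so later reads can be checked against the trusted mirror."""

    def __init__(self):
        self.storage = {}   # traditional data model (e.g., a database row)
        self.ledger = []    # stand-in for blockchain transactions

    @staticmethod
    def _digest(record):
        # canonical serialization so equal records hash identically
        return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

    def write(self, key, record):
        self.storage[key] = record
        self.ledger.append((key, self._digest(record)))  # "on-chain" anchor

    def verify(self, key):
        # data is dependable only if it still matches its mirrored digest
        expected = next(d for k, d in reversed(self.ledger) if k == key)
        return self._digest(self.storage[key]) == expected

mirror = BlockchainMirror()
mirror.write("sensor-42", {"temp": 21.5, "ts": 1700000000})
ok_before = mirror.verify("sensor-42")
mirror.storage["sensor-42"]["temp"] = 99.9   # simulate tampering in storage
ok_after = mirror.verify("sensor-42")
```

A real deployment would replace the in-memory ledger with actual blockchain transactions; the synchronization contract between the two writes is the point being illustrated.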
(This article belongs to the Special Issue 10th Anniversary Feature Papers)
Open Access Article
Identity-as-a-Service: An Adaptive Security Infrastructure and Privacy-Preserving User Identity for the Cloud Environment
Future Internet 2019, 11(5), 116; https://doi.org/10.3390/fi11050116
Received: 12 March 2019 / Revised: 5 May 2019 / Accepted: 8 May 2019 / Published: 15 May 2019
Viewed by 543 | PDF Full-text (6074 KB) | HTML Full-text | XML Full-text
Abstract
In recent years, enterprise applications have begun to migrate from local hosting to cloud providers and may have established business-to-business relationships with each other manually. Adapting existing applications requires substantial implementation changes in individual architectural components. On the other hand, users may store their Personal Identifiable Information (PII) in the cloud environment so that cloud services may access and use it on demand. Even if cloud services specify their privacy policies, we cannot guarantee that they follow their policies and will not (accidentally) transfer PII to another party. In this paper, we present Identity-as-a-Service (IDaaS) as a trusted Identity and Access Management service with two requirements: Firstly, IDaaS adapts trust between cloud services on demand. We move the trust relationship and identity propagation out of the application implementation and model them as a security topology. When the business comes up with a new e-commerce scenario, IDaaS uses the security topology to adapt a platform-specific security infrastructure for the given business scenario at runtime. Secondly, we protect the confidentiality of PII in federated security domains. We propose Purpose-based Encryption to protect the disclosure of PII from intermediary entities in a business transaction and from untrusted hosts. Our solution is compliant with the General Data Protection Regulation and involves the least user interaction, to prevent identity theft via the human link. The implementation can be easily adapted to existing Identity Management systems, and its performance is fast. Full article
(This article belongs to the Special Issue Security and Privacy in Information and Communication Systems)
Open Access Article
Convolutional Two-Stream Network Using Multi-Facial Feature Fusion for Driver Fatigue Detection
Future Internet 2019, 11(5), 115; https://doi.org/10.3390/fi11050115
Received: 23 February 2019 / Revised: 19 April 2019 / Accepted: 29 April 2019 / Published: 14 May 2019
Viewed by 729 | PDF Full-text (3044 KB) | HTML Full-text | XML Full-text
Abstract
Road traffic accidents caused by fatigue driving are a common cause of human casualties. In this paper, we present a driver fatigue detection algorithm using two-stream network models with multi-facial features. The algorithm consists of four parts: (1) positioning the mouth and eyes with multi-task cascaded convolutional neural networks (MTCNNs); (2) extracting static features from a partial facial image; (3) extracting dynamic features from partial facial optical flow; (4) combining both static and dynamic features using a two-stream neural network to make the classification. The main contribution of this paper is the combination of a two-stream network and multi-facial features for driver fatigue detection. Two-stream networks can combine static and dynamic image information, while partial facial images as network inputs can focus on fatigue-related information, which brings better performance. Moreover, we applied gamma correction to enhance image contrast, which helps our method achieve better results, reflected in a 2% accuracy increase in night environments. Finally, an accuracy of 97.06% was achieved on the National Tsing Hua University Driver Drowsiness Detection (NTHU-DDD) dataset. Full article
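The gamma-correction preprocessing mentioned in the abstract is a standard pixel-wise transform: each 8-bit intensity v maps to 255 * (v/255)^(1/gamma). A small self-contained sketch (the gamma value 1.5 and sample pixels are illustrative, not the paper's settings):

```python
def gamma_correct(pixels, gamma=1.5):
    """Gamma correction to boost contrast in dark (e.g., night-time)
    frames: gamma > 1 brightens low intensities disproportionately."""
    inv = 1.0 / gamma
    # precompute a 256-entry lookup table, as is usual for per-pixel maps
    table = [round(255 * (v / 255) ** inv) for v in range(256)]
    return [table[v] for v in pixels]

dark_row = [0, 10, 40, 90, 255]       # one row of a dark image
brightened = gamma_correct(dark_row)  # -> [0, 29, 74, 127, 255]
```

Note how the mid-range values are lifted the most while 0 and 255 are fixed points, which is why the transform helps in low-light frames.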
(This article belongs to the Special Issue Special Issue on the Future of Intelligent Human-Computer Interface)
Open Access Article
Word Sense Disambiguation Using Cosine Similarity Collaborates with Word2vec and WordNet
Future Internet 2019, 11(5), 114; https://doi.org/10.3390/fi11050114
Received: 14 April 2019 / Revised: 27 April 2019 / Accepted: 10 May 2019 / Published: 12 May 2019
Viewed by 548 | PDF Full-text (994 KB) | HTML Full-text | XML Full-text
Abstract
Words have different meanings (i.e., senses) depending on the context. Disambiguating the correct sense is an important and challenging task for natural language processing. An intuitive approach is to select the sense whose definition has the highest similarity to the context, with sense definitions provided by WordNet, a large lexical database of English. In this database, nouns, verbs, adjectives, and adverbs are grouped into sets of cognitive synonyms interlinked through conceptual semantics and lexical relations. Traditional unsupervised approaches compute similarity by counting overlapping words between the context and sense definitions, which must match exactly. Instead, similarity should be computed based on how words are related, by representing the context and sense definitions in a vector space model and analyzing the distributional semantic relationships among them using latent semantic analysis (LSA). However, as a corpus of text becomes more massive, LSA consumes much more memory and is not flexible enough to train on a huge corpus. A word-embedding approach has an advantage here. Word2vec is a popular word-embedding approach that represents words in a fixed-size vector space model through either the skip-gram or continuous bag-of-words (CBOW) model. Word2vec also captures semantic and syntactic word similarities from a huge corpus of text more effectively than LSA. Our method uses Word2vec to construct a context-sentence vector and sense-definition vectors, and then scores each word sense by the cosine similarity between those sentence vectors. The sense definitions are also expanded with sense relations retrieved from WordNet. If the score is not higher than a specific threshold, it is combined with the probability of that sense's distribution learned from SEMCOR, a large sense-tagged corpus. The possible answer senses are those with high scores. Our method shows that the result (50.9%, or 48.7% without the probability of sense distribution) is higher than the baselines (i.e., the original, simplified, adapted, and LSA Lesk) and outperforms many unsupervised systems participating in the SENSEVAL-3 English lexical sample task. Full article
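The scoring step the abstract describes, cosine similarity between an averaged context-sentence vector and averaged sense-definition vectors, can be sketched with toy data. The 3-dimensional vectors and the two "bank" glosses below are invented stand-ins for a trained Word2vec model and WordNet definitions:

```python
import math

# Toy embeddings standing in for Word2vec output (invented values).
EMB = {
    "money":   [0.9, 0.1, 0.0], "deposit": [0.8, 0.2, 0.1],
    "cash":    [0.9, 0.2, 0.0], "river":   [0.0, 0.9, 0.2],
    "water":   [0.1, 0.8, 0.3], "shore":   [0.1, 0.9, 0.1],
    "account": [0.7, 0.1, 0.2],
}

def sent_vec(words):
    """Average the embeddings of known words: a simple sentence vector."""
    vecs = [EMB[w] for w in words if w in EMB]
    return [sum(c) / len(vecs) for c in zip(*vecs)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

senses = {  # gloss words, as if pulled from WordNet sense definitions
    "bank.financial": ["money", "deposit", "account"],
    "bank.river":     ["river", "water", "shore"],
}
context = ["deposit", "cash", "account"]  # "... opened a bank account ..."
ctx = sent_vec(context)
best_sense = max(senses, key=lambda s: cosine(ctx, sent_vec(senses[s])))
```

In the paper's full pipeline the glosses are additionally expanded with WordNet relations, and low scores are backed off to SEMCOR sense frequencies; this sketch shows only the cosine-scoring core.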
(This article belongs to the Special Issue Big Data Analytics and Artificial Intelligence)
Open Access Article
Evaluating Forwarding Protocols in Opportunistic Networks: Trends, Advances, Challenges and Best Practices
Future Internet 2019, 11(5), 113; https://doi.org/10.3390/fi11050113
Received: 29 March 2019 / Revised: 2 May 2019 / Accepted: 4 May 2019 / Published: 11 May 2019
Viewed by 544 | PDF Full-text (801 KB) | HTML Full-text | XML Full-text
Abstract
A variety of applications and forwarding protocols have been proposed for opportunistic networks (OppNets) in the literature. However, the methodologies for evaluating, testing, and comparing these forwarding protocols are not yet standardized, which leads to large levels of ambiguity in performance evaluation studies. Performance results depend largely on the evaluation environment and on the parameters and models used. More comparable evaluation scenarios and methodologies would also largely improve the availability of protocols and the repeatability of studies, and thus would accelerate the development of this research topic. In this survey paper, we focus our attention on how various OppNets data forwarding protocols are evaluated rather than on what they actually achieve. We explore the models, parameters, and evaluation environments and make observations about their scalability, realism, and comparability. Finally, we deduce some best practices for achieving the largest impact in future evaluation studies of OppNets data dissemination/forwarding protocols. Full article
Open Access Article
Substitute Seed Nodes Mining Algorithms for Influence Maximization in Multi-Social Networks
Future Internet 2019, 11(5), 112; https://doi.org/10.3390/fi11050112
Received: 15 March 2019 / Revised: 10 April 2019 / Accepted: 5 May 2019 / Published: 10 May 2019
Viewed by 457 | PDF Full-text (2220 KB) | HTML Full-text | XML Full-text
Abstract
Due to the growing interconnections of social networks, the problem of influence maximization has been extended from a single social network to multiple social networks. However, a critical challenge of influence maximization in multi-social networks is that some initial seed nodes may fail to become active, which obviously lowers the performance of influence spreading. Therefore, finding substitute nodes to mitigate the influence loss caused by uncooperative nodes is extremely helpful for influence maximization. In this paper, we propose three substitute mining algorithms for influence maximization in multi-social networks, namely a greedy-based substitute mining algorithm, a pre-selected-based substitute mining algorithm, and a similar-users-based substitute mining algorithm. The simulation results demonstrate that the existence of uncooperative seed nodes reduces the range of information influence. Furthermore, the viability and performance of the proposed algorithms are presented, showing that the three substitute node mining algorithms can find suitable substitute nodes for influence maximization in multi-social networks, thus achieving better influence spread. Full article
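A greedy substitute-mining step of the kind described can be sketched with a Monte-Carlo Independent Cascade simulation on a single toy graph (the multi-network aspect is omitted). The graph, probabilities, and function names below are our illustrative assumptions, not the paper's algorithms:

```python
import random

def ic_spread(graph, seeds, p=0.3, runs=300, seed=7):
    """Monte-Carlo estimate of influence spread under the Independent
    Cascade model: each newly activated node gets one chance to activate
    each inactive neighbour with probability p."""
    rng = random.Random(seed)
    total = 0
    for _ in range(runs):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, []):
                    if v not in active and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / runs

def substitute(graph, seeds, failed, candidates, **kw):
    """Greedy substitute mining: replace an uncooperative seed with the
    candidate giving the largest estimated spread."""
    base = [s for s in seeds if s != failed]
    return max(candidates, key=lambda c: ic_spread(graph, base + [c], **kw))

# Tiny network; node "a" bridges far more nodes than "x", so it should
# be preferred as a substitute when seed "s1" refuses to cooperate.
graph = {
    "s1": ["a", "x"], "a": ["b", "c", "d"], "b": ["c"], "c": ["d"],
    "x": ["y"], "y": [],
}
sub = substitute(graph, seeds=["s1"], failed="s1", candidates=["a", "x"])
```

The pre-selected and similar-users variants in the paper differ in how the candidate set is formed, not in this marginal-spread evaluation step.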
(This article belongs to the Section Techno-Social Smart Systems)
Open Access Article
Guidelines towards Information-Driven Mobility Management
Future Internet 2019, 11(5), 111; https://doi.org/10.3390/fi11050111
Received: 23 January 2019 / Revised: 2 May 2019 / Accepted: 3 May 2019 / Published: 10 May 2019
Viewed by 619 | PDF Full-text (644 KB) | HTML Full-text | XML Full-text
Abstract
The architectural semantics of Information-Centric Networking bring in interesting features with regard to mobility management: Information-Centric Networking is content-oriented, connection-less, and receiver-driven. Despite such intrinsic advantages, support for node movement is still based on the principles of IP solutions. IP-based solutions are, however, host-oriented, whereas Information-Centric Networking paradigms are information-oriented. By following IP mobility management principles, some of the natural mobility support advantages of Information-Centric Networking are not being adequately explored. This paper contributes an overview of how Information-Centric Networking paradigms handle mobility management today, highlighting current challenges and proposing a set of design guidelines to overcome them, thus steering a vision towards a content-centric mobility management approach. Full article
(This article belongs to the Special Issue Information-Centric Networking (ICN))
Open Access Article
A Yielding Protocol that Uses Inter-Vehicle Communication to Improve the Traffic of Vehicles on a Low-Priority Road at an Unsignalized Intersection
Future Internet 2019, 11(5), 110; https://doi.org/10.3390/fi11050110
Received: 10 April 2019 / Revised: 5 May 2019 / Accepted: 8 May 2019 / Published: 9 May 2019
Viewed by 528 | PDF Full-text (10702 KB) | HTML Full-text | XML Full-text
Abstract
Self-driven vehicles are being actively developed. When widespread, they will help reduce the number of traffic accidents and ease traffic congestion. They will coexist with human-driven vehicles for years to come. If there is a mismatch between human drivers' operations and the judgments of self-driven vehicles, congestion may arise at an unsignalized intersection, in particular where roads are prioritized. Vehicles on the low-priority road attempting to cross, or turn onto, the priority road can significantly reduce the traffic flow. We have proposed a yielding protocol to deal with this problem and evaluated it using a simulation focused on traffic flow efficiency at an intersection. In the simulation, we varied the number of vehicles coming into the roads and the percentage of self-driven vehicles, and confirmed that the proposed yielding protocol could improve the traffic flow of vehicles on the low-priority road. Full article
(This article belongs to the Special Issue Advances in Internet of Vehicles (IoV))
Open Access Article
Novel Approach to Task Scheduling and Load Balancing Using the Dominant Sequence Clustering and Mean Shift Clustering Algorithms
Future Internet 2019, 11(5), 109; https://doi.org/10.3390/fi11050109
Received: 21 February 2019 / Revised: 27 March 2019 / Accepted: 3 May 2019 / Published: 8 May 2019
Viewed by 537 | PDF Full-text (1988 KB) | HTML Full-text | XML Full-text
Abstract
Cloud computing (CC) is fast-growing and frequently adopted in information technology (IT) environments due to the benefits it offers. Task scheduling and load balancing are amongst the hot topics in the realm of CC. To overcome the shortcomings of existing task scheduling and load balancing approaches, we propose a novel approach that uses dominant sequence clustering (DSC) for task scheduling and a weighted least connection (WLC) algorithm for load balancing. First, users' tasks are clustered using the DSC algorithm, which represents user tasks as a graph of one or more clusters. After task clustering, each task is ranked using the Modified Heterogeneous Earliest Finish Time (MHEFT) algorithm, where the highest-priority task is scheduled first. Afterwards, virtual machines (VMs) are clustered using a mean shift clustering (MSC) algorithm with kernel functions. Load balancing is subsequently performed using the WLC algorithm, which distributes the load based on server weight and capacity as well as client connectivity to the server. A highly weighted or least-connected server is selected for task allocation, which in turn improves the response time. Finally, we evaluate the proposed architecture using metrics such as response time, makespan, resource utilization, and service reliability. Full article
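The weighted least connection (WLC) rule itself is compact: the next task goes to the server with the smallest ratio of active connections to weight. A minimal sketch with invented server data (names and numbers are ours, not the paper's):

```python
def pick_server(servers):
    """Weighted Least Connection (WLC): choose the server with the
    smallest active-connections-to-weight ratio, so a high-capacity
    (high-weight) or lightly loaded server receives the next task."""
    return min(servers, key=lambda s: s["conns"] / s["weight"])

servers = [
    {"name": "vm-1", "weight": 1, "conns": 3},   # ratio 3.0
    {"name": "vm-2", "weight": 4, "conns": 8},   # ratio 2.0, the minimum
    {"name": "vm-3", "weight": 2, "conns": 5},   # ratio 2.5
]
chosen = pick_server(servers)
chosen["conns"] += 1   # the dispatched task now counts as a connection
```

Updating the connection count after each dispatch is what lets the ratio self-balance over successive assignments.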
(This article belongs to the Special Issue Cloud Computing and Internet of Things)
Open Access Article
A Lightweight Elliptic-Elgamal-Based Authentication Scheme for Secure Device-to-Device Communication
Future Internet 2019, 11(5), 108; https://doi.org/10.3390/fi11050108
Received: 13 March 2019 / Revised: 16 April 2019 / Accepted: 26 April 2019 / Published: 7 May 2019
Viewed by 492 | PDF Full-text (1418 KB) | HTML Full-text | XML Full-text
Abstract
Device-to-Device (D2D) communication is a major part of 5G that will facilitate deployments with extended coverage, where devices can act as users or relays. These relays normally act as decode-and-forward relays (semi-intelligent devices) with limited computational and storage capabilities. However, introducing such a technology, where users can act as relays, presents a wide range of security threats, in particular rogue relay devices and man-in-the-middle (MITM) attacks. Moreover, passing fewer control messages is always advisable when considering authenticity and secrecy. To mitigate MITM attacks and to reduce communication costs, this paper presents a lightweight elliptic-ElGamal-based authentication scheme using PKI (FHEEP) for D2D communication. Pollard's rho and Baby-Step Giant-Step (BSGS) methods are used to evaluate the authenticity and secrecy of the proposed scheme. The communication cost is calculated based on a comparative analysis, indicating that our proposed scheme outperforms the baseline protocol. The proposed scheme can be used with any infrastructure architecture, enhancing the security of any D2D setting with better performance. Full article
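Elliptic-curve ElGamal, the primitive such a scheme builds on, can be illustrated on a classroom curve. The sketch below uses y^2 = x^3 + 2x + 2 over GF(17) with generator (5, 1) of prime order 19; these parameters are hopelessly small, chosen only to show the mechanics, and are not FHEEP's actual parameters:

```python
# Toy elliptic-curve ElGamal (illustrative parameters only).
P, A = 17, 2
G, ORDER = (5, 1), 19
INF = None  # point at infinity (group identity)

def add(p1, p2):
    if p1 is INF: return p2
    if p2 is INF: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return INF                      # inverse points sum to identity
    if p1 == p2:                        # tangent (doubling) slope
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:                               # chord slope
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def mul(k, pt):                         # double-and-add scalar multiply
    out = INF
    while k:
        if k & 1:
            out = add(out, pt)
        pt, k = add(pt, pt), k >> 1
    return out

def neg(pt):
    return INF if pt is INF else (pt[0], (-pt[1]) % P)

d = 7                                   # receiver's private key
Q = mul(d, G)                           # receiver's public key
M = mul(3, G)                           # plaintext encoded as a curve point
k = 3                                   # sender's ephemeral secret
C1, C2 = mul(k, G), add(M, mul(k, Q))   # ElGamal ciphertext pair
recovered = add(C2, neg(mul(d, C1)))    # M = C2 - d*C1
```

Breaking the scheme means solving the discrete logarithm d from Q = d*G, which is exactly what the Pollard's rho and BSGS evaluations in the abstract attack; at this toy size they succeed instantly, at real key sizes they do not.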
(This article belongs to the Section Internet of Things)
Open Access Article
An Extensible Automated Failure Localization Framework Using NetKAT, Felix, and SDN Traceroute
Future Internet 2019, 11(5), 107; https://doi.org/10.3390/fi11050107
Received: 9 April 2019 / Revised: 30 April 2019 / Accepted: 1 May 2019 / Published: 4 May 2019
Viewed by 581 | PDF Full-text (417 KB) | HTML Full-text | XML Full-text
Abstract
Designing, implementing, and maintaining network policies that protect from internal and external threats is a highly non-trivial task. Often, troubleshooting networks consisting of diverse entities realizing complex policies is even harder. Software-defined networking (SDN) enables networks to adapt to changing scenarios, which significantly lessens the human effort required for constant manual modification of device configurations. Troubleshooting benefits from SDN's way of accessing forwarding devices as well, since monitoring is made much easier via unified control channels. However, by making policy changes easier, the job of troubleshooting operators is also made harder: for humans, finding, analyzing, and fixing network issues becomes almost intractable. In this paper, we present a failure localization framework and its proof-of-concept prototype that help automate the investigation of network issues. Acting like a controller for troubleshooting tools, our framework integrates formal specification (expected behavior) and network monitoring (actual behavior), and automatically gives hints about the location and type of network issues by comparing the two types of information. By using NetKAT (Kleene algebra with tests) for formal specification, and Felix and SDN traceroute for network monitoring, we show that the integration of these tools in a single framework can significantly ease the network troubleshooting process. Full article
Open Access Feature Paper Article
Dynamic Lognormal Shadowing Framework for the Performance Evaluation of Next Generation Cellular Systems
Future Internet 2019, 11(5), 106; https://doi.org/10.3390/fi11050106
Received: 22 March 2019 / Revised: 19 April 2019 / Accepted: 29 April 2019 / Published: 2 May 2019
Viewed by 585 | PDF Full-text (1679 KB) | HTML Full-text | XML Full-text
Abstract
Performance evaluation tools for wireless cellular systems are very important for the establishment and testing of future internet applications. As the complexity of wireless networks keeps growing, wireless connectivity becomes the most critical requirement in a variety of applications, including environments and paradigms that are complex and unfavorable from a propagation point of view. Nowadays, with the upcoming 5G cellular networks, the development of realistic and more accurate channel model frameworks has become more important, since new frequency bands are used and new architectures are employed. Large-scale fading, also known as shadowing, refers to the variations of the received signal mainly caused by obstructions that significantly affect the available signal power at a receiver's position. Although the variability of shadowing is considered mostly spatial for a given propagation environment, moving obstructions may significantly impact the received signal's strength, especially in dense environments, inducing a temporal variability even for fixed users. In this paper, we present, for the case of lognormal shadowing, a novel engineering model based on stochastic differential equations that captures not only the spatial correlation structure of shadowing but also its temporal dynamics. Based on the proposed spatio-temporal shadowing field, we present a computationally efficient model for the dynamics of shadowing experienced by stationary or mobile users. We also present new analytical results for the average outage duration and hand-offs based on multi-dimensional level crossings. Numerical results are also presented for the validation of the model, and some important conclusions are drawn. Full article
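A temporally correlated shadowing track of the kind described, lognormal (i.e., Gaussian in dB) with exponential Gudmundson-style correlation, is commonly simulated as an Ornstein-Uhlenbeck SDE discretized by Euler-Maruyama. The sketch below is a generic illustration of that idea; all parameter values are invented, not the paper's:

```python
import math
import random

def shadowing_track(n_steps, dt=0.1, d_corr=20.0, sigma_db=6.0, seed=1):
    """Temporally correlated shadowing in dB: Ornstein-Uhlenbeck process
    dS = -theta*S dt + sigma_db*sqrt(2*theta) dW with theta = 1/d_corr,
    so the autocorrelation decays exponentially (Gudmundson-style) and
    the stationary standard deviation is sigma_db."""
    rng = random.Random(seed)
    theta = 1.0 / d_corr                       # mean-reversion rate
    s = rng.gauss(0.0, sigma_db)               # start in the stationary law
    track = [s]
    for _ in range(n_steps):
        noise = sigma_db * math.sqrt(2.0 * theta * dt) * rng.gauss(0.0, 1.0)
        s += -theta * s * dt + noise           # Euler-Maruyama step
        track.append(s)
    return track

track = shadowing_track(5000, d_corr=5.0)
mean_db = sum(track) / len(track)
```

Adding the resulting dB track to a path-loss mean and exponentiating yields the lognormal received-power samples a link-level simulator would use.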
(This article belongs to the Special Issue 10th Anniversary Feature Papers)
Open AccessArticle
Combining Facial Expressions and Electroencephalography to Enhance Emotion Recognition
Future Internet 2019, 11(5), 105; https://doi.org/10.3390/fi11050105
Received: 28 February 2019 / Revised: 10 April 2019 / Accepted: 16 April 2019 / Published: 2 May 2019
Viewed by 639 | PDF Full-text (3376 KB) | HTML Full-text | XML Full-text
Abstract
Emotion recognition plays an essential role in human–computer interaction. Previous studies have investigated the use of facial expression and electroencephalogram (EEG) signals from a single modality for emotion recognition separately, but few have paid attention to a fusion between them. In this paper, we adopted a multimodal emotion recognition framework combining facial expression and EEG, based on a valence-arousal emotional model. For facial expression detection, we followed a transfer learning approach with multi-task convolutional neural network (CNN) architectures to detect the states of valence and arousal. For EEG detection, the two learning targets (valence and arousal) were detected by separate support vector machine (SVM) classifiers. Finally, two decision-level fusion methods, based on an enumerated-weight rule or an adaptive boosting technique, were used to combine facial expression and EEG. In the experiment, the subjects were instructed to watch clips designed to elicit an emotional response and then reported their emotional state. We used two emotion datasets, the Database for Emotion Analysis using Physiological Signals (DEAP) and the MAHNOB human-computer interface database (MAHNOB-HCI), to evaluate our method. In addition, we performed an online experiment to make our method more robust. We experimentally demonstrated that our method produces state-of-the-art results in terms of binary valence/arousal classification on the DEAP and MAHNOB-HCI datasets. Moreover, in the online experiment we achieved 69.75% accuracy for the valence space and 70.00% accuracy for the arousal space after fusion, each of which surpassed the best-performing single modality (69.28% for the valence space and 64.00% for the arousal space). The results suggest that combining facial expressions and EEG information for emotion recognition compensates for their defects as single information sources. The novelty of this work is as follows. To begin with, we combined facial expression and EEG to improve the performance of emotion recognition. Furthermore, we used transfer learning techniques to tackle the lack of data and achieve higher accuracy for facial expression. Finally, in addition to implementing the widely used fusion method based on enumerating different weights between the two models, we also explored a novel fusion method applying a boosting technique. Full article
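The enumerated-weight decision-level fusion mentioned above can be sketched as follows. This is an illustrative toy, not the paper's code: it assumes each modality outputs a per-sample probability of the positive class (e.g. high valence), enumerates the fusion weight on a grid, and keeps the weight that maximises validation accuracy. All names and the synthetic data are assumptions.

```python
import numpy as np

def fuse_by_weight_enumeration(p_face, p_eeg, y_val, step=0.01):
    """Decision-level fusion sketch: enumerate w in [0, 1], fuse the two
    modalities' probabilities as w*p_face + (1-w)*p_eeg, and keep the w
    that maximises binary accuracy on a validation set.  The interface
    is illustrative, not taken from the paper."""
    best_w, best_acc = 0.0, -1.0
    for w in np.arange(0.0, 1.0 + step, step):
        fused = w * p_face + (1.0 - w) * p_eeg
        acc = float(np.mean((fused >= 0.5) == y_val))
        if acc > best_acc:
            best_w, best_acc = float(w), acc
    return best_w, best_acc

# Toy validation data: the "EEG" scores are slightly more informative here.
rng = np.random.default_rng(7)
y = rng.integers(0, 2, size=200)
p_face = np.clip(y * 0.6 + rng.normal(0.2, 0.3, 200), 0.0, 1.0)
p_eeg = np.clip(y * 0.7 + rng.normal(0.15, 0.2, 200), 0.0, 1.0)
w, acc = fuse_by_weight_enumeration(p_face, p_eeg, y)
print(w, acc)
```

In practice the weight would be chosen on held-out validation data and then applied to the test set; the adaptive-boosting variant the authors also explore replaces this fixed-weight grid search with a learned combination.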
(This article belongs to the Special Issue Special Issue on the Future of Intelligent Human-Computer Interface)
Open AccessArticle
Analysis of the Structure and Use of Digital Resources on the Websites of the Main Football Clubs in Europe
Future Internet 2019, 11(5), 104; https://doi.org/10.3390/fi11050104
Received: 24 January 2019 / Revised: 24 February 2019 / Accepted: 10 April 2019 / Published: 29 April 2019
Viewed by 570 | PDF Full-text (565 KB) | HTML Full-text | XML Full-text
Abstract
Football clubs can be considered global brands, and, just like any other brand, they need to face the challenge of adapting to digital communication. Nevertheless, communication sciences research in this field is scarce, so the main purpose of this work is to analyze the digital communication of the main football clubs in Europe in order to identify and describe the strategies they follow to make themselves known on the internet and to interact with their users. Specifically, the article studies the characteristics of the web pages (considered the main showcase of a brand/team in the digital environment) of the fifteen best teams in the UEFA ranking, to establish what type of structure and what online communication resources they use. Through a descriptive and comparative analysis, the study concludes, among other aspects, that the management of communication is effective, but also warns that none of the analyzed teams takes full advantage of the possibilities of interaction with the user offered by the digital scenario. Full article
(This article belongs to the Special Issue Future Intelligent Systems and Networks 2019)
Open AccessFeature PaperReview
Computational Social Science of Disasters: Opportunities and Challenges
Future Internet 2019, 11(5), 103; https://doi.org/10.3390/fi11050103
Received: 20 March 2019 / Revised: 19 April 2019 / Accepted: 23 April 2019 / Published: 26 April 2019
Viewed by 961 | PDF Full-text (1036 KB) | HTML Full-text | XML Full-text
Abstract
Disaster events and their economic impacts are trending upward, and climate projection studies suggest that the risks of disaster will continue to increase in the near future. Despite the broad and increasing social effects of these events, the empirical basis of disaster research is often weak, partially due to the natural paucity of observed data. At the same time, some of the early research regarding social responses to disasters has become outdated as social, cultural, and political norms have changed. The digital revolution, the open data trend, and the advancements in data science provide new opportunities for social science disaster research. We introduce the term computational social science of disasters (CSSD), which can be formally defined as the systematic study of the social behavioral dynamics of disasters utilizing computational methods. In this paper, we discuss and showcase the opportunities and the challenges in this new approach to disaster research. Following a brief review of the fields that relate to CSSD, namely traditional social sciences of disasters, computational social science, and crisis informatics, we examine how advances in Internet technologies offer a new lens through which to study disasters. By identifying gaps in the literature, we show how this new field could address ways to advance our understanding of the social and behavioral aspects of disasters in a digitally connected world. In doing so, our goal is to bridge the gap between data science and the social sciences of disasters in rapidly changing environments. Full article
(This article belongs to the Special Issue 10th Anniversary Feature Papers)
Open AccessArticle
Real-Time Monitoring of Passenger’s Psychological Stress
Future Internet 2019, 11(5), 102; https://doi.org/10.3390/fi11050102
Received: 29 March 2019 / Revised: 20 April 2019 / Accepted: 24 April 2019 / Published: 26 April 2019
Viewed by 598 | PDF Full-text (796 KB) | HTML Full-text | XML Full-text
Abstract
This article addresses the question of passengers' experience across different transport modes. It presents the main results of a pilot study in which the stress levels experienced by a traveller were assessed and predicted over two long journeys. Accelerometer measures and several physiological signals (electrodermal activity, blood volume pulse, and skin temperature) were recorded using a smart wristband while travelling from Grenoble to Bilbao. Based on the user's feedback, three events of high stress and one period of moderate activity with low stress were identified offline. Over these periods, feature extraction and machine learning were performed on the collected sensor data to build a personalized regression model with the user's stress levels as output. On this basis, a smartphone application was developed to record and visualize a timely estimate of the stress level from the traveller's physiological signals. This setting was put to the test during another journey, from Grenoble to Brussels, where the same user's stress levels were predicted in real time by the smartphone application. The proportion of correctly classified stress-less time windows ranged from 92.6% to 100%, depending on the participant's level of activity. By design, this study represents a first step towards real-life, ambulatory monitoring of passengers' stress while travelling. Full article
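The window-then-regress pipeline described above can be sketched in a minimal form. This is not the study's actual feature set or model (which was personalized to the traveller): it assumes a single electrodermal-activity trace sampled at 4 Hz, extracts simple per-window statistics, and fits a least-squares regressor whose continuous output is thresholded into stress / no-stress. The synthetic trace and all parameter values are illustrative.

```python
import numpy as np

np.random.seed(0)  # reproducible illustration

def window_features(signal, fs, win_s=10.0):
    """Slice a 1-D physiological signal into non-overlapping windows of
    win_s seconds and extract simple statistics per window (mean, standard
    deviation, range).  Window length and features are illustrative."""
    n = int(win_s * fs)
    wins = signal[: len(signal) // n * n].reshape(-1, n)
    return np.column_stack([wins.mean(axis=1), wins.std(axis=1),
                            wins.max(axis=1) - wins.min(axis=1)])

# Toy EDA trace: a "stressful" high-conductance middle segment (10 min each).
fs = 4                                             # Hz, plausible for a wristband
eda = np.concatenate([np.random.normal(0.3, 0.02, 2400),
                      np.random.normal(0.8, 0.05, 2400),
                      np.random.normal(0.3, 0.02, 2400)])
X = window_features(eda, fs)                       # 180 windows x 3 features
y = np.concatenate([np.zeros(60), np.ones(60), np.zeros(60)])  # stress labels

A = np.column_stack([X, np.ones(len(X))])          # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)       # least-squares regression
stress = A @ coef                                  # continuous stress estimate
acc = float(np.mean((stress >= 0.5) == (y >= 0.5)))
print(acc)
```

A real-time application would apply the same feature extraction to each incoming window from the wristband and display the regressor's output as a live stress estimate, which is essentially what the smartphone application in the study does.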
(This article belongs to the Special Issue Internet of Things for Smart City Applications)
Future Internet EISSN 1999-5903 Published by MDPI AG, Basel, Switzerland