Table of Contents

Future Internet, Volume 11, Issue 1 (January 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: Due to its low power consumption and long-range communication qualities, LPWAN (especially LoRa) [...]
Displaying articles 1-23
Open Access Feature Paper Review: Surveying Human Habit Modeling and Mining Techniques in Smart Spaces
Future Internet 2019, 11(1), 23; https://doi.org/10.3390/fi11010023 (registering DOI)
Received: 28 December 2018 / Revised: 13 January 2019 / Accepted: 16 January 2019 / Published: 19 January 2019
Viewed by 94 | PDF Full-text (1144 KB)
Abstract
A smart space is an environment, mainly equipped with Internet-of-Things (IoT) technologies, able to provide services to humans, helping them perform daily tasks by monitoring the space and autonomously executing actions, giving suggestions, and sending alarms. Approaches suggested in the literature differ in terms of required facilities, possible applications, the amount of human intervention required, and the ability to support multiple users at the same time while adapting to changing needs. In this paper, we propose a Systematic Literature Review (SLR) that classifies the most influential approaches in the area of smart spaces according to a set of dimensions identified by answering a set of research questions. These dimensions allow choosing a specific method or approach according to the available sensors, the amount of labeled data, the need for visual analysis, and the requirements in terms of enactment and decision-making on the environment. Additionally, the paper identifies a set of challenges to be addressed by future research in the field. Full article
(This article belongs to the Special Issue 10th Anniversary Feature Papers)
Open Access Article: Improved Arabic–Chinese Machine Translation with Linguistic Input Features
Future Internet 2019, 11(1), 22; https://doi.org/10.3390/fi11010022 (registering DOI)
Received: 4 December 2018 / Revised: 7 January 2019 / Accepted: 16 January 2019 / Published: 19 January 2019
Viewed by 109 | PDF Full-text (7225 KB) | HTML Full-text | XML Full-text
Abstract
This study presents linguistically augmented models of phrase-based statistical machine translation (PBSMT) using different linguistic features (factors) on top of the source surface form. The architecture addresses two major problems in machine translation: the poor performance of direct translation from a highly inflected, morphologically complex language into a morphologically poor one, and the data-sparseness issue, which becomes a significant challenge under low-resource conditions. We use three factors (lemma, part-of-speech tags, and morphological features) to enrich the input side with additional information and improve the quality of direct translation from Arabic to Chinese, considering the importance and global presence of this language pair as well as the limited work on machine translation between these two languages. To deal with out-of-vocabulary (OOV) and missing words, we propose the best combination of factors and models based on alternative paths. The proposed models were compared with the standard PBSMT model, which represents the baseline of this work, and with two enhanced approaches tokenized by a state-of-the-art external tool that has proven useful for Arabic as a morphologically rich and complex language. The experiments were performed with the Moses decoder on freely available data extracted from a multilingual corpus of United Nations documents (MultiUN). Results of a preliminary evaluation in terms of BLEU scores show that the use of linguistic features on the Arabic side considerably outperforms the baseline and tokenized approaches; the system also consistently reduces the OOV rate. Full article
(This article belongs to the Section Big Data and Augmented Intelligence)
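The factored input described in this abstract can be illustrated with a short sketch. In factored PBSMT (as supported by the Moses decoder), each source token carries its extra factors joined by `|`; the token tuples and the Arabic analyses below are hypothetical examples, not data from the paper.

```python
def to_factored(tokens):
    """Join per-token factors with '|' as expected by Moses factored models.

    Each token is a (surface, lemma, pos, morph) tuple; the factored input
    line is 'surface|lemma|pos|morph' per token, space-separated.
    """
    return " ".join("|".join(factors) for factors in tokens)

# Hypothetical analyses of two Arabic tokens (illustrative only).
tokens = [
    ("yaktubu", "katab", "VERB", "3MS.IPFV"),
    ("risalatan", "risala", "NOUN", "FS.ACC"),
]
factored_line = to_factored(tokens)
```

Enriching the surface form this way lets the decoder back off to lemma or POS factors when the full surface word is out of vocabulary.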
Open Access Article: An Explorative Model to Assess Individuals’ Phubbing Risk
Future Internet 2019, 11(1), 21; https://doi.org/10.3390/fi11010021
Received: 30 December 2018 / Revised: 10 January 2019 / Accepted: 11 January 2019 / Published: 18 January 2019
Viewed by 100 | PDF Full-text (327 KB) | HTML Full-text | XML Full-text
Abstract
Phubbing could be defined as a new form of addiction; however, checking the phone and ignoring the speaker could also be linked to the increased availability of virtual social environments. We developed a multidimensional model of phubbing that considers psychological dimensions and information and communication technology related habits. We collected data through online questionnaires and surveys. The best model obtained from our data comprised Information and Communication Technologies’ (ICTs) usage behaviours, Trait Anxiety, Virtual Sense of Community, and Neuroticism. Finally, our study confirmed a strong connection between phubbing and online addiction behaviours. Full article
(This article belongs to the Special Issue 10th Anniversary Feature Papers)
Open Access Article: ESCAPE: Evacuation Strategy through Clustering and Autonomous Operation in Public Safety Systems
Future Internet 2019, 11(1), 20; https://doi.org/10.3390/fi11010020
Received: 24 December 2018 / Revised: 10 January 2019 / Accepted: 16 January 2019 / Published: 17 January 2019
Viewed by 161 | PDF Full-text (906 KB) | HTML Full-text | XML Full-text
Abstract
Natural disasters and terrorist attacks pose a significant threat to human society and have created an urgent need for comprehensive and efficient evacuation strategies. In this paper, a novel evacuation-planning mechanism is introduced to support a distributed and autonomous evacuation process within the operation of a public safety system, where evacuees exploit the capabilities of the proposed ESCAPE service to take the actions most beneficial to themselves. The ESCAPE service was developed based on the principles of reinforcement learning and game theory, and is executed at two decision-making layers. Initially, evacuees are modeled as stochastic learning automata that select the evacuation route they want to take based on its physical characteristics and on past decisions during the current evacuation. A cluster of evacuees is thus created per evacuation route, and the evacuees decide whether or not to evacuate through that route at the current time slot. The evacuees’ competitive behavior is modeled as a non-cooperative minority game per evacuation route. A distributed, low-complexity evacuation-planning algorithm (ESCAPE) is introduced to implement both decision-making layers. Finally, the proposed framework is evaluated through modeling and simulation under several scenarios, and its benefits are demonstrated. Full article
(This article belongs to the Special Issue 10th Anniversary Feature Papers)
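The first decision-making layer, in which evacuees act as stochastic learning automata, can be sketched with a standard linear reward-inaction update; the learning rate and the binary reward signal below are illustrative assumptions, not the paper's exact scheme.

```python
def lri_update(probs, chosen, rewarded, lr=0.1):
    """Linear reward-inaction (L_RI) update for a stochastic learning automaton.

    On a reward, probability mass shifts toward the chosen route; with no
    reward, the distribution is left unchanged (the 'inaction' part).
    """
    if not rewarded:
        return probs
    return [p + lr * (1.0 - p) if i == chosen else p * (1.0 - lr)
            for i, p in enumerate(probs)]

# Route 0 was chosen and the step was beneficial (reward received).
route_probs = lri_update([0.5, 0.3, 0.2], chosen=0, rewarded=True)
```

The update keeps the route probabilities a valid distribution while gradually concentrating mass on routes that proved beneficial.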
Open Access Article: A Reinforcement Learning Based Intercell Interference Coordination in LTE Networks
Future Internet 2019, 11(1), 19; https://doi.org/10.3390/fi11010019
Received: 6 December 2018 / Revised: 10 January 2019 / Accepted: 14 January 2019 / Published: 17 January 2019
Viewed by 121 | PDF Full-text (549 KB) | HTML Full-text | XML Full-text
Abstract
Long Term Evolution (LTE) networks, which are cellular networks, are subject to many impairments due to the nature of the transmission channel used, i.e., the air. Intercell interference is the main impairment faced by LTE networks, as they use a frequency reuse-one scheme in which the whole bandwidth is used in each cell. In this paper, we propose a fully dynamic intercell interference coordination scheme with no bandwidth partitioning for downlink LTE networks, using a reinforcement learning approach. The proposed scheme jointly performs resource allocation and power allocation, and its purpose is to minimize intercell interference in LTE networks. The performance of the proposed scheme shows quality-of-service improvements in terms of SINR, packet loss, and delay compared to other algorithms. Full article
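The abstract does not specify the exact reinforcement learning formulation, so as a generic illustration, one tabular Q-learning step over (state, action) pairs might look as follows; the state label and the resource-block/power action names are assumptions made for the sketch.

```python
from collections import defaultdict

def q_learning_step(Q, state, action, reward, next_state, actions,
                    alpha=0.1, gamma=0.9):
    """One tabular Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

Q = defaultdict(float)                       # Q-table, zero-initialized
actions = ["rb0_low_power", "rb0_high_power"]  # hypothetical RB/power choices
q_learning_step(Q, "cell_edge_user", "rb0_low_power", reward=1.0,
                next_state="cell_edge_user", actions=actions)
```

In an ICIC setting the reward would typically be derived from measured SINR or throughput, which the paper's scheme aims to improve.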
Open Access Article: Adaptive Downward/Upward Routing Protocol for Mobile-Sensor Networks
Future Internet 2019, 11(1), 18; https://doi.org/10.3390/fi11010018
Received: 20 November 2018 / Revised: 4 January 2019 / Accepted: 11 January 2019 / Published: 15 January 2019
Viewed by 158 | PDF Full-text (475 KB) | HTML Full-text | XML Full-text
Abstract
Recently, mobility support has become an important requirement in various Wireless Sensor Networks (WSNs). Low-power and Lossy Networks (LLNs) are a special type of WSN that tolerates a certain degree of packet loss. However, due to the strict constraints on computation, energy, and memory in LLNs, most routing protocols only support static network topologies. Data collection and data dissemination are the two basic traffic modes in LLNs. Unlike data collection, data dissemination is less investigated in LLNs. There are two kinds of data-dissemination methods: point-to-multipoint and point-to-point. In this paper, we focus on the point-to-point method, which requires the source node to build routes to reach the destination node. We propose an adaptive routing protocol that integrates point-to-point traffic with data-collection traffic and supports highly mobile scenarios. The protocol quickly reacts to the movement of nodes, making faster next-hop selection decisions in data collection and dynamically building routes for point-to-point traffic. Simulation results show that our work outperforms two generic ad hoc routing protocols, AODV and flooding, on different performance metrics, and confirm its efficiency in highly mobile scenarios with multiple traffic patterns. Full article
(This article belongs to the Special Issue 10th Anniversary Feature Papers)
Open Access Article: Forward-Looking Element Recognition Based on the LSTM-CRF Model with the Integrity Algorithm
Future Internet 2019, 11(1), 17; https://doi.org/10.3390/fi11010017
Received: 31 October 2018 / Revised: 30 December 2018 / Accepted: 8 January 2019 / Published: 14 January 2019
Viewed by 117 | PDF Full-text (2133 KB) | HTML Full-text | XML Full-text
Abstract
A state-of-the-art entity recognition system relies on deep learning under data-driven conditions. In this paper, we combine deep learning with linguistic features and propose the long short-term memory-conditional random field (LSTM-CRF) model with the integrity algorithm. This approach primarily uses part-of-speech (POS) syntactic rules to correct the boundaries of LSTM-CRF model annotations, improving performance by raising the integrity of the elements. The method incorporates the advantages of both the data-driven method and dependency syntax, and improves the precision of the elements without losing recall. Experiments show that the integrity algorithm is not only easy to combine with other neural network models, but also yields better overall results than several advanced methods. In addition, we conducted cross-domain experiments based on a multi-industry corpus in the financial field. The results indicate that the method can be applied to other industries. Full article
(This article belongs to the Special Issue Innovative Topologies and Algorithms for Neural Networks)
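The POS-based boundary correction can be sketched with a single illustrative rule: extend a predicted element rightward while the following token is tagged as a noun. The BIO label names and tag set below are hypothetical; the paper's actual rule set is richer than this one rule.

```python
def correct_boundaries(tokens, pos_tags, labels):
    """Extend a predicted element rightward while the next token is a noun.

    A toy POS integrity rule of the kind the abstract describes: if a token
    inside an element (B-ELEM/I-ELEM) is followed by an 'O' token whose POS
    tag starts with 'NN', pull that token into the element.
    """
    labels = labels[:]  # do not mutate the caller's list
    for i in range(len(tokens) - 1):
        if (labels[i].endswith("ELEM") and labels[i + 1] == "O"
                and pos_tags[i + 1].startswith("NN")):
            labels[i + 1] = "I-ELEM"
    return labels
```

Rules like this repair boundary errors of the neural tagger without touching its high-recall predictions.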
Open Access Review: MAC Layer Protocols for Internet of Things: A Survey
Future Internet 2019, 11(1), 16; https://doi.org/10.3390/fi11010016
Received: 27 November 2018 / Revised: 17 December 2018 / Accepted: 18 December 2018 / Published: 14 January 2019
Viewed by 196 | PDF Full-text (2380 KB) | HTML Full-text | XML Full-text
Abstract
Due to the wide variety of uses and the diversity of features required by applications, Internet of Things (IoT) technologies are moving forward at a strong pace to meet this demand while also trying to meet the time-to-market of these applications. The characteristics required by applications, such as coverage area, scalability, transmission data rate, and applicability, relate to the Physical and Medium Access Control (MAC) layer designs of protocols. This paper presents an in-depth study of MAC layer protocols used in IoT, with a detailed description of the protocols grouped by short- and long-distance coverage. For the short-range group, the following are considered: Radio Frequency Identification (RFID), Near Field Communication (NFC), Bluetooth IEEE 802.15.1, Bluetooth Low Energy, IEEE 802.15.4, Wireless Highway Addressable Remote Transducer Protocol (Wireless-HART), Z-Wave, Weightless, and IEEE 802.11 a/b/g/n/ah. For the long-range group, Narrow Band IoT (NB-IoT), Long Term Evolution (LTE) CAT-0, LTE CAT-M, LTE CAT-N, Long Range Protocol (LoRa), and SigFox are studied. A comparative study is performed for each group in order to provide insights and a reference study for IoT applications, considering their characteristics, limitations, and behavior. Open research issues on the topic are also identified. Full article
(This article belongs to the Special Issue 10th Anniversary Feature Papers)
Open Access Article: Multi-Topology Routing Algorithms in SDN-Based Space Information Networks
Future Internet 2019, 11(1), 15; https://doi.org/10.3390/fi11010015
Received: 2 December 2018 / Revised: 27 December 2018 / Accepted: 8 January 2019 / Published: 12 January 2019
Viewed by 190 | PDF Full-text (2597 KB) | HTML Full-text | XML Full-text
Abstract
To design an efficient routing strategy for space information networks (SIN), whose structure is complex and whose topology changes dynamically, this paper establishes a SIN management architecture based on Software-Defined Networking (SDN). A routing algorithm flow based on a snapshot sequence is designed. For spatial tasks with different Quality of Service (QoS) requirements, the concept of an integrated link weight is proposed, and the Warshall–Floyd algorithm is used to design the optimal routing strategy. A Task-oriented Bandwidth Resource Allocation (TBA) algorithm is proposed for multiple spatial tasks sharing the same link. Simulation results show that the algorithm can effectively guarantee the priority transmission of important tasks and avoid unnecessary waste of bandwidth resources. Full article
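The Warshall–Floyd step mentioned in the abstract is the standard all-pairs shortest-path recurrence over a matrix of link weights; how the paper combines delay, bandwidth, and task priority into the single "integrated" scalar weight is not reproduced here, so plain numeric weights stand in for it.

```python
def floyd_warshall(w):
    """All-pairs shortest paths on an n x n weight matrix.

    w[i][j] is the integrated link weight from node i to node j
    (float('inf') means no direct link). Returns the distance matrix.
    """
    n = len(w)
    d = [row[:] for row in w]  # work on a copy
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```

Per snapshot of the SIN topology, re-running this O(n^3) recurrence yields optimal routes under the current integrated weights.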
Open Access Editorial: Acknowledgement to Reviewers of Future Internet in 2018
Future Internet 2019, 11(1), 14; https://doi.org/10.3390/fi11010014
Published: 10 January 2019
Viewed by 174 | PDF Full-text (216 KB) | HTML Full-text | XML Full-text
Abstract
Rigorous peer review is the cornerstone of high-quality academic publishing [...] Full article
Open Access Article: A Crowdsensing Platform for Monitoring of Vehicular Emissions: A Smart City Perspective
Future Internet 2019, 11(1), 13; https://doi.org/10.3390/fi11010013
Received: 13 December 2018 / Revised: 29 December 2018 / Accepted: 3 January 2019 / Published: 8 January 2019
Viewed by 242 | PDF Full-text (1248 KB) | HTML Full-text | XML Full-text
Abstract
Historically, cities follow reactive planning models where managers make decisions as problems occur. On the other hand, the exponential growth of Information and Communication Technologies (ICT) has allowed the connection of a diverse array of sensors, devices, systems, and objects. These objects can then generate data that can be transformed into information and used in a more efficient urban planning paradigm, one that allows decisions to be made before the occurrence of problems and emergencies. Therefore, this article aims to propose a platform capable of estimating the amount of carbon dioxide based on sensor readings in vehicles, indirectly contributing to a more proactive city planning based on the monitoring of vehicular pollution. Crowdsensing techniques and an On-Board Diagnostic (OBD-II) reader are used to extract data from vehicles in real time, which are then stored locally on the devices used to perform data collection. With the performed experiments, it was possible to extract information about the operation of the vehicles and their dynamics when moving in a city, providing valuable information that can support auxiliary tools for the management of urban centers. Full article
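A common way to estimate CO2 output from OBD-II data is via the mass air flow (MAF) reading. The sketch below assumes stoichiometric gasoline combustion (air/fuel ratio of about 14.7) and roughly 3.2 g of CO2 per gram of fuel burned; both constants are illustrative assumptions, not values taken from the paper.

```python
def co2_g_per_s(maf_g_per_s, afr=14.7, co2_per_g_fuel=3.2):
    """Rough CO2 estimate (g/s) from the OBD-II mass air flow reading (g/s).

    Assumes stoichiometric combustion: fuel flow = MAF / AFR, and each gram
    of gasoline yields ~3.2 g of CO2. Both constants are approximations.
    """
    fuel_g_per_s = maf_g_per_s / afr
    return fuel_g_per_s * co2_per_g_fuel
```

Integrating this rate over a trip gives the kind of per-vehicle emission estimate a crowdsensing platform could aggregate city-wide.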
Open Access Article: Joint Uplink and Downlink Resource Allocation for D2D Communications System
Future Internet 2019, 11(1), 12; https://doi.org/10.3390/fi11010012
Received: 17 December 2018 / Revised: 31 December 2018 / Accepted: 2 January 2019 / Published: 6 January 2019
Viewed by 289 | PDF Full-text (4625 KB) | HTML Full-text | XML Full-text
Abstract
In cellular networks, device-to-device communications can increase spectrum efficiency, but some conventional schemes only consider uplink or downlink resource allocation. In this paper, we propose a joint uplink and downlink resource allocation scheme that maximizes the system capacity and guarantees the signal-to-noise-and-interference ratio of both cellular users and device-to-device pairs. The optimization problem is formulated as a mixed-integer nonlinear problem, which is usually NP-hard. To achieve a reasonable resource allocation, the optimization problem is divided into two sub-problems: power allocation and channel assignment. It is proved that the objective function of the power-control sub-problem is convex, from which the optimal transmission power can be obtained. The Hungarian algorithm is used to achieve joint uplink and downlink channel assignment. The proposed scheme improves system capacity and increases spectrum efficiency. Numerical results reveal that the proposed joint uplink and downlink scheme performs better than schemes with independent allocation. Full article
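The channel assignment here is a classic assignment problem. The Hungarian algorithm solves it in O(n^3); the brute-force sketch below, with a small hypothetical capacity matrix, is meant only to make the objective explicit, not to reproduce the paper's method.

```python
from itertools import permutations

def best_assignment(capacity):
    """Exhaustively match D2D pairs to channels, maximizing total capacity.

    capacity[i][j] is the (hypothetical) achievable capacity of D2D pair i
    on channel j. The Hungarian algorithm finds the same optimum without
    enumerating all n! assignments.
    """
    n = len(capacity)
    return max(
        (sum(capacity[i][perm[i]] for i in range(n)), perm)
        for perm in permutations(range(n))
    )

total, assignment = best_assignment([[5, 1, 2], [2, 6, 1], [1, 2, 7]])
```

With the optimal power levels fixed by the convex sub-problem, one such assignment is computed for the uplink and downlink channels jointly.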
Open Access Article: Application of a Non-Immersive VR, IoT Based Approach to Help Moroccan Students Carry Out Practical Activities in a Personal Learning Style
Future Internet 2019, 11(1), 11; https://doi.org/10.3390/fi11010011
Received: 19 November 2018 / Revised: 27 December 2018 / Accepted: 28 December 2018 / Published: 4 January 2019
Viewed by 330 | PDF Full-text (1657 KB) | HTML Full-text | XML Full-text
Abstract
In the last few years, the evolution of new Information and Communication Technologies (ICT) and networks has enabled the appearance and development of several platforms and tools that serve to operate and distribute learning content. In some domains, especially scientific ones, learners need to work on practical activities, using specific products and equipment to complete, consolidate, or verify their conceptual acquisitions. However, given the increasing number of learners in Moroccan institutions, it is hard and expensive for developing countries like Morocco to ensure the appropriate conditions for each learner to perform such activities. Most of the suggested platforms and tools cannot solve this issue, because they are inefficient at offering students good interactive practical activities. Virtual Reality (VR) and the Internet of Things (IoT), two of the most remarkable technologies of the last few decades, can be used as an alternative to create a virtual environment where the learner can carry out practical activities as in the real world. In such an environment, learners interact with both virtual and physical objects. In this research paper, we propose a new approach based on VR and IoT to enhance learning by providing learners with an educational space where they can perform practical activities. The proposed hybrid approach has been used to create a virtual environment where learners (final-year high-school students) can measure ultrasonic velocity in air. The evaluation results show that manipulating and coupling real objects with virtual 3D objects strikingly improves learning outcomes, as it allows learners to feel linked to the real context. Full article
(This article belongs to the Section Internet of Things)
Open Access Article: THBase: A Coprocessor-Based Scheme for Big Trajectory Data Management
Future Internet 2019, 11(1), 10; https://doi.org/10.3390/fi11010010
Received: 5 November 2018 / Revised: 12 December 2018 / Accepted: 27 December 2018 / Published: 3 January 2019
Viewed by 263 | PDF Full-text (2321 KB) | HTML Full-text | XML Full-text
Abstract
The rapid development of distributed technology has made it possible to store and query massive trajectory data. As a result, a variety of schemes for big trajectory data management have been proposed. However, most of these schemes do not consider data transmission, which affects query efficiency. In view of this, we present THBase, a coprocessor-based scheme for big trajectory data management in HBase. THBase introduces a segment-based data model and a moving-object-based partition model to store massive trajectory data, and exploits a hybrid local secondary index structure based on the Observer coprocessor to accelerate spatiotemporal queries. Furthermore, it adopts certain maintenance strategies to ensure the colocation of relevant data. Based on these, THBase designs node-locality-based parallel query algorithms using the Endpoint coprocessor to reduce the overhead caused by data transmission, thus ensuring efficient query performance. Experiments on ship trajectory datasets show that our scheme significantly outperforms other schemes. Full article
Open Access Article: Object Detection Network Based on Feature Fusion and Attention Mechanism
Future Internet 2019, 11(1), 9; https://doi.org/10.3390/fi11010009
Received: 9 November 2018 / Revised: 20 December 2018 / Accepted: 25 December 2018 / Published: 2 January 2019
Viewed by 397 | PDF Full-text (2421 KB) | HTML Full-text | XML Full-text
Abstract
In recent years, almost all top-performing object detection networks have relied on CNN (convolutional neural network) features. In this work, we add feature fusion to the object detection network to obtain a better CNN feature, one that combines deep but semantically strong features with shallow but high-resolution ones, thus improving performance on small objects. An attention mechanism is also applied to our object detection network, AF R-CNN (attention mechanism and convolution feature fusion based object detection), to enhance the impact of significant features and weaken background interference. AF R-CNN is a single end-to-end network. We use the pre-trained VGG-16 network to extract CNN features. Our detection network is trained on the PASCAL VOC 2007 and 2012 datasets. Empirical evaluation on the PASCAL VOC 2007 dataset demonstrates the effectiveness of our approach: AF R-CNN achieves an object detection accuracy of 75.9% on PASCAL VOC 2007, six points higher than Faster R-CNN. Full article
(This article belongs to the Special Issue Innovative Topologies and Algorithms for Neural Networks)
Open Access Article: An Agent Based Model to Analyze the Bitcoin Mining Activity and a Comparison with the Gold Mining Industry
Future Internet 2019, 11(1), 8; https://doi.org/10.3390/fi11010008
Received: 14 November 2018 / Revised: 15 December 2018 / Accepted: 27 December 2018 / Published: 2 January 2019
Viewed by 266 | PDF Full-text (508 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we present an analysis of the mining process of two popular assets, Bitcoin and gold. The analysis highlights that Bitcoin, more specifically its underlying technology, is a “safe haven” that allows facing modern environmental challenges better than gold. Our analysis emphasizes that cryptocurrency systems have a much smaller social and economic impact than traditional financial systems. We present an analysis of the several stages needed to produce an ounce of gold, and an artificial agent-based market model that simulates the Bitcoin mining process and allows the quantification of Bitcoin mining costs. In this market model, miners validate Bitcoin transactions using proof of work as the consensus mechanism, get a reward in Bitcoins, sell a fraction of them to cover their expenses, and stay competitive in the market by buying and divesting hardware units and by adjusting their expenses, turning their machines off and on according to the signals provided by a technical-analysis indicator, the so-called relative strength index. Full article
(This article belongs to the Section Smart System infrastructures and Cybersecurity)
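The relative strength index that drives the miners' on/off decisions can be computed, in its simplest form, as follows; this sketch uses plain averages over the window rather than Wilder's exponential smoothing, and the window length is the conventional 14, which may differ from the paper's choice.

```python
def rsi(prices, period=14):
    """Relative Strength Index over the last `period` price changes.

    RSI = 100 - 100 / (1 + avg_gain / avg_loss); simple averages are used
    here instead of Wilder's smoothing, for brevity.
    """
    deltas = [b - a for a, b in zip(prices, prices[1:])][-period:]
    gains = sum(d for d in deltas if d > 0)
    losses = -sum(d for d in deltas if d < 0)
    if losses == 0:
        return 100.0  # all moves up: maximally overbought
    return 100.0 - 100.0 / (1.0 + gains / losses)
```

An agent could, for example, power machines down when RSI signals an oversold Bitcoin price and back up when it recovers.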
Open Access Article: Layer-Wise Compressive Training for Convolutional Neural Networks
Future Internet 2019, 11(1), 7; https://doi.org/10.3390/fi11010007
Received: 30 November 2018 / Revised: 17 December 2018 / Accepted: 22 December 2018 / Published: 28 December 2018
Viewed by 335 | PDF Full-text (737 KB) | HTML Full-text | XML Full-text
Abstract
Convolutional Neural Networks (CNNs) are brain-inspired computational models designed to recognize patterns. Recent advances demonstrate that CNNs are able to achieve, and often exceed, human capabilities in many application domains. With several million parameters, even the simplest CNN has a large model size. This characteristic is a serious concern for deployment on resource-constrained embedded systems, where compression stages are needed to meet stringent hardware constraints. In this paper, we introduce a novel accuracy-driven compressive training algorithm. It consists of a two-stage flow: first, layers are sorted by means of heuristic rules according to their significance; second, a modified stochastic gradient descent optimization is applied to the less significant layers so that their representation collapses into a constrained subspace. Experimental results demonstrate that our approach achieves remarkable compression rates with low accuracy loss (<1%). Full article
(This article belongs to the Special Issue Selected Papers from INTESA Workshop 2018)

Open AccessFeature PaperArticle A Framework for Improving the Engagement of Medical Practitioners in an E-Training Platform for Tuberculosis Care and Prevention
Future Internet 2019, 11(1), 6; https://doi.org/10.3390/fi11010006
Received: 26 November 2018 / Revised: 11 December 2018 / Accepted: 20 December 2018 / Published: 28 December 2018
Viewed by 386 | PDF Full-text (2469 KB) | HTML Full-text | XML Full-text
Abstract
We propose a new framework to improve the engagement of medical practitioners in a planned e-training platform for the successful identification and effective management of presumptive cases of tuberculosis (TB) in Pakistan. Our work is aligned with the World Health Organization’s TB-DOTS (Directly
[...] Read more.
We propose a new framework to improve the engagement of medical practitioners in a planned e-training platform for the successful identification and effective management of presumptive cases of tuberculosis (TB) in Pakistan. Our work is aligned with the World Health Organization’s TB-DOTS (Directly Observed Treatment Short-course) strategy for promoting the effective management of tuberculosis. We start by presenting previous work done at Mercy Corps Pakistan for training medical practitioners, then present the results of a recent survey we administered to a random sample of medical practitioners in Pakistan to learn about their requirements and readiness to embrace a new e-training platform and methodology. Informed by feedback from the survey, we formulated a detailed requirement analysis of the five key learning areas (or phases) that we believe are fundamental to the success of a TB e-training platform. Moreover, survey results revealed that an on-spot, on-demand, and competency-based learning tool can potentially improve the engagement of medical practitioners in the process. Building on the insights gained from the survey, we finally describe our initial UX (user experience) prototypes for phase 1, which corresponds to the identification of presumptive tuberculosis cases. Full article
(This article belongs to the Special Issue New Advances in E-Health)

Open AccessArticle Forecasting E-Commerce Products Prices by Combining an Autoregressive Integrated Moving Average (ARIMA) Model and Google Trends Data
Future Internet 2019, 11(1), 5; https://doi.org/10.3390/fi11010005
Received: 22 November 2018 / Revised: 13 December 2018 / Accepted: 20 December 2018 / Published: 24 December 2018
Viewed by 356 | PDF Full-text (778 KB) | HTML Full-text | XML Full-text
Abstract
E-commerce is increasingly becoming the main instrument for selling goods to the mass market. This has led to growing interest in algorithms and techniques able to predict products’ future prices, since they allow us to define smart systems able to improve
[...] Read more.
E-commerce is increasingly becoming the main instrument for selling goods to the mass market. This has led to growing interest in algorithms and techniques able to predict products’ future prices, since they allow us to define smart systems able to improve the quality of life by suggesting more affordable goods and services. The joint use of time series, reputation, and sentiment analysis clearly represents one important approach to this research issue. In this paper we present Price Probe, a suite of software tools developed to perform forecasting on products’ prices. Its primary aim is to predict the future price trend of products, generating a customized forecast through the exploitation of an autoregressive integrated moving average (ARIMA) model. We evaluated the effectiveness of the proposed approach on one of the biggest e-commerce infrastructures in the world: Amazon. We used specific APIs and dedicated crawlers to extract and collect information about products and their prices over time; moreover, we extracted information from social media and Google Trends that we used as exogenous features for the ARIMA model. We fine-tuned the ARIMA parameters, tried different combinations of the exogenous features, and observed through experimental analysis that the presence of Google Trends information significantly improved the predictions. Full article
(This article belongs to the Special Issue 10th Anniversary Feature Papers)
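The role of an exogenous regressor in an ARIMA-style model can be seen in a stripped-down sketch: an AR(1) term plus a Google-Trends-like exogenous series, fitted by ordinary least squares. This is a plain ARX regression for illustration, not the ARIMA estimation used in the paper; in practice a library routine (e.g. statsmodels' SARIMAX with its `exog` argument) would be used:

```python
def fit_arx1(y, x):
    """Fit y[t] = a*y[t-1] + b*x[t] by least squares (normal equations).

    A minimal stand-in for ARIMAX estimation: `x` plays the role of an
    exogenous feature such as a Google Trends interest series.
    """
    # Regressors: lagged target and the exogenous signal.
    Y = y[1:]
    L = y[:-1]   # y[t-1]
    X = x[1:]    # x[t]
    # Normal equations for the two coefficients.
    sll = sum(l * l for l in L)
    sxx = sum(v * v for v in X)
    slx = sum(l * v for l, v in zip(L, X))
    sly = sum(l * t for l, t in zip(L, Y))
    sxy = sum(v * t for v, t in zip(X, Y))
    det = sll * sxx - slx * slx
    a = (sly * sxx - sxy * slx) / det
    b = (sxy * sll - sly * slx) / det
    return a, b
```

A one-step forecast is then `a * y[-1] + b * x_next`, which makes concrete how a trend signal shifts the predicted price.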

Open AccessArticle A Real Case of Implementation of the Future 5G City
Future Internet 2019, 11(1), 4; https://doi.org/10.3390/fi11010004
Received: 26 November 2018 / Revised: 14 December 2018 / Accepted: 20 December 2018 / Published: 22 December 2018
Viewed by 401 | PDF Full-text (1477 KB) | HTML Full-text | XML Full-text
Abstract
The fifth generation (5G) of wireless communication systems is considered the key technology to enable a wide range of application scenarios and the effective spreading of the smart city concept. Vertical business use cases, specifically designed for the future 5G city, will have
[...] Read more.
The fifth generation (5G) of wireless communication systems is considered the key technology for enabling a wide range of application scenarios and the effective spreading of the smart city concept. Vertical business use cases, specifically designed for the future 5G city, will have a strong economic and social impact. For this reason, ongoing 5G field trials have to test newly deployed technologies as well as the capability of 5G to create a new digital economy. This paper describes the 5G field trial environment launched in Italy at the end of 2017. The aim is to evaluate the capability of the 5G network to support innovative services, with reference to suitably designed key performance indicators, and to evaluate the opportunities offered by these services. Indeed, vertical business use cases specifically designed for the future 5G city, with a strong economic and social impact, are under implementation and will be evaluated. In particular, the paper provides a detailed description of the deployment of an actual, complete, integrated 5G network. It shows how 5G is an effective enabling technology for a wide range of vertical businesses and use cases. Indeed, its flexibility allows it to satisfy the completely different performance requirements of real services. Some preliminary results, obtained during the first phase, are presented for a smart mobility scenario. Full article
(This article belongs to the Special Issue 10th Anniversary Feature Papers)

Open AccessArticle Security Risk Analysis of LoRaWAN and Future Directions
Future Internet 2019, 11(1), 3; https://doi.org/10.3390/fi11010003
Received: 20 November 2018 / Revised: 17 December 2018 / Accepted: 18 December 2018 / Published: 21 December 2018
Viewed by 394 | PDF Full-text (1111 KB) | HTML Full-text | XML Full-text
Abstract
LoRa (along with its upper-layers definition, LoRaWAN) is one of the most promising Low Power Wide Area Network (LPWAN) technologies for implementing Internet of Things (IoT)-based applications. Despite its popularity, several works in the literature have revealed vulnerabilities and risks regarding
[...] Read more.
LoRa (along with its upper-layers definition, LoRaWAN) is one of the most promising Low Power Wide Area Network (LPWAN) technologies for implementing Internet of Things (IoT)-based applications. Despite its popularity, several works in the literature have revealed vulnerabilities and risks regarding the security of LoRaWAN v1.0 (the first official specification draft). The LoRa Alliance has built upon these findings and introduced several improvements in the security and architecture of LoRa. These efforts resulted in LoRaWAN v1.1, released on 11 October 2017. This work aims at reviewing and clarifying the security aspects of LoRaWAN v1.1. Following ETSI guidelines, we provide a comprehensive security risk analysis of the protocol and discuss several remedies to the security risks described. A threat catalog is presented, along with discussion and analysis of the scale, impact, and likelihood of each threat. To the best of the authors’ knowledge, this work is one of the first of its kind, providing a detailed security risk analysis of the latest version of LoRaWAN. Our analysis highlights important practical threats, such as end-device physical capture, rogue gateways, and self-replay, which require particular attention from developers and organizations implementing LoRa networks. Full article
(This article belongs to the Special Issue IoT Security and Privacy)

Open AccessFeature PaperReview Harnessing Machine Learning for Fiber-Induced Nonlinearity Mitigation in Long-Haul Coherent Optical OFDM
Future Internet 2019, 11(1), 2; https://doi.org/10.3390/fi11010002
Received: 1 October 2018 / Revised: 14 December 2018 / Accepted: 17 December 2018 / Published: 20 December 2018
Viewed by 367 | PDF Full-text (5580 KB) | HTML Full-text | XML Full-text
Abstract
Coherent optical orthogonal frequency division multiplexing (CO-OFDM) has attracted a lot of interest in optical fiber communications due to its simplified digital signal processing (DSP) units, high spectral efficiency, flexibility, and tolerance to linear impairments. However, CO-OFDM’s high peak-to-average power ratio makes it highly vulnerable
[...] Read more.
Coherent optical orthogonal frequency division multiplexing (CO-OFDM) has attracted a lot of interest in optical fiber communications due to its simplified digital signal processing (DSP) units, high spectral efficiency, flexibility, and tolerance to linear impairments. However, CO-OFDM’s high peak-to-average power ratio makes it highly vulnerable to fiber-induced non-linearities. DSP-based machine learning has been considered a promising approach to fiber non-linearity compensation without excessive computational complexity. In this paper, we review the existing machine learning approaches for CO-OFDM in a common framework and survey the progress in this area, with a focus on practical aspects and comparison with benchmark DSP solutions. Full article
(This article belongs to the Special Issue Recent Advances in DSP-Based Optical Communications)

Open AccessFeature PaperArticle Broadening Understanding on Managing the Communication Infrastructure in Vehicular Networks: Customizing the Coverage Using the Delta Network
Future Internet 2019, 11(1), 1; https://doi.org/10.3390/fi11010001
Received: 10 November 2018 / Revised: 3 December 2018 / Accepted: 5 December 2018 / Published: 20 December 2018
Viewed by 275 | PDF Full-text (1419 KB) | HTML Full-text | XML Full-text
Abstract
Over the past few decades, the growth of the urban population has been remarkable. Nowadays, 50% of the population lives in urban areas, and forecasts indicate that by 2050 this number will reach 70%. Today, 64% of all travel made is within urban
[...] Read more.
Over the past few decades, the growth of the urban population has been remarkable. Nowadays, 50% of the population lives in urban areas, and forecasts indicate that by 2050 this number will reach 70%. Today, 64% of all travel takes place within urban environments, and the total amount of urban kilometers traveled is expected to triple by 2050. Thus, seeking novel solutions for urban mobility becomes paramount for 21st-century society. In this work, we discuss the performance of vehicular networks in terms of the Delta Network metric. The Delta Network characterizes the connectivity of the vehicular network through the percentage of travel time in which vehicles are connected to roadside units. This article reviews the concept of the Delta Network and extends its study by presenting a general heuristic, based on the definition of scores, to identify the areas of the road network that should receive coverage. After defining the general heuristic, we show how small changes in the score computation can generate very distinct (and interesting) patterns of coverage, each one suited to a given scenario. To exemplify such behavior, we propose three deployment strategies based on simply changing the computation of scores. We compare the proposed strategies to the intuitive strategy of allocating communication units at the most popular zones of the road network. Experiments show that the strategies derived from the general heuristic provide higher coverage than the intuitive strategy when using the same number of communication devices. Moreover, the resulting pattern of coverage is very interesting, with roadside units deployed in a circular pattern around the traffic epicenter. Full article
(This article belongs to the Special Issue 5G Network Solutions for Automotive Services)
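The score-based deployment idea in the abstract can be sketched as a greedy selection: each road-network cell gets a score, and roadside units go to the top-scoring cells. The visit-count score below corresponds to the intuitive popularity baseline the paper compares against; the paper's three strategies would correspond to other `score` functions (an assumption of this sketch), and the Delta metric is approximated here by covered cells over total cells traveled rather than actual travel time:

```python
def place_rsus(trajectories, num_rsus, score=None):
    """Pick the `num_rsus` road-network cells with the highest score.

    `trajectories` is a list of cell-id sequences (one per vehicle trip).
    The default score is plain visit popularity; alternative deployment
    strategies amount to swapping in a different score function.
    """
    visits = {}
    for trip in trajectories:
        for cell in trip:
            visits[cell] = visits.get(cell, 0) + 1
    score = score or (lambda cell: visits[cell])
    ranked = sorted(visits, key=score, reverse=True)
    return set(ranked[:num_rsus])

def delta(trajectories, rsus):
    """Delta Network metric, approximated as the fraction of traveled
    cells that fall inside the coverage of some roadside unit."""
    covered = total = 0
    for trip in trajectories:
        for cell in trip:
            total += 1
            covered += cell in rsus
    return covered / total if total else 0.0
```

Comparing `delta` for two placements with the same `num_rsus` reproduces, in miniature, the experiment the abstract describes: same budget of communication devices, different score function, different coverage.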

Future Internet EISSN 1999-5903 Published by MDPI AG, Basel, Switzerland