
Table of Contents

Future Internet, Volume 11, Issue 2 (February 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: Vehicle-to-Vehicle (V2V), Vehicle-to-Infrastructure (V2I), and Vehicle-to-Everything (V2X) [...]
Displaying articles 1-28
Open Access Article: Embedded Deep Learning for Ship Detection and Recognition
Future Internet 2019, 11(2), 53; https://doi.org/10.3390/fi11020053
Received: 31 December 2018 / Revised: 27 January 2019 / Accepted: 31 January 2019 / Published: 21 February 2019
Viewed by 384 | PDF Full-text (1807 KB) | HTML Full-text | XML Full-text
Abstract
Ship detection and recognition are important for smart monitoring of ships in order to manage port resources effectively. However, this is challenging due to complex ship profiles, ship backgrounds, object occlusion, variations in weather and light conditions, and other issues. It is also expensive to transmit monitoring video as a whole, especially if the port is not in a rural area. In this paper, we propose an on-site processing approach called Embedded Ship Detection and Recognition using Deep Learning (ESDR-DL). In ESDR-DL, the video stream is processed using embedded devices, on which we run a two-stage neural network named DCNet, composed of a DNet for ship detection and a CNet for ship recognition. We have extensively evaluated ESDR-DL in terms of both accuracy and efficiency. ESDR-DL is deployed at the Dongying port of China, where it has been running for over a year, demonstrating that it works reliably in practical usage.
(This article belongs to the Special Issue Innovative Topologies and Algorithms for Neural Networks)

Open Access Article: Sentiment Analysis Based Requirement Evolution Prediction
Future Internet 2019, 11(2), 52; https://doi.org/10.3390/fi11020052
Received: 12 January 2019 / Revised: 2 February 2019 / Accepted: 11 February 2019 / Published: 21 February 2019
Viewed by 358 | PDF Full-text (796 KB) | HTML Full-text | XML Full-text
Abstract
To help product developers capture varying user requirements and support their feature evolution process, predicting requirements evolution from massive review texts is of great importance. The proposed framework combines a supervised deep learning neural network with an unsupervised hierarchical topic model to analyze user reviews automatically for product feature requirements evolution prediction. The approach discovers hierarchical product feature requirements from the hierarchical topic model and identifies their sentiment using Long Short-Term Memory (LSTM) with word embedding. This not only models hierarchical product requirement features from general to specific, but also identifies sentiment orientation so as to better correspond to the different hierarchies of product features. The evaluation and experimental results show that the proposed approach is effective and feasible.
(This article belongs to the Special Issue Future Intelligent Systems and Networks 2019)

Open Access Article: A Fusion Load Disaggregation Method Based on Clustering Algorithm and Support Vector Regression Optimization for Low Sampling Data
Future Internet 2019, 11(2), 51; https://doi.org/10.3390/fi11020051
Received: 11 December 2018 / Revised: 17 January 2019 / Accepted: 21 January 2019 / Published: 19 February 2019
Viewed by 415 | PDF Full-text (1124 KB) | HTML Full-text | XML Full-text
Abstract
In order to achieve more efficient energy consumption, accurate and detailed information on how power is consumed is crucial. Electricity details benefit both market utilities and power consumers. Non-intrusive load monitoring (NILM), a novel and economical technology, obtains single-appliance power consumption through a single total power meter. Focusing on load disaggregation with low hardware costs, this paper proposes a load disaggregation method for low-sampling-rate data from smart meters based on a clustering algorithm and support vector regression optimization. The approach combines the k-median algorithm and dynamic time warping to identify the operating appliance, and retrieves single-appliance energy consumption from an aggregate smart meter signal via optimized support vector regression (OSVR). Experiments showed that the technique can recognize multiple devices switching on at the same time using low-frequency data and achieves high load disaggregation performance. The proposed method employs low-sampling-rate data acquired by smart meters without installing extra measurement equipment, which lowers hardware cost and makes it suitable for smart grid environments.
(This article belongs to the Section Smart System Infrastructures and Cybersecurity)
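The identification step in the abstract above pairs k-median clustering with dynamic time warping (DTW) to match low-frequency power signatures. As an illustration of the DTW building block only (a generic textbook version, not the authors' implementation):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D power profiles."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j]: best alignment cost of a[:i] against b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Identical on/off shapes shifted in time still align perfectly:
print(dtw_distance([0, 0, 5, 5, 0], [0, 5, 5, 0, 0]))  # -> 0.0
```

Because DTW tolerates time shifts, an appliance signature can be matched even when the low sampling rate blurs exactly when the device switched on.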

Open Access Article: Minimum Viable Products for Internet of Things Applications: Common Pitfalls and Practices
Future Internet 2019, 11(2), 50; https://doi.org/10.3390/fi11020050
Received: 28 January 2019 / Revised: 12 February 2019 / Accepted: 14 February 2019 / Published: 18 February 2019
Viewed by 398 | PDF Full-text (2001 KB) | HTML Full-text | XML Full-text
Abstract
Internet of Things applications are not only a new opportunity for digital businesses but also a major driving force for the modification and creation of software systems in all industries. Compared to other types of software-intensive products, the development of Internet of Things applications lacks a systematic approach and guidelines. This paper aims at understanding the common practices and challenges among start-up companies developing Internet of Things products. A qualitative study was conducted with data from twelve semi-structured interviews. A thematic analysis reveals common types of Minimum Viable Products, prototyping techniques, and production concerns among early-stage hardware start-ups. We found that hardware start-ups go through an incremental prototyping process toward production, with progress marked by a transition from a focus on speed to a focus on quality. Hardware start-ups rely heavily on third-party vendors for both development speed and final product quality. We identified 24 challenges related to management, requirements, design, implementation, and testing. Internet of Things entrepreneurs should be aware of the relevant pitfalls and manage both internal and external risks.
(This article belongs to the Section Internet of Things)

Open Access Article: A Multi-Agent Architecture for Data Analysis
Future Internet 2019, 11(2), 49; https://doi.org/10.3390/fi11020049
Received: 20 December 2018 / Revised: 11 February 2019 / Accepted: 13 February 2019 / Published: 18 February 2019
Viewed by 406 | PDF Full-text (1652 KB) | HTML Full-text | XML Full-text
Abstract
ActoDatA (Actor Data Analysis) is an actor-based software library for the development of distributed data mining applications. It provides a multi-agent architecture with a set of predefined and configurable agents performing the typical tasks of data mining applications. In particular, its architecture can manage different users' applications; it maintains a high level of execution quality by distributing the agents of the applications over a dynamic set of computational nodes. Moreover, it provides reports about the analysis results and the collected data, which can be accessed through either a web browser or a dedicated mobile app. After an introduction to the actor model and the software framework used for implementing the library, this article outlines the main features of ActoDatA and presents its experimentation in some well-known data analysis domains.
(This article belongs to the Special Issue 10th Anniversary Feature Papers)

Open Access Article: Vehicle Politeness in Driving Situations
Future Internet 2019, 11(2), 48; https://doi.org/10.3390/fi11020048
Received: 6 January 2019 / Revised: 12 February 2019 / Accepted: 13 February 2019 / Published: 16 February 2019
Viewed by 417 | PDF Full-text (1215 KB) | HTML Full-text | XML Full-text
Abstract
Future vehicles are becoming more like driving partners than mere machines. With the application of advanced information and communication technologies (ICTs), vehicles perform driving tasks while drivers monitor the functioning states of vehicles. This change in interaction requires deliberate consideration of how vehicles should present driving-related information. As a way of encouraging drivers to more readily accept instructions from vehicles, we suggest the use of social rules, such as politeness, in human-vehicle interaction. In a 2 × 2 between-subjects experiment, we test the effects of vehicle politeness (plain vs. polite) on drivers' interaction experiences in two operation situations (normal vs. failure). The results indicate that vehicle politeness improves the interaction experience in normal working situations but impedes it in failure situations. Specifically, in normal situations, vehicles with polite instructions are rated highly for social presence, politeness, satisfaction, and intention to use. Theoretical and practical implications for politeness research and speech interaction design are discussed.
(This article belongs to the Special Issue 10th Anniversary Feature Papers)

Open Access Article: Joint Optimal Power Allocation and Relay Selection Scheme in Energy Harvesting Two-Way Relaying Network
Future Internet 2019, 11(2), 47; https://doi.org/10.3390/fi11020047
Received: 8 January 2019 / Revised: 29 January 2019 / Accepted: 14 February 2019 / Published: 15 February 2019
Viewed by 420 | PDF Full-text (1663 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we propose a joint power allocation, time switching (TS) factor, and relay selection scheme for an energy harvesting two-way relaying communication network (TWRN), where two transceivers exchange information with the help of a wireless-powered relay. With the TS architecture at the relay node, additional time slots are needed for energy transmission, which reduces the transmission rate. Thus, we propose a joint resource allocation algorithm to maximize the max-min bidirectional instantaneous information rate. To solve the original non-convex optimization problem, the objective function is decomposed into three sub-problems that are solved sequentially. Closed-form solutions for the transmit power of the two sources and the optimal TS factor are obtained by information rate balancing and the proposed time allocation scheme, respectively. Finally, the optimal relay node is selected. Simulation results show that the proposed algorithm outperforms both traditional schemes and the power-splitting (PS) scheme.

Open Access Article: Efficient Tensor Sensing for RF Tomographic Imaging on GPUs
Future Internet 2019, 11(2), 46; https://doi.org/10.3390/fi11020046
Received: 4 January 2019 / Revised: 31 January 2019 / Accepted: 11 February 2019 / Published: 15 February 2019
Viewed by 399 | PDF Full-text (612 KB) | HTML Full-text | XML Full-text
Abstract
Radio-frequency (RF) tomographic imaging is a promising technique for inferring multi-dimensional physical space by processing RF signals traversing a region of interest. Tensor-based approaches to tomographic imaging are superior at detecting objects within higher-dimensional spaces. The recently proposed tensor sensing approach based on the transform tensor model achieves a lower error rate and faster speed than the previous tensor-based compressed sensing approach. However, the running time of the tensor sensing approach increases exponentially with the dimension of the tensors, making it impractical for big tensors. In this paper, we address this problem by exploiting massively parallel GPUs. We design, implement, and optimize the tensor sensing approach on an NVIDIA Tesla GPU and evaluate the performance in terms of running time and recovery error rate. Experimental results show that our GPU tensor sensing is as accurate as the CPU counterpart, with an average of 44.79× and up to 84.70× speedup for varying-sized synthetic tensor data. For smaller IKEA 3D model data, our GPU algorithm achieved a 15.374× speedup over the CPU tensor sensing. We further encapsulate the GPU algorithm into an open-source library, called cuTensorSensing (CUDA Tensor Sensing), which can be used for efficient RF tomographic imaging.
(This article belongs to the Section Big Data and Augmented Intelligence)

Open Access Article: Tooth-Marked Tongue Recognition Using Gradient-Weighted Class Activation Maps
Future Internet 2019, 11(2), 45; https://doi.org/10.3390/fi11020045
Received: 11 January 2019 / Revised: 9 February 2019 / Accepted: 13 February 2019 / Published: 15 February 2019
Viewed by 391 | PDF Full-text (6161 KB) | HTML Full-text | XML Full-text
Abstract
The tooth-marked tongue is an important indicator in traditional Chinese medical diagnosis. However, the clinical competence of tongue diagnosis depends on the experience and knowledge of the practitioner. Because tongues vary widely in characteristics such as color and shape, tooth-marked tongue recognition is challenging. Most existing methods focus on partial concave features and use specific threshold values to classify the tooth-marked tongue; they lose the overall tongue information and lack generalizability and interpretability. In this paper, we address these problems by proposing a visual explanation method that takes the entire tongue image as input, uses a convolutional neural network to extract features (instead of setting a fixed threshold artificially), classifies the tongue, and produces a coarse localization map highlighting tooth-marked regions using Gradient-weighted Class Activation Mapping. Experimental results demonstrate the effectiveness of the proposed method.
(This article belongs to the Special Issue Innovative Topologies and Algorithms for Neural Networks)
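At its core, Gradient-weighted Class Activation Mapping pools the class-score gradients over each feature map to obtain per-map weights, then takes a ReLU of the weighted sum of the maps. A NumPy sketch of that final combination step only (the feature maps and gradients would come from a CNN framework; the arrays below are placeholders):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """feature_maps, gradients: (K, H, W) arrays for the last conv layer.
    Returns an (H, W) localization map highlighting class-relevant regions."""
    # alpha_k: global-average-pool the gradients over the spatial dimensions
    weights = gradients.mean(axis=(1, 2))              # shape (K,)
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum -> (H, W)
    cam = np.maximum(cam, 0)                           # ReLU keeps positive evidence
    if cam.max() > 0:
        cam /= cam.max()                               # normalize to [0, 1]
    return cam

# Toy example: only the first map receives gradient, so only it survives.
maps = np.stack([np.eye(2), np.ones((2, 2))])
grads = np.stack([np.ones((2, 2)), np.zeros((2, 2))])
print(grad_cam(maps, grads))  # the diagonal of the first map is highlighted
```

The resulting map can be upsampled to the tongue image's resolution and overlaid as a heatmap, which is what makes the classification interpretable.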

Open Access Article: BlackWatch: Increasing Attack Awareness within Web Applications
Future Internet 2019, 11(2), 44; https://doi.org/10.3390/fi11020044
Received: 15 January 2019 / Revised: 10 February 2019 / Accepted: 11 February 2019 / Published: 15 February 2019
Viewed by 560 | PDF Full-text (1421 KB) | HTML Full-text | XML Full-text
Abstract
Web applications are relied upon by many for the services they provide, so it is essential that they implement appropriate security measures to prevent security incidents. Currently, web applications focus resources on the preventative side of security. While prevention is an essential part of the security process, developers must also build a level of attack awareness into their web applications. Being able to detect when an attack is occurring allows an application to execute responses against malicious users in an attempt to slow down or deter their attacks. This research seeks to improve web application security by identifying malicious behavior from within the context of web applications using our tool, BlackWatch. The tool is a Python-based application which analyzes suspicious events occurring within client web applications, with the objective of identifying malicious patterns of behavior. This approach avoids issues typically encountered with traditional web application firewalls. Based on the results of a preliminary study, BlackWatch was effective at detecting attacks from both authenticated and unauthenticated users. Furthermore, user tests with developers indicated that BlackWatch was user-friendly and easy to integrate into existing applications. Future work will develop the BlackWatch solution further for public release.
(This article belongs to the Section Smart System Infrastructures and Cybersecurity)

Open Access Review: Consistency Models of NoSQL Databases
Future Internet 2019, 11(2), 43; https://doi.org/10.3390/fi11020043
Received: 30 December 2018 / Revised: 2 February 2019 / Accepted: 11 February 2019 / Published: 14 February 2019
Viewed by 443 | PDF Full-text (2119 KB) | HTML Full-text | XML Full-text
Abstract
The Internet has become so widespread that the most popular websites are accessed by hundreds of millions of people on a daily basis. Monolithic architectures, which were frequently used in the past, were mostly composed of traditional relational database management systems, but they quickly became incapable of sustaining the high data traffic that is common today. Meanwhile, NoSQL databases have emerged to provide properties missing in relational databases, such as schema-less design, horizontal scaling, and eventual consistency. This paper analyzes and compares the consistency model implementations of five popular NoSQL databases: Redis, Cassandra, MongoDB, Neo4j, and OrientDB. All of them offer at least eventual consistency, and some have the option of supporting strong consistency. However, imposing strong consistency results in reduced availability when the system is subject to network partition events.
(This article belongs to the Special Issue 10th Anniversary Feature Papers)
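Some of the compared systems, Cassandra in particular, make this consistency trade-off tunable through read/write quorum sizes: with N replicas, a read of R replicas is guaranteed to see the latest write of W replicas whenever the quorums must overlap, i.e., R + W > N. A toy check of that rule (illustrative only, not any database's actual API):

```python
def quorums_overlap(n, r, w):
    """True if every read quorum of size r intersects every write quorum of
    size w among n replicas -- the classic R + W > N strong-read condition."""
    return r + w > n

# With N = 3 replicas: QUORUM/QUORUM (2 + 2) overlaps, ONE/ONE does not,
# so the latter can return stale data and is only eventually consistent.
print(quorums_overlap(3, 2, 2))  # True
print(quorums_overlap(3, 1, 1))  # False
```

Lowering R and W below the overlap threshold buys availability and latency at the cost of reading stale values, which is exactly the trade-off the review discusses under network partitions.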

Open Access Article: 3D-CNN-Based Fused Feature Maps with LSTM Applied to Action Recognition
Future Internet 2019, 11(2), 42; https://doi.org/10.3390/fi11020042
Received: 20 December 2018 / Revised: 6 February 2019 / Accepted: 6 February 2019 / Published: 13 February 2019
Viewed by 440 | PDF Full-text (3479 KB) | HTML Full-text | XML Full-text
Abstract
Human activity recognition is an active field of research in computer vision with numerous applications. Recently, deep convolutional networks and recurrent neural networks (RNNs) have received increasing attention in multimedia studies and have yielded state-of-the-art results. In this work, we propose a new framework that combines 3D-CNN and LSTM networks. First, we integrate discriminative information from a video into a map called a 'motion map' using a deep 3-dimensional convolutional network (C3D). A motion map and the next video frame can be integrated into a new motion map, and this technique can be trained by increasing the training video length iteratively; the final network can then be used to generate the motion map of the whole video. Next, a linear weighted fusion scheme fuses the network feature maps into spatio-temporal features. Finally, we use a Long Short-Term Memory (LSTM) encoder-decoder for the final predictions. This method is simple to implement and retains discriminative and dynamic information. The improved results on public benchmark datasets prove the effectiveness and practicability of the proposed method.
(This article belongs to the Special Issue Innovative Topologies and Algorithms for Neural Networks)

Open Access Article: A Scheme to Design Community Detection Algorithms in Various Networks
Future Internet 2019, 11(2), 41; https://doi.org/10.3390/fi11020041
Received: 21 December 2018 / Revised: 31 January 2019 / Accepted: 10 February 2019 / Published: 12 February 2019
Viewed by 430 | PDF Full-text (423 KB) | HTML Full-text | XML Full-text
Abstract
Network structures, consisting of nodes and edges, have applications in almost all subjects. A set of nodes is called a community if the nodes have strong interrelations. Industries (including cell phone carriers and online social media companies) need community structures to allocate network resources and provide proper and accurate services. However, most detection algorithms are derived independently, which is arduous and often unnecessary. Although recent research shows that no general detection method serves all purposes, we believe that there is a general procedure for deriving detection algorithms. In this paper, we present such a general scheme. We mainly focus on two types of networks: transmission networks and similarity networks. We reduce them to a unified graph model, based on which we propose a method to define and detect community structures. Finally, we give a demonstration of how our design scheme works.
(This article belongs to the Special Issue 10th Anniversary Feature Papers)
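As a concrete, generic instance of detecting "strong interrelations" in a node-and-edge graph (a common baseline, not the scheme derived in the paper), label propagation repeatedly lets each node adopt the most frequent label among its neighbors:

```python
from collections import Counter

def label_propagation(adj, iterations=10):
    """adj: dict mapping node -> list of neighbors.
    Returns a node -> community-label dict after synchronous propagation."""
    labels = {node: node for node in adj}   # every node starts in its own community
    for _ in range(iterations):
        new = {}
        for node, nbrs in adj.items():
            if not nbrs:                    # isolated node keeps its label
                new[node] = labels[node]
                continue
            counts = Counter(labels[nb] for nb in nbrs)
            top = max(counts.values())
            # deterministic tie-break: smallest label among the most frequent
            new[node] = min(l for l, c in counts.items() if c == top)
        labels = new
    return labels

# Two triangles joined by a single edge fall into two communities:
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(label_propagation(adj))  # nodes 0-2 share one label, nodes 3-5 another
```

Densely connected groups converge to a shared label, which is exactly the "strong interrelations" notion a community definition has to capture.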

Open Access Article: Audio-Visual Genres and Polymediation in Successful Spanish YouTubers
Future Internet 2019, 11(2), 40; https://doi.org/10.3390/fi11020040
Received: 8 January 2019 / Revised: 1 February 2019 / Accepted: 2 February 2019 / Published: 11 February 2019
Viewed by 478 | PDF Full-text (11664 KB) | HTML Full-text | XML Full-text
Abstract
This paper is part of broader research entitled "Analysis of the YouTuber Phenomenon in Spain: An Exploration to Identify the Vectors of Change in the Audio-Visual Market". My main objective was to determine the predominant audio-visual genres among the 10 most influential Spanish YouTubers in 2018. Using a quantitative extrapolation method, I extracted these data from SocialBlade, an independent website whose main objective is to track YouTube statistics. Secondary objectives of this research were to analyze: (1) gender visualization, (2) the originality of these YouTube audio-visual genres with respect to others, and (3) whether YouTube channels form a new audio-visual genre. I quantitatively analyzed these data to determine how these genres are influenced by the presence of polymediation as an integrated communicative environment working in relational terms with other media. My conclusion is that we can speak of a new audio-visual genre which, when connected with polymediation, may present an opportunity not yet fully exploited by successful Spanish YouTubers.
(This article belongs to the Special Issue Future Intelligent Systems and Networks 2019)

Open Access Article: A Mathematical Model for Efficient and Fair Resource Assignment in Multipath Transport
Future Internet 2019, 11(2), 39; https://doi.org/10.3390/fi11020039
Received: 3 January 2019 / Revised: 28 January 2019 / Accepted: 5 February 2019 / Published: 10 February 2019
Viewed by 458 | PDF Full-text (870 KB) | HTML Full-text | XML Full-text
Abstract
Multipath transport protocols aim to increase the throughput of data flows as well as to maintain fairness between users, both crucial factors in maximizing user satisfaction. In this paper, a mixed-integer nonlinear programming (MINLP) formulation is developed which provides an optimal allocation of link capacities in a network to a number of given traffic demands, considering both the maximization of link utilization and fairness between transport-layer data flows or subflows. The solutions of the MINLP formulation are evaluated with respect to their throughput and fairness using well-known metrics from the literature. It is shown that capacity allocation based on network flow fairness achieves better fairness results than the bottleneck-based methods in most cases while yielding the same capacity allocation performance.
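A well-known fairness metric of the kind such evaluations typically use is Jain's index, which scores an allocation between 1/n (one flow takes everything) and 1.0 (perfectly equal shares). The abstract does not name its metrics, so this is purely illustrative:

```python
def jain_index(throughputs):
    """Jain's fairness index of a list of per-flow throughputs.
    1.0 means perfectly equal shares; 1/n means one flow gets everything."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

print(jain_index([10, 10, 10]))  # -> 1.0 (perfectly fair)
print(jain_index([30, 10, 2]))   # < 1.0 (unequal shares score lower)
```

Because the index is scale-invariant, it lets an allocation's fairness be compared independently of its total throughput, which is why the two objectives can be reported separately.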

Open Access Article: Research on a Support System for Automatic Ship Navigation in Fairway
Future Internet 2019, 11(2), 38; https://doi.org/10.3390/fi11020038
Received: 28 December 2018 / Revised: 22 January 2019 / Accepted: 23 January 2019 / Published: 3 February 2019
Viewed by 604 | PDF Full-text (3544 KB) | HTML Full-text | XML Full-text
Abstract
In previous investigations, controllers for the track-keeping of ships were designed under the assumption of constant ship speed. However, when navigating in a fairway area, the ship's speed is usually decreased in preparation for berthing. Existing track-keeping systems, which apply when the ship navigates the open sea at constant speed, cannot be used to navigate the ship in the fairway. In this article, a support system is proposed for ship navigation in the fairway. The system performs three tasks. First, the ship is automatically controlled by regulating the rudder to follow planned tracks. Second, the ship's speed is reduced step by step to approach the berth area at low speed. Finally, at low speed, when the rudder is no longer effective enough to bring the ship's heading to a desired angle, the heading is adjusted by the bow thruster before the control mode is switched to the automatic berthing system. With the proposed system, the individual automatic systems can be combined into a fully automatic system for ship control. To validate the effectiveness of the proposed system for automatic ship navigation in the fairway, numerical simulations were conducted with a training ship model.
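Track-keeping by "regulating the rudder", as described above, is commonly realized with a feedback controller acting on the heading error. A minimal PD heading controller sketch (the gains, limit, and sign convention are arbitrary placeholders, not the paper's design):

```python
def pd_rudder(heading_error, yaw_rate, kp=1.2, kd=8.0, limit=35.0):
    """heading_error = desired - actual heading (deg); yaw_rate in deg/s.
    Returns a rudder command (deg) clamped to the physical rudder limit."""
    cmd = kp * heading_error - kd * yaw_rate  # damp the turn as it develops
    return max(-limit, min(limit, cmd))

print(pd_rudder(10.0, 0.0))   # -> 12.0 (moderate correction)
print(pd_rudder(100.0, 0.0))  # -> 35.0 (saturated at the rudder limit)
```

The saturation term is what breaks down at low speed: the achievable turning moment shrinks with the flow over the rudder, which is why the proposed system hands heading control over to the bow thruster near the berth.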

Open Access Article: Autonomic Network Management and Cross-Layer Optimization in Software Defined Radio Environments
Future Internet 2019, 11(2), 37; https://doi.org/10.3390/fi11020037
Received: 21 December 2018 / Revised: 29 January 2019 / Accepted: 31 January 2019 / Published: 3 February 2019
Viewed by 604 | PDF Full-text (545 KB) | HTML Full-text | XML Full-text
Abstract
The demand for Autonomic Network Management (ANM) and optimization is as intense as ever, even though significant research has been devoted to this direction. This paper addresses that need in Software Defined Radio (SDR) based Cognitive Radio Networks (CRNs). We propose a new framework for ANM and network reconfiguration combining Software Defined Networks (SDN) with SDR via Network Function Virtualization (NFV) enabled Virtual Utility Functions (VUFs). This is the first approach combining ANM with SDR and SDN via NFV, demonstrating how these state-of-the-art technologies can be combined to achieve reconfiguration flexibility, improved performance, and efficient use of available resources. To show the feasibility of the proposed framework, we implemented its main functionalities in a cross-layer resource allocation mechanism for CRNs over real SDR testbeds provided by the Orchestration and Reconfiguration Control Architecture (ORCA) EU project. We demonstrate the efficacy of our framework and, based on the obtained results, identify aspects that can be further investigated to improve the applicability and performance of the broader framework.
Open AccessFeature PaperReview Interoperability at the Time of Industry 4.0 and the Internet of Things
Future Internet 2019, 11(2), 36; https://doi.org/10.3390/fi11020036
Received: 31 December 2018 / Revised: 28 January 2019 / Accepted: 1 February 2019 / Published: 3 February 2019
Viewed by 734 | PDF Full-text (322 KB) | HTML Full-text | XML Full-text
Abstract
Industry 4.0 demands dynamic optimization of production lines. These are formed by sets of heterogeneous devices that cooperate towards a shared goal. The Internet of Things can serve as a technology enabler for implementing such a vision. Nevertheless, the domain is struggling to find a shared understanding of the concepts for describing a device. This aspect plays a fundamental role in enabling an “intelligent interoperability” among the sensors and actuators that will constitute a dynamic Industry 4.0 production line. In this paper, we summarize the efforts of academics and practitioners toward describing devices in order to enable dynamic reconfiguration by machines or humans. We also propose a set of concepts for describing devices, and we analyze how present initiatives cover these aspects. Full article
(This article belongs to the Special Issue 10th Anniversary Feature Papers)
Open AccessReview Percolation and Internet Science
Future Internet 2019, 11(2), 35; https://doi.org/10.3390/fi11020035
Received: 29 December 2018 / Revised: 27 January 2019 / Accepted: 29 January 2019 / Published: 2 February 2019
Viewed by 565 | PDF Full-text (4161 KB) | HTML Full-text | XML Full-text
Abstract
Percolation, in its most general interpretation, refers to the “flow” of something (a physical agent, data, or information) in a network, possibly accompanied by some nonlinear dynamical processes on the network nodes (sometimes denoted reaction–diffusion systems, voter or opinion formation models, etc.). Originating in the domain of theoretical and condensed-matter physics, it has many applications in epidemiology, sociology and, of course, computer and Internet sciences. In this review, we illustrate some aspects of percolation theory and its generalization, cellular automata, and briefly discuss their relationship with equilibrium systems (Ising and Potts models). We present a model of opinion spreading, the role of network topology in inducing coherent oscillations, and the influence (and advantages) of risk perception in stopping epidemics. The models and computational tools briefly presented here have applications to the filtering of tainted information in automatic trading. Finally, we introduce the open problem of controlling percolation and other processes on distributed systems. Full article
(This article belongs to the Special Issue 10th Anniversary Feature Papers)
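The “flow” on a network that the review above describes can be illustrated by the classic 2-D site percolation experiment. This sketch is ours, not taken from the paper: each grid site is open with probability p, and we test whether an open path spans the lattice from top to bottom; the spanning probability jumps sharply near the well-known square-lattice threshold of about 0.593.

```python
import random
from collections import deque

def percolates(n, p, seed=0):
    """Site percolation on an n x n grid: each site is open with
    probability p; return True if an open path connects top to bottom."""
    rng = random.Random(seed)
    open_site = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    seen = [[False] * n for _ in range(n)]
    # Start a breadth-first search from every open site in the top row.
    queue = deque((0, c) for c in range(n) if open_site[0][c])
    for _, c in queue:
        seen[0][c] = True
    while queue:
        r, c = queue.popleft()
        if r == n - 1:          # reached the bottom row: spanning cluster
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and open_site[nr][nc] and not seen[nr][nc]:
                seen[nr][nc] = True
                queue.append((nr, nc))
    return False

# Estimate the spanning probability below, near, and above the threshold.
for p in (0.4, 0.59, 0.8):
    hits = sum(percolates(30, p, seed=s) for s in range(50))
    print(p, hits / 50)
```

Running this shows the characteristic transition: almost no spanning clusters at p = 0.4, almost certain spanning at p = 0.8.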
Open AccessArticle Fog vs. Cloud Computing: Should I Stay or Should I Go?
Future Internet 2019, 11(2), 34; https://doi.org/10.3390/fi11020034
Received: 1 December 2018 / Revised: 31 December 2018 / Accepted: 11 January 2019 / Published: 2 February 2019
Viewed by 585 | PDF Full-text (2318 KB) | HTML Full-text | XML Full-text
Abstract
In this article, we work toward an answer to the question “is it worth processing a data stream on the device that collected it, or should we send it somewhere else?”. As is often the case in computer science, the response is “it depends”. To find out in which cases it is more profitable to stay on the device (which is part of the fog) or to go to a different one (for example, a device in the cloud), we propose two models that are intended to help the user evaluate the cost of performing a certain computation in the fog or sending all the data to be handled by the cloud. In our generic mathematical model, the user can define a cost type (e.g., number of instructions, execution time, energy consumption) and plug in values to analyze test cases. As filters have a very important role in the future of the Internet of Things and can be implemented as lightweight programs capable of running on resource-constrained devices, this kind of procedure is the main focus of our study. Furthermore, our visual model guides users in their decision by aiding the visualization of the proposed linear equations and their slopes, which allows them to find out whether fog or cloud computing is more profitable for their specific scenario. We validated our models by analyzing four benchmark instances (two applications using two different sets of parameters each) executed on five datasets. We use execution time and energy consumption as the cost types for this investigation. Full article
(This article belongs to the Special Issue Selected Papers from INTESA Workshop 2018)
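The abstract’s linear cost equations can be sketched as follows. All parameter values here are invented for illustration, not taken from the paper: each option’s total cost is a line in the number of processed items, the slopes are the per-item costs, and the cheaper option switches where the two lines cross.

```python
# Illustrative per-item and fixed costs (e.g., in millijoules) -- our values.
LOCAL = 5.0       # cost to filter one item on the fog device itself
TX_SETUP = 200.0  # fixed cost to open the connection to the cloud
TX = 3.0          # cost to transmit one item to the cloud
REMOTE = 1.0      # cost to process one item in the cloud

def fog_cost(n):
    """Total cost of staying: process all n items locally."""
    return n * LOCAL

def cloud_cost(n):
    """Total cost of going: connect once, then send and process n items."""
    return TX_SETUP + n * (TX + REMOTE)

for n in (10, 100, 1000):
    print(n, "stay (fog)" if fog_cost(n) <= cloud_cost(n) else "go (cloud)")
# -> fog wins for small n (the setup cost dominates), cloud wins for large n
#    (its smaller slope dominates); the lines cross at n = 200 here.
```

With these numbers the break-even point is 5n = 200 + 4n, i.e., n = 200 items, which is exactly the slope-and-intercept comparison the visual model helps the user read off.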
Open AccessArticle Contribution of the Web of Things and of the Opportunistic Computing to the Smart Agriculture: A Practical Experiment
Future Internet 2019, 11(2), 33; https://doi.org/10.3390/fi11020033
Received: 31 December 2018 / Revised: 28 January 2019 / Accepted: 29 January 2019 / Published: 1 February 2019
Viewed by 558 | PDF Full-text (6742 KB) | HTML Full-text | XML Full-text
Abstract
With the emergence of the Internet of Things, environmental sensing has been gaining interest, promising to improve agricultural practices by facilitating decision-making based on gathered environmental data (i.e., weather forecasting, crop monitoring, and soil moisture sensing). Environmental sensing, and by extension what is referred to as precision or smart agriculture, poses new challenges, especially regarding the collection of environmental data in the presence of connectivity disruptions, their gathering, and their exploitation by end-users or by systems that must perform actions according to the values of the collected data. In this paper, we present a middleware platform for the Internet of Things that implements disruption-tolerant opportunistic networking and computing techniques, and that makes it possible to expose and manage physical objects through Web-based protocols, standards, and technologies, thus providing interoperability between objects and creating a Web of Things (WoT). This WoT-based opportunistic computing approach is backed up by a practical experiment whose outcomes are presented in this article. Full article
(This article belongs to the Special Issue WSN and IoT in Smart Agriculture)
Open AccessFeature PaperArticle Important Factors for Improving Google Search Rank
Future Internet 2019, 11(2), 32; https://doi.org/10.3390/fi11020032
Received: 11 December 2018 / Revised: 18 January 2019 / Accepted: 18 January 2019 / Published: 30 January 2019
Viewed by 736 | PDF Full-text (731 KB) | HTML Full-text | XML Full-text
Abstract
The World Wide Web has become an essential tool in people’s daily routine. The fact that it is a convenient means of communication and information search has made it extremely popular. This led companies to start using online advertising by creating corporate websites. With the rapid increase in the number of websites, search engines had to devise algorithms and programs to rank the results of a search and provide users with content relevant to their query. On the other side, developers, in pursuit of the highest rankings in the search engine result pages (SERPs), began to study and observe how search engines work and which factors contribute to higher rankings. The knowledge extracted in this way formed the basis for the profession of Search Engine Optimization (SEO). This paper consists of two parts. The first part performs a literature review of the factors that affect the ranking of websites in the SERPs and highlights the top factors that contribute to better ranking. To achieve this goal, a collection and analysis of academic papers was conducted. According to our research, 24 website characteristics came up as factors affecting a website’s ranking, with the most-referenced being quality and quantity of backlinks, social media support, keyword in the title tag, website structure, website size, loading time, domain age, and keyword density. The second part consists of our own research, conducted manually using the phrases “hotel Athens”, “email marketing”, and “casual shoes”. For each of these keywords, the first 15 Google results were examined against the factors found in the literature review. To measure the significance of each factor, the Spearman correlation was calculated, and every factor was compared individually with the ranking of the results. The findings showed that the top factors contributing to higher rankings are the existence of a website SSL certificate, a keyword in the URL, the quantity of backlinks pointing to a website, the text length, and the domain age, which is not perfectly aligned with what the literature review showed. Full article
(This article belongs to the Special Issue Search Engine Optimization)
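The per-factor Spearman-correlation step described above can be sketched as follows. The helper functions and the sample backlink counts are ours, for illustration only: each factor’s values across the top results are ranked and correlated with the SERP positions.

```python
def rank(values):
    """1-based ranks with average-rank tie handling, as Spearman's rho requires."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                      # extend the run of tied values
        avg_rank = (i + j) / 2 + 1      # average rank over the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation computed on the ranks."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

positions = [1, 2, 3, 4, 5]            # SERP rank of 5 results
backlinks = [900, 700, 650, 300, 120]  # invented backlink counts per result
print(spearman(positions, backlinks))  # strong negative rho (about -1 here):
                                       # more backlinks, better (lower) position
```

A strongly negative rho for a factor means higher factor values go with better (numerically lower) positions, which is how the significance of each of the 24 characteristics can be compared.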
Open AccessArticle Dual-Band Monopole Antenna for RFID Applications
Future Internet 2019, 11(2), 31; https://doi.org/10.3390/fi11020031
Received: 30 December 2018 / Revised: 24 January 2019 / Accepted: 26 January 2019 / Published: 30 January 2019
Viewed by 754 | PDF Full-text (2912 KB) | HTML Full-text | XML Full-text
Abstract
Over the past decade, radio-frequency identification (RFID) technology has attracted significant attention and become very popular in different applications, such as identification, management, and monitoring. In this study, a dual-band microstrip-fed monopole antenna is introduced for RFID applications. The antenna is designed to work in the frequency ranges of 2.2–2.6 GHz and 5.3–6.8 GHz, covering the 2.4/5.8 GHz RFID operation bands. The antenna structure resembles a modified F-shaped radiator. It is printed on an FR-4 dielectric with an overall size of 38 × 45 × 1.6 mm³. Fundamental characteristics of the antenna in terms of return loss, Smith chart, phase, radiation pattern, and antenna gain are investigated, and good results are obtained. Simulations were carried out using computer simulation technology (CST) software. A prototype of the antenna was fabricated and its characteristics were measured. The measured results show good agreement with the simulations. The structure of the antenna is planar, simple to design and fabricate, easy to integrate with RF circuits, and suitable for use in RFID systems. Full article
Open AccessArticle An Investigation into Healthcare-Data Patterns
Future Internet 2019, 11(2), 30; https://doi.org/10.3390/fi11020030
Received: 4 December 2018 / Revised: 16 January 2019 / Accepted: 20 January 2019 / Published: 30 January 2019
Viewed by 586 | PDF Full-text (5134 KB) | HTML Full-text | XML Full-text
Abstract
Visualising complex data provides a more comprehensive medium for conveying knowledge. Within the medical data domain, there is an increasing requirement for valuable and accurate information. Patients need to be confident that their data is being stored safely and securely. As such, it is now becoming necessary to visualise data patterns and trends in real time to identify erratic and anomalous network-access behaviours. In this paper, an investigation into modelling data flow within healthcare infrastructures is presented, where a dataset from a Liverpool-based (UK) hospital is employed as the case study. Specifically, a visualisation of transmission control protocol (TCP) socket connections is put forward as an investigation into the data complexity and user-interaction events within healthcare networks. In addition, a filtering algorithm is proposed for noise reduction in the TCP dataset. Positive results from using this algorithm are apparent on visual inspection, where noise is reduced by up to 89.84%. Full article
(This article belongs to the Special Issue Smart Systems for Healthcare)
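The paper’s filtering algorithm is not reproduced here; as a hedged sketch of the general idea, one simple noise filter drops connection tuples that dominate the trace, on the assumption that such high-frequency repetitive flows are automated chatter rather than user-driven activity. The function name, threshold, and sample records are all invented for illustration.

```python
from collections import Counter

def filter_tcp_noise(connections, threshold=0.05):
    """Keep only (src, dst, port) records whose flow accounts for at most
    `threshold` of all records; dominant repetitive flows are treated as noise.
    Illustrative only -- not the algorithm from the paper."""
    counts = Counter(connections)
    total = len(connections)
    return [c for c in connections if counts[c] / total <= threshold]

# Invented example trace: one chatty monitoring flow dominates the records.
records = (
    [("10.0.0.5", "10.0.0.9", 161)] * 90   # SNMP-style polling, every few seconds
    + [("10.0.0.7", "10.0.0.2", 443)] * 4  # occasional user HTTPS sessions
    + [("10.0.0.8", "10.0.0.3", 22)] * 6   # a short SSH session
)
kept = filter_tcp_noise(records, threshold=0.05)
print(len(records), "->", len(kept))  # 100 -> 4
```

On this toy trace the filter removes 96% of the records, illustrating the scale of reduction the paper reports (up to 89.84% on the real dataset).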
Open AccessArticle My Smartphone tattles: Considering Popularity of Messages in Opportunistic Data Dissemination
Future Internet 2019, 11(2), 29; https://doi.org/10.3390/fi11020029
Received: 21 December 2018 / Revised: 17 January 2019 / Accepted: 22 January 2019 / Published: 29 January 2019
Viewed by 541 | PDF Full-text (3244 KB) | HTML Full-text | XML Full-text
Abstract
Opportunistic networks have recently seen increasing interest in the networking community. They can serve a range of application scenarios, most of them being destination-less, i.e., without a-priori knowledge of who is the final destination of a message. In this paper, we explore the use of data popularity for improving the efficiency of data forwarding in opportunistic networks. Whether a message will become popular or not is not known before disseminating it to users. Thus, popularity needs to be estimated in a distributed manner from local context. We propose Keetchi, a data forwarding protocol based on Q-learning that gives preference to popular data over less popular data. Our extensive simulation comparison between Keetchi and the well-known Epidemic protocol shows that the network overhead of data forwarding can be significantly reduced while keeping the delivery rate the same. Full article
(This article belongs to the Special Issue Opportunistic Networks in Urban Environment)
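As a hedged sketch of the distributed popularity estimation described above (the actual Keetchi update rules are not reproduced here, and all names and constants are ours), a node can maintain a Q-learning-style value per message, nudged toward 1 when neighbours want the message and toward 0 when they do not, and then offer the most popular messages first:

```python
class PopularityEstimator:
    """Per-node, per-message popularity estimate updated from local feedback
    with an exponential-moving-average (Q-learning-style) rule."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha   # learning rate: weight of the newest feedback
        self.value = {}      # message id -> estimated popularity in [0, 1]

    def feedback(self, msg_id, reward):
        """reward ~ 1.0 if a neighbour accepted/kept the message, 0.0 if not."""
        old = self.value.get(msg_id, 0.5)          # unknown messages start neutral
        self.value[msg_id] = old + self.alpha * (reward - old)

    def forwarding_order(self):
        """Offer the most popular messages to encountered neighbours first."""
        return sorted(self.value, key=self.value.get, reverse=True)

node = PopularityEstimator()
for _ in range(5):
    node.feedback("weather-alert", 1.0)   # repeatedly well received
node.feedback("spam", 0.0)                # rejected on first contact
print(node.forwarding_order())            # ['weather-alert', 'spam']
```

Because each node learns only from its own encounters, no global knowledge of message popularity is needed, which matches the destination-less setting of the abstract.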
Open AccessArticle T-Move: A Light-Weight Protocol for Improved QoS in Content-Centric Networks with Producer Mobility
Future Internet 2019, 11(2), 28; https://doi.org/10.3390/fi11020028
Received: 21 November 2018 / Revised: 21 January 2019 / Accepted: 24 January 2019 / Published: 27 January 2019
Viewed by 649 | PDF Full-text (834 KB) | HTML Full-text | XML Full-text
Abstract
Recent interest in applications where content is of primary interest has triggered the exploration of a variety of protocols and algorithms. For such information-centric networks, architectures such as Content-Centric Networking have been proven to deliver good network performance. However, such architectures are still evolving to cater for application-specific requirements. This paper proposes T-Move, a light-weight solution for producer mobility and caching at the edge that is especially suitable for content-centric networks with mobile content producers. T-Move introduces a novel concept called trendiness of data for Content-Centric Networking (CCN)/Named Data Networking (NDN)-based networks. It enhances network performance and quality of service (QoS) using two strategies—cache replacement and proactive content-pushing for handling producer mobility—both based on trendiness. It relies on simple operations, incurs smaller control-message overhead, and is suitable for networks where responses need to be quick. Simulation results using ndnSIM show reduced traffic and content retrieval time, and an increased cache hit ratio, with T-Move when compared to MAP-Me and plain NDN for networks of different sizes and mobility rates. Full article
(This article belongs to the Section Big Data and Augmented Intelligence)
Open AccessArticle An Overview of Vehicular Communications
Future Internet 2019, 11(2), 27; https://doi.org/10.3390/fi11020027
Received: 29 December 2018 / Revised: 17 January 2019 / Accepted: 22 January 2019 / Published: 24 January 2019
Viewed by 779 | PDF Full-text (1468 KB) | HTML Full-text | XML Full-text
Abstract
The transport sector is commonly subject to several issues, such as traffic congestion and accidents. Despite this, in recent years it has also been evolving with regard to cooperation between vehicles. The fundamental objective of this trend is to increase road safety by attempting to anticipate circumstances of potential danger. Vehicle-to-Vehicle (V2V), Vehicle-to-Infrastructure (V2I), and Vehicle-to-Everything (V2X) technologies provide communication models that can be employed by vehicles in different application contexts. The resulting infrastructure is an ad-hoc mesh network whose nodes are not only vehicles but also all mobile devices equipped with wireless modules. The interaction between the multiple connected entities consists of information exchange through the adoption of suitable communication protocols. The main aim of the review carried out in this paper is to examine and assess the most relevant systems, applications, and communication protocols that will distinguish the future road infrastructures used by vehicles. The results of the investigation reveal the real benefits that technological cooperation can bring to road safety. Full article
(This article belongs to the Special Issue 10th Anniversary Feature Papers)
Open AccessArticle A Spatial Prediction-Based Motion-Compensated Frame Rate Up-Conversion
Future Internet 2019, 11(2), 26; https://doi.org/10.3390/fi11020026
Received: 30 December 2018 / Revised: 13 January 2019 / Accepted: 21 January 2019 / Published: 23 January 2019
Viewed by 614 | PDF Full-text (1884 KB) | HTML Full-text | XML Full-text
Abstract
In the Multimedia Internet of Things (IoT), in order to reduce the bandwidth consumption of wireless channels, Motion-Compensated Frame Rate Up-Conversion (MC-FRUC) is often used to support low-bitrate video communication. In this paper, we propose a spatial prediction algorithm that improves the performance of MC-FRUC. The core of the proposed algorithm is a predictive model that splits a frame into two kinds of blocks: basic blocks and absent blocks. An improved bilateral motion estimation is then proposed to compute the Motion Vectors (MVs) of the basic blocks. Finally, exploiting the spatial correlation of the Motion Vector Field (MVF), the MV of an absent block is predicted from the MVs of its neighboring basic blocks. Experimental results show that the proposed spatial prediction algorithm improves both the objective and subjective quality of the interpolated frame, with low computational complexity. Full article
(This article belongs to the Special Issue Multimedia Internet of Things (IoT) in Smart Environment)
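The final step above, predicting an absent block’s MV from its basic neighbours, can be sketched with a component-wise median, a common robust choice for exploiting MVF spatial correlation; the exact prediction rule used by the paper may differ.

```python
def median(values):
    """Median of a list of numbers (average of the two middle values if even)."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def predict_absent_mv(neighbour_mvs):
    """Predict an absent block's motion vector as the component-wise median
    of the MVs (dx, dy) of its surrounding basic blocks. The median resists
    outlier MVs from mismatched neighbours better than a plain average."""
    xs = [mv[0] for mv in neighbour_mvs]
    ys = [mv[1] for mv in neighbour_mvs]
    return (median(xs), median(ys))

# Three basic neighbours agree on rightward motion; one is an outlier.
print(predict_absent_mv([(4, 0), (5, 0), (4, 1), (30, -20)]))  # (4.5, 0.0)
```

Note how the outlier (30, -20) barely affects the prediction, which is why median-style predictors are popular for MV interpolation.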
Future Internet EISSN 1999-5903 Published by MDPI AG, Basel, Switzerland