
Future Internet, Volume 14, Issue 2 (February 2022) – 38 articles

Cover Story:

This work presents a distance-based agglomerative clustering algorithm for inferential monitoring based on end-to-end measurements obtained at the network edge. In particular, we extend the bottom-up clustering method by incorporating the use of nearest neighbors (NN) chains and reciprocal nearest neighbors (RNNs) to infer the topology of the examined network (i.e., the logical and the physical routing trees) and estimate the link performance characteristics (i.e., loss rate and jitter). Going beyond network monitoring in itself, we design and implement a tangible application of the proposed algorithm that combines network tomography with change point analysis to realize performance anomaly detection. The experimental validation of our ideas takes place in a fully controlled large-scale testbed over bare-metal hardware.
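The bottom-up clustering with NN chains and reciprocal nearest neighbors that the cover story describes can be sketched generically. The toy below is a hypothetical illustration (WPGMA-style linkage over a plain distance matrix), not the paper's tomography-specific algorithm:

```python
import numpy as np

def rnn_chain_clustering(dist, n_clusters):
    """Agglomerative clustering via nearest-neighbour chains: follow each
    cluster's nearest neighbour until two clusters are mutual (reciprocal)
    nearest neighbours, then merge that pair."""
    n = dist.shape[0]
    D = dist.astype(float).copy()
    np.fill_diagonal(D, np.inf)
    active = set(range(n))
    members = {i: [i] for i in range(n)}
    while len(active) > n_clusters:
        chain = [next(iter(active))]
        while True:
            a = chain[-1]
            # nearest active cluster to the chain's tip
            b = min((c for c in active if c != a), key=lambda c: D[a, c])
            if len(chain) >= 2 and b == chain[-2]:
                break  # a and b are reciprocal nearest neighbours
            chain.append(b)
        # merge b into a (WPGMA-style average of cluster distances)
        for c in active:
            if c not in (a, b):
                D[a, c] = D[c, a] = 0.5 * (D[a, c] + D[b, c])
        members[a].extend(members.pop(b))
        active.remove(b)
    return [sorted(members[c]) for c in sorted(active)]

# Four points on a line, two obvious groups
coords = np.array([0.0, 1.0, 10.0, 11.0])
dist = np.abs(coords[:, None] - coords[None, :])
clusters = rnn_chain_clustering(dist, 2)
```

A chain ends exactly when two clusters are each other's nearest neighbour, which is when a merge is safe to perform without scanning all cluster pairs.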

Review
CNN for User Activity Detection Using Encrypted In-App Mobile Data
Future Internet 2022, 14(2), 67; https://doi.org/10.3390/fi14020067 - 21 Feb 2022
Viewed by 833
Abstract
In this study, a simple yet effective framework is proposed to characterize fine-grained in-app user activities performed on mobile applications using a convolutional neural network (CNN). The proposed framework uses a time-window-based approach to split the activity’s encrypted traffic flow into segments, so that in-app activities can be identified by observing only a part of the activity-related encrypted traffic. Matrices were constructed for each encrypted traffic flow segment. These matrices served as input to the CNN model, allowing it to learn to differentiate previously trained (known) from previously untrained (unknown) in-app activities, as well as to recognize the known in-app activity type. The proposed method extracts and selects salient features for encrypted traffic classification. This is the first known approach that filters unknown traffic, with an average accuracy of 88%. Once the unknown traffic is filtered, the classification accuracy of our model reaches 92%. Full article
(This article belongs to the Special Issue Security and Privacy in Blockchains and the IoT)
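The time-window segmentation the abstract describes can be sketched as follows. The window length, matrix shape, and packet-tuple layout are illustrative assumptions, and the CNN itself is omitted:

```python
import numpy as np

# Hypothetical parameters: the paper's exact window length and
# matrix dimensions are not given here.
WINDOW_S, MAX_PKTS = 2.0, 8   # seconds per segment, rows per matrix

def flow_to_matrices(packets):
    """packets: list of (timestamp, size, direction) tuples, with
    direction = +1 uplink / -1 downlink.  Returns one fixed-size
    [MAX_PKTS x 3] matrix per time window, zero-padded, suitable as
    single-channel CNN input."""
    t0 = packets[0][0]
    segments = {}
    for t, size, d in packets:
        key = int((t - t0) // WINDOW_S)
        segments.setdefault(key, []).append((t - t0, size, d))
    mats = []
    for k in sorted(segments):
        m = np.zeros((MAX_PKTS, 3))
        for i, row in enumerate(segments[k][:MAX_PKTS]):
            m[i] = row
        mats.append(m)
    return np.stack(mats)
```

Zero-padding to a fixed row count is what lets segments of varying packet counts share one CNN input shape.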

Article
Personalizing Environmental Awareness through Smartphones Using AHP and PROMETHEE II
Future Internet 2022, 14(2), 66; https://doi.org/10.3390/fi14020066 - 21 Feb 2022
Viewed by 731
Abstract
Environmental awareness refers to the understanding of the importance of protecting the natural environment. Digital technologies can play an important role in raising awareness of environmental issues. In view of this compelling need, this paper presents a novel way to promote environmental awareness with the use of smartphones. To achieve this, it employs personalization techniques, and specifically the Analytic Hierarchy Process (AHP) and PROMETHEE II. In more detail, the mobile application incorporates a user model that holds information, such as location (city, mountain, sea, etc.), age, interests, needs and indicators of waste management, economy of natural resources, general environmental protection, and biodiversity. At the first interaction of the user with the application, the user model is initialized; then, the system uses AHP and PROMETHEE II to provide personalized advice to users in order to help them raise their environmental awareness. The criteria, used to evaluate environmental advice, include the current location, living environment, habits, interests, needs, age, and seasonal suitability of the user. The novelty of this paper is the combination of AHP and PROMETHEE II for personalizing the environmental awareness using mobile technologies, taking into consideration the user profile as well as the surrounding area where the user is at the time that the advice is provided. The presented application has been evaluated regarding the system usefulness and environmental awareness. The findings indicate the high acceptance of this approach and its positive impact on users’ attitude and behavior with regard to reducing their environmental footprint. Full article
(This article belongs to the Special Issue Advances and Perspectives in Human-Computer Interaction)
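The AHP half of the ranking pipeline reduces to a principal-eigenvector computation on a pairwise-comparison matrix. A minimal sketch follows; the criterion names and judgment values are hypothetical, and the PROMETHEE II outranking step is omitted:

```python
import numpy as np

def ahp_weights(P):
    """Derive criterion weights from an AHP pairwise-comparison matrix P
    (P[i, j] = how strongly criterion i is preferred over j) via the
    principal eigenvector, normalised to sum to 1."""
    vals, vecs = np.linalg.eig(P)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    return w / w.sum()

# Toy example with three advice criteria (hypothetical names):
# location fit, seasonal suitability, user interests.
P = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])
w = ahp_weights(P)
```

The resulting weight vector would then feed the PROMETHEE II preference functions to rank candidate pieces of environmental advice.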

Article
Flow Scheduling in Data Center Networks with Time and Energy Constraints: A Software-Defined Network Approach
Future Internet 2022, 14(2), 65; https://doi.org/10.3390/fi14020065 - 21 Feb 2022
Viewed by 606
Abstract
Flow scheduling in Data Center Networks (DCNs) is a hot topic as cloud computing and virtualization become the dominant paradigm amid the increasing demand for digital services. Within the cost of a DCN, the energy demand of the network infrastructure represents an important portion. When flows have temporal restrictions, scheduling with path selection to reduce the number of active switching devices is an NP-hard problem, as proven in the literature. In this paper, a heuristic approach to scheduling real-time flows in data centers is proposed, optimizing the temporal requirements while reducing the energy consumption of the network infrastructure via a proper selection of paths. The experiments show good performance of the solutions found relative to exact-solution approximations based on an integer linear programming model. The programmability of the network switches allows dynamic scheduling of flow paths under software-defined network management. Full article
(This article belongs to the Section Smart System Infrastructure and Applications)
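A deadline-aware, energy-reducing path selection can be sketched greedily. This is a simplified illustration of the general idea (reuse already-active switches whenever the deadline allows), not the paper's heuristic:

```python
def schedule_flows(flows, paths, latency):
    """Greedy heuristic: for each flow (earliest deadline first), among
    candidate paths that meet the deadline, pick the one that activates
    the fewest *new* switches; ties broken by lower latency.
    flows: {flow: deadline}; paths: {flow: [path, ...]} with each path a
    tuple of switch names; latency: {path: end-to-end latency}."""
    active, assignment = set(), {}
    for f, deadline in sorted(flows.items(), key=lambda kv: kv[1]):
        feasible = [p for p in paths[f] if latency[p] <= deadline]
        if not feasible:
            assignment[f] = None          # deadline cannot be met
            continue
        best = min(feasible,
                   key=lambda p: (len(set(p) - active), latency[p]))
        active |= set(best)               # switches now powered on
        assignment[f] = best
    return assignment, active
```

Fewer distinct switches in `active` at the end means fewer powered-on devices, which is the energy term the heuristic is trying to shrink.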

Review
Digital Twin—Cyber Replica of Physical Things: Architecture, Applications and Future Research Directions
Future Internet 2022, 14(2), 64; https://doi.org/10.3390/fi14020064 - 21 Feb 2022
Viewed by 1322
Abstract
The Internet of Things (IoT) connects massive numbers of smart devices to collect big data and carry out the monitoring and control of numerous things in cyber-physical systems (CPS). By leveraging machine learning (ML) and deep learning (DL) techniques to analyze the collected data, physical systems can be monitored and controlled effectively. Along with the development of IoT and data analysis technologies, a number of CPS (smart grid, smart transportation, smart manufacturing, smart cities, etc.) adopt IoT and data analysis technologies to improve their performance and operations. Nonetheless, directly manipulating or updating the real system has inherent risks. Thus, creating a digital clone of a real physical system, denoted as a Digital Twin (DT), is a viable strategy. Generally speaking, a DT is a data-driven software and hardware emulation platform, a cyber replica of a physical system. Meanwhile, a DT describes a specific physical system and tends to achieve the functions and use cases of physical systems. Since a DT is a complex digital system, finding a way to effectively represent a variety of things in a timely and efficient manner poses numerous challenges to networking, computing, and data analytics for the IoT. Furthermore, the design of a DT for IoT systems must consider numerous exceptional requirements (e.g., latency, reliability, safety, scalability, security, and privacy). To address such challenges, the thoughtful design of DTs offers opportunities for novel and interdisciplinary research efforts. To address the aforementioned problems and issues, in this paper, we first review the architectures of DTs, data representation, and communication protocols. We then review existing efforts on applying DTs to IoT data-driven smart systems, including the smart grid, smart transportation, smart manufacturing, and smart cities. Further, we summarize the existing challenges from the CPS, data science, optimization, and security and privacy perspectives. Finally, we outline possible future research directions from the perspectives of performance, new DT-driven services, model and learning, and security and privacy. Full article
(This article belongs to the Special Issue Towards Convergence of Internet of Things and Cyber-Physical Systems)

Article
Exploring the Benefits of Combining DevOps and Agile
Future Internet 2022, 14(2), 63; https://doi.org/10.3390/fi14020063 - 19 Feb 2022
Cited by 1 | Viewed by 1000
Abstract
The combined adoption of Agile and DevOps enables organizations to cope with the increasing complexity of managing customer requirements and requests. It fosters the emergence of a more collaborative and Agile framework to replace the waterfall models applied to the software development flow and the separation of development teams from operations. This study aims to explore the benefits of the combined adoption of both models. A qualitative methodology is adopted, including twelve case studies from international software engineering companies. Thematic analysis is employed to identify the benefits of the combined adoption of both paradigms. The findings reveal twelve benefits, highlighting the automation of processes, improved communication between teams, and reduction in time to market through process integration and shorter software delivery cycles. Although they address different goals and challenges, the Agile and DevOps paradigms, when properly combined and aligned, can offer relevant benefits to organizations. The novelty of this study lies in the systematization of the benefits of the combined adoption of Agile and DevOps, considering multiple perspectives of the software engineering business environment. Full article
(This article belongs to the Special Issue Software Engineering and Data Science)

Article
A Performance Comparison of Different Cloud-Based Natural Language Understanding Services for an Italian e-Learning Platform
Future Internet 2022, 14(2), 62; https://doi.org/10.3390/fi14020062 - 18 Feb 2022
Viewed by 776
Abstract
During the COVID-19 pandemic, the corporate online training sector grew exponentially, and online course providers had to implement innovative solutions to become more efficient and provide a satisfactory service. This paper considers a real case study of implementing a chatbot that answers frequently asked questions from learners on an Italian e-learning platform providing workplace safety courses to several business customers. Having to respond quickly to the increase in activated courses, the company decided to develop a chatbot using a cloud-based service currently available on the market. These services are based on Natural Language Understanding (NLU) engines, which identify information such as entities and intents from the sentences provided as input. To integrate a chatbot into an e-learning platform, we studied the performance of the intent recognition task of the major NLU platforms available on the market in an in-depth comparison, using an Italian dataset provided by the owner of the e-learning platform. We focused on intent recognition, carried out several experiments, and evaluated performance in terms of F-score, error rate, response time, and robustness of all the services selected. The chatbot is currently in production; therefore, we present a description of the system implemented and its results on the original users’ requests. Full article
(This article belongs to the Special Issue Technology Enhanced Learning and Mobile Learning)
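The metrics used to compare the NLU services (F-score and error rate over intent labels) can be computed directly from paired gold/predicted labels; a minimal macro-averaged sketch:

```python
def intent_metrics(y_true, y_pred):
    """Macro-averaged F-score and error rate for an intent-recognition
    run, computed from paired gold and predicted intent labels."""
    labels = sorted(set(y_true))
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    error_rate = sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)
    return sum(f1s) / len(f1s), error_rate
```

Macro averaging weights each intent class equally, which matters when a few frequent intents would otherwise dominate the score.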

Article
IoT Nodes Authentication and ID Spoofing Detection Based on Joint Use of Physical Layer Security and Machine Learning
Future Internet 2022, 14(2), 61; https://doi.org/10.3390/fi14020061 - 17 Feb 2022
Viewed by 698
Abstract
The wide variety of services and applications that shall be supported by future wireless systems will lead to a high amount of sensitive data exchanged via radio, thus introducing a significant challenge for security. Moreover, in new networking paradigms, such as the Internet of Things, traditional methods of security may be difficult to implement due to the radical change of requirements and constraints. In such contexts, physical layer security is a promising additional means to realize communication security with low complexity. In particular, this paper focuses on node authentication and spoofing detection in an actual wireless sensor network (WSN), where multiple nodes communicate with a sink node. Nodes are in fixed positions, but the communication channels vary due to the scatterers’ movement. In the proposed security framework, the sink node is able to perform a continuous authentication of nodes during communication based on wireless fingerprinting. In particular, a machine learning approach is used to classify authorized nodes by identifying specific attributes of their wireless channels. Then, the classification results are compared with the node ID in order to detect whether the message has been generated by a node other than its claimed source. Finally, in order to increase spoofing detection performance in small networks, the use of low-complexity sentinel nodes is proposed. Results show the good performance of the proposed method, which is suitable for actual implementation in a WSN. Full article
(This article belongs to the Special Issue Key Enabling Technologies for Beyond 5G Networks)
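The classify-then-compare-ID step can be sketched with a nearest-centroid classifier standing in for the paper's ML model; the feature layout and node IDs below are illustrative:

```python
import numpy as np

def train_centroids(features, node_ids):
    """Per-node centroid of channel-fingerprint features (a simple
    stand-in for the paper's trained classifier)."""
    ids = sorted(set(node_ids))
    labels = np.array(node_ids)
    return {i: features[labels == i].mean(axis=0) for i in ids}

def authenticate(centroids, feat, claimed_id):
    """Classify the observed channel fingerprint, then raise a spoofing
    alarm when the predicted node differs from the claimed source ID."""
    pred = min(centroids, key=lambda i: np.linalg.norm(feat - centroids[i]))
    return pred, pred != claimed_id   # (predicted node, spoofing alarm)
```

Because the nodes are fixed and only the scatterers move, the per-node channel statistics stay separable enough for this kind of fingerprint comparison.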

Article
Safety Verification of Driving Resource Occupancy Rules Based on Functional Language
Future Internet 2022, 14(2), 60; https://doi.org/10.3390/fi14020060 - 17 Feb 2022
Viewed by 595
Abstract
Autonomous driving is a safety-critical system, and the occupancy of its environmental resources affects driving safety. In view of the lack of safety verification of environmental resource occupancy rules in autonomous driving, this paper proposes a verification method for autonomous driving models based on the functional language CSPM. First, the modeling and verification framework for an autonomous driving model based on CSPM is given. Second, the process algebra definition of CSPM is given. Third, the typical single-loop environment model in autonomous driving is abstracted, and the mapping from the driving model to CSP is described in detail for the driving environment and the typical collision, overtaking, and lane-change scenes involved. Finally, the single-loop autonomous driving model is mapped to CSPM, and the application effect of this method is discussed using the FDR tool. Experiments show that this method can verify the safety of autonomous driving resource occupancy, thereby improving the reliability of the autonomous driving model. Full article
(This article belongs to the Section Big Data and Augmented Intelligence)

Review
Network Function Virtualization and Service Function Chaining Frameworks: A Comprehensive Review of Requirements, Objectives, Implementations, and Open Research Challenges
Future Internet 2022, 14(2), 59; https://doi.org/10.3390/fi14020059 - 15 Feb 2022
Viewed by 1196
Abstract
Network slicing has become a fundamental property for next-generation networks, especially because an inherent part of 5G standardisation is the ability for service providers to migrate some or all of their network services to a virtual network infrastructure, thereby reducing both capital and operational costs. With network function virtualisation (NFV), network functions (NFs) such as firewalls, traffic load balancers, content filters, and intrusion detection systems (IDS) are instantiated on either virtual machines (VMs) or lightweight containers, often chained together to create a service function chain (SFC). In this work, we review the state-of-the-art NFV and SFC implementation frameworks and present a taxonomy of the current proposals. Our taxonomy comprises three major categories based on the primary objectives of each of the surveyed frameworks: (1) resource allocation and service orchestration, (2) performance tuning, and (3) resilience and fault recovery. We also identify some key open research challenges that require further exploration by the research community to achieve scalable, resilient, and high-performance NFV/SFC deployments in next-generation networks. Full article
(This article belongs to the Section Network Virtualization and Edge/Fog Computing)

Article
Flow-Based Programming for Machine Learning
Future Internet 2022, 14(2), 58; https://doi.org/10.3390/fi14020058 - 15 Feb 2022
Viewed by 731
Abstract
Machine Learning (ML) has gained prominence and has tremendous applications in fields like medicine, biology, geography, and astrophysics, to name a few. Arguably, in such areas, it is used by domain experts who are not necessarily skilled programmers. Thus, programming ML applications presents a steep learning curve for these domain experts. To overcome this and foster widespread adoption of ML techniques, we propose to equip them with domain-specific graphical tools. Such tools, based on the principles of the flow-based programming paradigm, would support the graphical composition of ML applications at a higher level of abstraction and auto-generation of target code. Accordingly, (i) we have modelled ML algorithms as composable components; and (ii) described an approach to parse a flow created by connecting several such composable components and use an API-based code generation technique to generate the ML application. To demonstrate the feasibility of our conceptual approach, we have modelled the APIs of Apache Spark ML as composable components and validated the approach in three use-cases. The use-cases are designed to capture the ease of program specification at a higher abstraction level, easy parametrisation of ML APIs, auto-generation of the ML application, and auto-validation of the generated model for better prediction accuracy. Full article
(This article belongs to the Section Big Data and Augmented Intelligence)
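The parse-and-generate step for a component flow can be sketched with a template registry. The component names and call templates below are illustrative (loosely Spark-shaped, not the paper's actual mapping), and the flow is simplified to a linear chain rather than a general dataflow graph:

```python
# Hypothetical component registry: each graphical component maps to an
# API call template filled in with the component's parameters.
REGISTRY = {
    "CsvReader": "df = spark.read.csv({path!r}, header=True)",
    "Tokenizer": "df = Tokenizer(inputCol={inputCol!r}, "
                 "outputCol={outputCol!r}).transform(df)",
    "LogReg":    "model = LogisticRegression(maxIter={maxIter}).fit(df)",
}

def generate(flow):
    """flow: ordered list of (component, params) pairs, as produced by
    parsing the graphical wiring.  Emits target code line by line."""
    lines = []
    for comp, params in flow:
        lines.append(REGISTRY[comp].format(**params))
    return "\n".join(lines)

code = generate([("CsvReader", {"path": "data.csv"}),
                 ("LogReg", {"maxIter": 10})])
```

A full implementation would topologically sort the component graph before emission; the registry pattern is what lets new ML APIs be added without touching the generator.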

Article
Open-Source MQTT-Based End-to-End IoT System for Smart City Scenarios
Future Internet 2022, 14(2), 57; https://doi.org/10.3390/fi14020057 - 15 Feb 2022
Cited by 1 | Viewed by 1057
Abstract
Many innovative services are emerging based on Internet of Things (IoT) technology, aiming at fostering better sustainability of our cities. New solutions integrating Information and Communications Technologies (ICTs) with sustainable transport media are encouraged by several public administrations in the so-called Smart City scenario, where heterogeneous users on city roads call for safer mobility. Among several possible applications, there has recently been a lot of attention on so-called Vulnerable Road Users (VRUs), such as pedestrians or bikers. They can be equipped with wearable sensors that communicate their data through a chain of devices towards the cloud for agile and effective control of their mobility. This work describes a complete end-to-end IoT system implemented through the integration of different complementary technologies, whose main purpose is to monitor the information related to road users generated by wearable sensors. The system has been implemented using an ESP32 microcontroller connected to the sensors and communicating through a Bluetooth Low Energy (BLE) interface with an Android device, which is assumed to always be carried by any road user. Based on this, we use the Android device as a gateway node, acting as a real-time asynchronous publisher in a Message Queue Telemetry Transport (MQTT) protocol chain. The MQTT broker is configured on a Raspberry Pi device and collects sensor data to be sent to a web-based control panel that performs data monitoring and processing. All the architecture modules have been implemented with open-source technologies. The analysis of the BLE packet exchange has been carried out with the Wireshark packet analyzer. In addition, a feasibility analysis has been carried out, showing the capability of the proposed solution to display the values gathered by the sensors on a remote dashboard. The developed system is publicly available to allow the integration of other modules for additional Smart City services or extension to further ICT applications. Full article
(This article belongs to the Special Issue Mobility and Cyber-Physical Intelligence)
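The gateway's publishing step reduces to choosing a topic and serializing a sensor reading. A minimal sketch follows; the topic scheme and field names are assumptions, not taken from the paper:

```python
import json
import time

def make_mqtt_message(user_id, sensor, value, topic_root="smartcity/vru"):
    """Build the topic and JSON payload the Android gateway would publish
    to the broker (topic scheme and payload fields are illustrative)."""
    topic = f"{topic_root}/{user_id}/{sensor}"
    payload = json.dumps({"sensor": sensor, "value": value,
                          "ts": round(time.time(), 3)})
    return topic, payload

# With a real broker, the pair would be handed to an MQTT client,
# e.g. paho-mqtt:  client.publish(topic, payload, qos=1)
```

Keeping one sub-topic per user and sensor lets the control panel subscribe selectively (e.g. `smartcity/vru/+/heart_rate`) instead of filtering every message.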

Article
Improved Eagle Strategy Algorithm for Dynamic Web Service Composition in the IoT: A Conceptual Approach
Future Internet 2022, 14(2), 56; https://doi.org/10.3390/fi14020056 - 15 Feb 2022
Viewed by 727
Abstract
The Internet of Things (IoT) is now expanding and becoming more popular in most industries, which leads to vast growth in cloud computing. The architecture of the IoT is integrated with cloud computing through web services. Recently, Dynamic Web Service Composition (DWSC) has been implemented to fulfill IoT and business processes. In recent years, the number of cloud services has multiplied, resulting in providers offering similar services with similar functionality that vary in Quality of Service (QoS), for instance, in the response time of web services; however, existing methods are insufficient for solving large-scale repository issues. Bio-inspired algorithms have shown better performance than deterministic algorithms, which are restricted, in solving large-scale service composition problems. Thus, an improved eagle strategy algorithm is proposed to increase performance, which directly translates into improved computation time, in large-scale DWSC on a cloud-based platform, over both functional and non-functional attributes of services. By means of the improved bio-inspired method, the computation time can be improved, especially for a large-scale IoT repository. Full article
(This article belongs to the Section Internet of Things)
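The eagle strategy alternates a coarse global exploration with an intensive local search. A 1-D toy sketch follows, in which uniform random sampling stands in for the Lévy flights of the full method and the local stage is a simple shrinking-step search:

```python
import random

def eagle_strategy(f, bounds, n_global=50, n_local=100, seed=0):
    """Two-stage Eagle Strategy sketch for minimising f on an interval:
    Stage 1 scans the search space coarsely; Stage 2 refines the best
    candidate with a step that shrinks whenever no neighbour improves."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Stage 1: global exploration (stand-in for Levy flights)
    best = min((rng.uniform(lo, hi) for _ in range(n_global)), key=f)
    # Stage 2: intensive local search with a shrinking step
    step = (hi - lo) / 10
    for _ in range(n_local):
        cand = min(best - step, best + step, key=f)
        if f(cand) < f(best):
            best = cand        # move towards the better neighbour
        else:
            step *= 0.5        # no improvement: tighten the search
    return best
```

In the DWSC setting, `f` would score a candidate composition on QoS attributes such as response time rather than a 1-D objective.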

Review
Securing IoT Devices against Differential-Linear (DL) Attack Used on Serpent Algorithm
Future Internet 2022, 14(2), 55; https://doi.org/10.3390/fi14020055 - 13 Feb 2022
Viewed by 703
Abstract
Cryptographic algorithms installed on Internet of Things (IoT) devices suffer many attacks. Some of these attacks include the differential-linear (DL) attack. The DL attack depends on the computation of the probability of differential-linear characteristics, which yields a Differential-Linear Connectivity Table (DLCT). The DLCT is a probability table that provides an attacker with many possibilities for guessing the cryptographic keys of an algorithm such as Serpent. In essence, the attacker first constructs a DLCT from building blocks such as the Substitution Boxes (S-Boxes) found in many algorithms’ architectures. This study focuses on securing IoT devices against DL attacks on the Serpent algorithm by using three magic numbers mapped onto a newly developed mathematical function called the Blocker, which is added to Serpent’s infrastructure before it is installed on IoT devices. New S-Boxes with 32-bit output were generated to replace Serpent’s original S-Boxes with 4-bit output and were inserted into Serpent’s architecture. This novel approach of using magic numbers and the Blocker function worked successfully in this study. The results demonstrated that an algorithm whose S-Boxes produce 4-bit outputs is more vulnerable to attack than one whose S-Boxes produce 32-bit outputs. The novel approach of using a Blocker, built from three magic numbers, together with 32-bit-output S-Boxes successfully blocked the construction of the DLCT and DL attacks, securing the Serpent algorithm installed on IoT devices against them. Full article
(This article belongs to the Special Issue Security for Connected Embedded Devices)
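The DLCT the attack relies on can be computed exhaustively for a 4-bit S-Box. The sketch below uses the PRESENT cipher's S-Box purely as a concrete stand-in (Serpent actually uses eight different 4-bit S-Boxes):

```python
# Toy 4-bit S-Box (PRESENT's S-Box, used here only as an example).
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def parity(x):
    return bin(x).count("1") & 1

def dlct(sbox, n=4):
    """Differential-Linear Connectivity Table: entry [a][b] counts how
    often output mask b sees no parity change under input difference a,
    minus the balanced value 2^(n-1).  Large-magnitude entries are the
    biases a DL attacker exploits."""
    size = 1 << n
    T = [[0] * size for _ in range(size)]
    for a in range(size):
        for b in range(size):
            hits = sum(parity(b & (sbox[x] ^ sbox[x ^ a])) == 0
                       for x in range(size))
            T[a][b] = hits - size // 2
    return T
```

For an n-bit S-Box the table has 2^n x 2^n entries, which is why the exhaustive construction is trivial at 4 bits but becomes far harder as the S-Box output width grows, consistent with the paper's 32-bit-output defence.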

Article
Anomalous Vehicle Recognition in Smart Urban Traffic Monitoring as an Edge Service
Future Internet 2022, 14(2), 54; https://doi.org/10.3390/fi14020054 - 10 Feb 2022
Viewed by 728
Abstract
The past decades witnessed unprecedented urbanization and the proliferation of modern information and communication technologies (ICT), which make the concept of the Smart City feasible. Among various intelligent components, smart urban transportation monitoring is an essential part of smoothly operating smart cities. Despite the fast development of Smart Cities and the growth of the Internet of Things (IoT), real-time anomalous behavior detection in Intelligent Transportation Systems (ITS) is still challenging. Because of multiple advanced features, including flexibility, safety, and ease of manipulation, quadcopter drones have been widely adopted in many areas, from service improvement to urban surveillance and data collection for scientific research. In this paper, a Smart Urban traffic Monitoring (SurMon) scheme is proposed employing drones following an edge computing paradigm. A dynamic video stream processing scheme is proposed to meet the requirements of real-time information processing and decision-making at the edge. Specifically, we propose to identify anomalous vehicle behaviors in real time by creatively applying the multidimensional Singular Spectrum Analysis (mSSA) technique in space to detect different vehicle behaviors on roads. Multiple features of vehicle behaviors are fed into channels of the mSSA procedure. Instead of trying to create and define a database of normal vehicle activity patterns on the road, anomaly detection is reformulated as an outlier identification problem. Then, a cascaded Capsule Network is designed to predict whether the behavior is a violation. An extensive experimental study has been conducted, and the results validate the feasibility and effectiveness of the SurMon scheme. Full article
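The SSA-based outlier idea can be sketched for a single channel: embed the series in a trajectory matrix, treat the dominant singular subspace as "normal" behavior, and score by projection residual. This is a single-channel simplification; the multichannel (mSSA) version stacks one such matrix per behavioral feature:

```python
import numpy as np

def ssa_anomaly_scores(series, window=10, rank=2):
    """SSA-style anomaly scores: embed the series into a Hankel
    trajectory matrix, keep the top `rank` singular components as
    'normal' behaviour, and score each lagged vector by its distance
    from that subspace."""
    x = np.asarray(series, float)
    K = len(x) - window + 1
    H = np.column_stack([x[i:i + window] for i in range(K)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    P = U[:, :rank]                       # normal-behaviour subspace
    resid = H - P @ (P.T @ H)             # projection residual
    return np.linalg.norm(resid, axis=0)  # one score per lagged vector

# A smooth trajectory with one abnormal jolt at t = 60
series = np.sin(0.3 * np.arange(100))
series[60] += 1.0
scores = ssa_anomaly_scores(series)
```

A pure sinusoid embeds exactly in a rank-2 trajectory subspace, so the residual concentrates on the windows containing the jolt, which is the outlier-identification reformulation the abstract describes.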

Editorial
Acknowledgment to Reviewers of Future Internet in 2021
Future Internet 2022, 14(2), 53; https://doi.org/10.3390/fi14020053 - 10 Feb 2022
Viewed by 666
Abstract
Rigorous peer-reviews are the basis of high-quality academic publishing [...] Full article
Article
A Strategy-Based Formal Approach for Fog Systems Analysis
Future Internet 2022, 14(2), 52; https://doi.org/10.3390/fi14020052 - 09 Feb 2022
Viewed by 645
Abstract
Fog systems are an emergent technology with a wide range of architectures and pronounced needs that make their design complex. Consequently, the design of fog systems is crucial, with service portability and interoperability between the various elements of a system being the most essential aspects of fog computing. This article presents a fog system cross-layer architecture as a first step of such a design to provide a graphical and conceptual description. Then, a BiAgents* (Bigraphical Agents) formal model is defined to provide a rigorous description of the physical, virtual, and behavioural aspects of fog systems. This formalisation is then implemented and executed under the Maude strategy system. The proposed approach is illustrated through a case study, an airport terminal Luggage Inspection System (LIS), while checking the correctness of its relevant properties: the portability of data and their interoperability. The integration of Maude strategies in the rewriting of fog system states makes it possible to guide the execution of the model and its analysis. Full article
(This article belongs to the Section Network Virtualization and Edge/Fog Computing)
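The idea of strategy-guided rewriting that this abstract relies on can be illustrated with a toy rewrite engine: rules transform a system state, and a strategy (an ordered list of rule names) decides which applicable rule fires first. This is only a sketch of the concept; Maude's strategy language is far richer, and the luggage-inspection states below are hypothetical.

```python
def rewrite(state, rules, strategy, max_steps=50):
    """Repeatedly apply the first applicable rule, in the order given
    by `strategy`, until no rule applies (a normal form is reached)."""
    for _ in range(max_steps):
        for name in strategy:
            guard, action = rules[name]
            if guard(state):
                state = action(state)
                break  # a rule fired; restart from the strategy's head
        else:
            return state  # no rule applicable
    return state

# Hypothetical luggage-inspection states: bags flow scan -> inspect -> cleared.
rules = {
    "scan":    (lambda s: s["scan"] > 0,
                lambda s: {**s, "scan": s["scan"] - 1, "inspect": s["inspect"] + 1}),
    "inspect": (lambda s: s["inspect"] > 0,
                lambda s: {**s, "inspect": s["inspect"] - 1, "cleared": s["cleared"] + 1}),
}

start = {"scan": 2, "inspect": 0, "cleared": 0}
# The strategy prioritises clearing inspected bags over scanning new ones.
print(rewrite(start, rules, strategy=["inspect", "scan"]))
```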
Article
Indoor Localization System Using Fingerprinting and Novelty Detection for Evaluation of Confidence
Future Internet 2022, 14(2), 51; https://doi.org/10.3390/fi14020051 - 07 Feb 2022
Viewed by 713
Abstract
Indoor localization systems are used to locate mobile devices inside buildings where traditional solutions, such as the Global Navigation Satellite Systems (GNSS), do not work well due to the lack of direct visibility to the satellites. Fingerprinting is one of the best-known [...] Read more.
Indoor localization systems are used to locate mobile devices inside buildings where traditional solutions, such as the Global Navigation Satellite Systems (GNSS), do not work well due to the lack of direct visibility to the satellites. Fingerprinting is one of the best-known solutions for indoor localization. It is based on the Received Signal Strength (RSS) of packets transmitted among mobile devices and anchor nodes. However, RSS values are known to be unstable and noisy due to obstacles and the dynamicity of the scenarios, causing inaccuracies in the position estimates. This instability and noise often cause the system to report a location it cannot be sure is correct, even though it is the most likely one given the calculations; as a result, algorithms may return localizations with a low confidence level. If more reliable results could be selected, the overall quality would improve. Thus, our solution adds a phase that checks the confidence level of the localization result, using the prediction probability provided by KNN together with novelty detection to discard classifications that are unreliable and often wrong. In this work, we propose LocFiND (Localization using Fingerprinting and Novelty Detection), a fingerprint-based solution that uses prediction probability and novelty detection to evaluate the confidence of the estimated positions and mitigate RSS-caused inaccuracies in the localization phase. We implemented our solution in a real-world, large-scale school area using Bluetooth-based devices. Our performance evaluation shows considerable improvement in localization accuracy and stability while discarding only a few low-confidence estimations. Full article
(This article belongs to the Special Issue Wireless Technology for Indoor Localization System)
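The confidence check described in this abstract can be sketched as a KNN vote over an RSS fingerprint database: when the winning zone's vote share falls below a threshold, the estimate is discarded rather than reported. The database, zones, and thresholds below are invented for illustration; LocFiND additionally applies novelty detection, which this sketch omits.

```python
from collections import Counter
import math

def knn_locate(fingerprints, rss, k=3, min_confidence=0.8):
    """Classify an RSS vector against (rss_vector, zone) fingerprints.
    Returns (zone, confidence), or (None, confidence) when the vote
    share of the winning zone is below `min_confidence`."""
    neighbours = sorted(fingerprints,
                        key=lambda fp: math.dist(fp[0], rss))[:k]
    votes = Counter(zone for _, zone in neighbours)
    zone, count = votes.most_common(1)[0]
    confidence = count / k
    return (zone if confidence >= min_confidence else None), confidence

# Two-anchor RSS fingerprints (dBm) for two illustrative zones.
db = [((-40, -70), "A"), ((-42, -68), "A"), ((-41, -71), "A"),
      ((-70, -40), "B"), ((-68, -42), "B"), ((-71, -41), "B")]

print(knn_locate(db, (-41, -69)))  # clearly zone A, full agreement
print(knn_locate(db, (-55, -55)))  # ambiguous reading: discarded
```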
Article
JoSDW: Combating Noisy Labels by Dynamic Weight
Future Internet 2022, 14(2), 50; https://doi.org/10.3390/fi14020050 - 02 Feb 2022
Viewed by 712
Abstract
The real world is full of noisy labels that lead neural networks to perform poorly because deep neural networks (DNNs) are prone to overfitting label noise. Noisy-label training is a challenging problem relating to weakly supervised learning. The most advanced existing methods [...] Read more.
The real world is full of noisy labels that lead neural networks to perform poorly because deep neural networks (DNNs) are prone to overfitting label noise. Noisy-label training is a challenging problem relating to weakly supervised learning. The most advanced existing methods mainly adopt a small-loss sample selection strategy, such as selecting the small-loss portion of the samples for network model training. However, previous work stops there, neglecting how the small-loss sample selection strategy performs throughout DNN training, how it performs at different stages, and how the collaborative learning of the two networks evolves from disagreement to agreement, with a second classification made on that basis. We train the network using a comparative learning method. Specifically, a small-loss sample selection strategy with dynamic weights is designed. This strategy increases the proportion of agreement based on network predictions, gradually reducing the weight of complex samples while increasing the weight of clean samples. Extensive experiments verify the superiority of our method. Full article
(This article belongs to the Special Issue Big Data Analytics, Privacy and Visualization)
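The small-loss selection idea in this abstract can be sketched as follows: keep the fraction of lowest-loss samples for training, with that fraction shrinking over epochs as the networks warm up. This is a common schedule from the noisy-label literature, not the paper's exact dynamic per-sample weighting; the noise rate and schedule are illustrative.

```python
def select_small_loss(losses, epoch, total_epochs, noise_rate=0.2):
    """Return the indices of the samples kept for training this epoch:
    the smallest-loss samples, with the kept fraction decaying from
    1.0 towards (1 - noise_rate) as training proceeds."""
    keep_frac = 1.0 - noise_rate * min(1.0, epoch / total_epochs)
    k = max(1, int(len(losses) * keep_frac))
    ranked = sorted(range(len(losses)), key=lambda i: losses[i])
    return sorted(ranked[:k])

# Samples 1 and 4 have conspicuously large losses (likely noisy labels).
losses = [0.1, 2.5, 0.2, 0.15, 3.0, 0.05]
print(select_small_loss(losses, epoch=0, total_epochs=10))   # all kept early
print(select_small_loss(losses, epoch=10, total_epochs=10))  # noisy ones dropped
```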
Review
Towards Crowdsourcing Internet of Things (Crowd-IoT): Architectures, Security and Applications
Future Internet 2022, 14(2), 49; https://doi.org/10.3390/fi14020049 - 31 Jan 2022
Viewed by 1107
Abstract
Crowdsourcing can play an important role in Internet of Things (IoT) applications for information sensing and gathering, where the participants are equipped with geolocated devices. Mobile crowdsourcing can be seen as a new paradigm contributing to the development of the IoT. The two [...] Read more.
Crowdsourcing can play an important role in Internet of Things (IoT) applications for information sensing and gathering, where the participants are equipped with geolocated devices. Mobile crowdsourcing can be seen as a new paradigm contributing to the development of the IoT. The two can be merged to form a new and essential platform, the crowdsourcing IoT paradigm, for data collection from different sources and communication mediums. This paper presents a comprehensive survey of this new crowdsourcing IoT paradigm from four perspectives: (1) Architectures for Crowd-IoT; (2) Trust, Privacy, and Security for Crowd-IoT; (3) Resources, Sharing, Storage, and Energy Considerations for Crowd-IoT; and (4) Applications for Crowd-IoT. This survey aims to increase awareness and encourage continuing developments and innovations from the research community and industry towards the crowdsourcing IoT paradigm. Full article
Article
Controlling the Trade-Off between Resource Efficiency and User Satisfaction in NDNs Based on Naïve Bayes Data Classification and Lagrange Method
Future Internet 2022, 14(2), 48; https://doi.org/10.3390/fi14020048 - 31 Jan 2022
Cited by 1 | Viewed by 989
Abstract
This paper addresses the fundamental problem of the trade-off between resource efficiency and user satisfaction in the limited environments of Named Data Networks (NDNs). The proposed strategy is named RADC (Resource Allocation based Data Classification), which aims at managing such trade-off by controlling [...] Read more.
This paper addresses the fundamental problem of the trade-off between resource efficiency and user satisfaction in the limited environments of Named Data Networks (NDNs). The proposed strategy is named RADC (Resource Allocation based Data Classification), which aims at managing such trade-off by controlling the system’s fairness index. To this end, a machine learning technique based on Multinomial Naïve Bayes is used to classify the received contents. Then, an adaptive resource allocation strategy based on the Lagrange utility function is proposed. To cache the received content, an adequate content placement and a replacement mechanism are enforced. Simulation at the system level shows that this strategy could be a powerful tool for administrators to manage the trade-off between efficiency and user satisfaction. Full article
(This article belongs to the Special Issue Recent Advances in Information-Centric Networks (ICNs))
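The fairness index that RADC controls can be made concrete with Jain's fairness index, a standard measure of how evenly a resource is allocated across users. Whether RADC uses exactly this formulation is not stated in the abstract, so treat this as an illustrative choice.

```python
def jain_fairness(allocations):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2).
    Equals 1.0 for a perfectly even allocation and tends towards
    1/n as a single user dominates all the resources."""
    n = len(allocations)
    total = sum(allocations)
    return total * total / (n * sum(x * x for x in allocations))

print(jain_fairness([10, 10, 10, 10]))      # 1.0: perfectly fair
print(jain_fairness([40, 0.1, 0.1, 0.1]))   # near 1/4: one user dominates
```

An administrator tuning the efficiency/satisfaction trade-off can watch this scalar: pushing throughput to the heaviest consumers raises efficiency but drives the index down.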
Review
Research on Progress of Blockchain Consensus Algorithm: A Review on Recent Progress of Blockchain Consensus Algorithms
Future Internet 2022, 14(2), 47; https://doi.org/10.3390/fi14020047 - 30 Jan 2022
Cited by 2 | Viewed by 1245
Abstract
Blockchain technology can solve the problem of trust in the open network in a decentralized way. It has broad application prospects and has attracted extensive attention from academia and industry. The blockchain consensus algorithm ensures that the nodes in the chain reach consensus [...] Read more.
Blockchain technology can solve the problem of trust in the open network in a decentralized way. It has broad application prospects and has attracted extensive attention from academia and industry. The blockchain consensus algorithm ensures that the nodes in the chain reach consensus in a complex network environment, so that node state ultimately remains consistent. The consensus algorithm is one of the core technologies of blockchain and plays a pivotal role in blockchain research. This article introduces the basic concepts of the blockchain, summarizes its key technologies with a particular focus on consensus algorithms, expounds the general principles of the consensus process, and classifies the mainstream consensus algorithms. Then, focusing on improving consensus algorithm performance, it reviews the research progress of consensus algorithms in detail, analyzes and compares the characteristics, suitable scenarios, and possible shortcomings of different consensus algorithms, and, on this basis, discusses future development trends of consensus algorithms for reference. Full article
(This article belongs to the Special Issue Distributed Systems for Emerging Computing: Platform and Application)
Article
The Framework of Cross-Domain and Model Adversarial Attack against Deepfake
Future Internet 2022, 14(2), 46; https://doi.org/10.3390/fi14020046 - 29 Jan 2022
Viewed by 956
Abstract
To protect images from the tampering of deepfake, adversarial examples can be made to replace the original images by distorting the output of the deepfake model and disrupting its work. Current studies lack generalizability in that they simply focus on the adversarial examples [...] Read more.
To protect images from the tampering of deepfake, adversarial examples can be made to replace the original images by distorting the output of the deepfake model and disrupting its work. Current studies lack generalizability in that they simply focus on the adversarial examples generated by a model in a domain. To improve the generalization of adversarial examples and produce better attack effects on each domain of multiple deepfake models, this paper proposes a framework of Cross-Domain and Model Adversarial Attack (CDMAA). Firstly, CDMAA uniformly weights the loss function of each domain and calculates the cross-domain gradient. Then, inspired by the multiple gradient descent algorithm (MGDA), CDMAA integrates the cross-domain gradients of each model to obtain the cross-domain perturbation vector, which is used to optimize the adversarial example. Finally, we propose a penalty-based gradient regularization method to pre-process the cross-domain gradients to improve the success rate of attacks. CDMAA experiments on four mainstream deepfake models showed that the adversarial examples generated from CDMAA have the generalizability of attacking multiple models and multiple domains simultaneously. Ablation experiments were conducted to compare the CDMAA components with the methods used in existing studies and verify the superiority of CDMAA. Full article
(This article belongs to the Special Issue Machine Learning Integration with Cyber Security)
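CDMAA's first step, uniformly weighting the loss gradients of each domain into one cross-domain direction, is simple enough to sketch directly. The MGDA-based per-model combination and the penalty-based gradient regularization from the later steps are not reproduced here, and the domain names are invented examples.

```python
def cross_domain_gradient(domain_grads):
    """Average per-domain gradient vectors (lists of equal length)
    into a single cross-domain update direction."""
    n_domains = len(domain_grads)
    dim = len(domain_grads[0])
    return [sum(g[i] for g in domain_grads) / n_domains for i in range(dim)]

grads = [[1.0, 0.0, 2.0],   # e.g. a 'hair colour' editing domain
         [0.0, 2.0, 2.0],   # e.g. an 'age' editing domain
         [2.0, 1.0, 2.0]]   # e.g. an 'expression' editing domain
print(cross_domain_gradient(grads))  # [1.0, 1.0, 2.0]
```

A perturbation optimized along this averaged direction distorts every domain's output at once, which is what gives the adversarial example its cross-domain generality.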
Article
Topology Inference and Link Parameter Estimation Based on End-to-End Measurements
Future Internet 2022, 14(2), 45; https://doi.org/10.3390/fi14020045 - 28 Jan 2022
Viewed by 782
Abstract
This paper focuses on the design, implementation, experimental validation, and evaluation of a network tomography approach for performing inferential monitoring based on indirect measurements. In particular, we address the problems of inferring the routing tree topology (both logical and physical) and estimating the [...] Read more.
This paper focuses on the design, implementation, experimental validation, and evaluation of a network tomography approach for performing inferential monitoring based on indirect measurements. In particular, we address the problems of inferring the routing tree topology (both logical and physical) and estimating the links’ loss rate and jitter based on multicast end-to-end measurements from a source node to a set of destination nodes using an agglomerative clustering algorithm. The experimentally-driven evaluation of the proposed algorithm, particularly the impact of the employed reduction update scheme, takes place in real topologies constructed in an open large-scale testbed. Finally, we implement and present a motivating practical application of the proposed algorithm that combines monitoring with change point analysis to realize performance anomaly detection. Full article
(This article belongs to the Special Issue Modern Trends in Multi-Agent Systems)
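The agglomerative core of the algorithm can be sketched in its naive form: destinations whose end-to-end measurements are close (e.g. correlated loss) are merged bottom-up until no pair is close enough, and the merge history induces the routing tree. The paper accelerates exactly this loop with NN chains and reciprocal nearest neighbours; the 1-D values and threshold below are illustrative.

```python
def agglomerate(points, threshold):
    """Bottom-up clustering on 1-D values: repeatedly merge the
    closest pair of clusters (single linkage) until the closest
    remaining pair is farther apart than `threshold`."""
    clusters = [[p] for p in points]
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        if d > threshold:
            break  # no sufficiently close pair left
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return [sorted(c) for c in clusters]

# Destinations whose loss rates nearly coincide likely share a link.
print(agglomerate([0.01, 0.012, 0.3, 0.31, 0.9], threshold=0.05))
```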
Review
Intelligent Traffic Management in Next-Generation Networks
Future Internet 2022, 14(2), 44; https://doi.org/10.3390/fi14020044 - 28 Jan 2022
Viewed by 922
Abstract
The recent development of smart devices has led to an explosion in data generation and heterogeneity. Hence, current networks should evolve to become more intelligent, efficient, and most importantly, scalable in order to deal with the evolution of network traffic. In recent years, [...] Read more.
The recent development of smart devices has led to an explosion in data generation and heterogeneity. Hence, current networks should evolve to become more intelligent, efficient, and most importantly, scalable in order to deal with the evolution of network traffic. In recent years, network softwarization has drawn significant attention from both industry and academia, as it is essential for the flexible control of networks. At the same time, machine learning (ML) and especially deep learning (DL) methods have also been deployed to solve complex problems without explicit programming. These methods can model and learn network traffic behavior using training data/environments. The research community has advocated the application of ML/DL in softwarized environments for network traffic management, including traffic classification, prediction, and anomaly detection. In this paper, we survey the state of the art on these topics. We start by presenting a comprehensive background, from conventional ML algorithms to DL, followed by a focus on different dimensionality reduction techniques. Afterward, we present studies of ML/DL applications in softwarized environments. Finally, we highlight the issues and challenges that should be considered. Full article
(This article belongs to the Special Issue AI-Empowered Future Networks)
Article
DA-GAN: Dual Attention Generative Adversarial Network for Cross-Modal Retrieval
Future Internet 2022, 14(2), 43; https://doi.org/10.3390/fi14020043 - 27 Jan 2022
Viewed by 860
Abstract
Cross-modal retrieval aims to search samples of one modality via queries of other modalities, which is a hot issue in the community of multimedia. However, two main challenges, i.e., heterogeneity gap and semantic interaction across different modalities, have not been solved efficaciously. Reducing [...] Read more.
Cross-modal retrieval aims to search samples of one modality via queries of other modalities, which is a hot issue in the community of multimedia. However, two main challenges, i.e., heterogeneity gap and semantic interaction across different modalities, have not been solved efficaciously. Reducing the heterogeneity gap can improve the cross-modal similarity measurement. Meanwhile, modeling cross-modal semantic interaction can capture the semantic correlations more accurately. To this end, this paper presents a novel end-to-end framework, called Dual Attention Generative Adversarial Network (DA-GAN). This technique is an adversarial semantic representation model with a dual attention mechanism, i.e., intra-modal attention and inter-modal attention. Intra-modal attention is used to focus on the important semantic features within a modality, while inter-modal attention explores the semantic interaction between different modalities and then represents the high-level semantic correlation more precisely. A dual adversarial learning strategy is designed to generate modality-invariant representations, which can reduce the cross-modal heterogeneity efficiently. Experiments on three commonly used benchmarks show that DA-GAN outperforms its competitors. Full article
(This article belongs to the Special Issue Advances Techniques in Computer Vision and Multimedia)
Article
Multi-Attribute Decision Making for Energy-Efficient Public Transport Network Selection in Smart Cities
Future Internet 2022, 14(2), 42; https://doi.org/10.3390/fi14020042 - 26 Jan 2022
Viewed by 873
Abstract
Smart cities use many smart devices to facilitate the well-being of society by different means. However, these smart devices create great challenges, such as energy consumption and carbon emissions. The proposed research lies in communication technologies to deal with big data-driven applications. Aiming [...] Read more.
Smart cities use many smart devices to facilitate the well-being of society by different means. However, these smart devices create great challenges, such as energy consumption and carbon emissions. The proposed research lies in communication technologies to deal with big data-driven applications. Aiming at multiple sources of big data in a smart city, we propose a public transport-assisted data-dissemination system to utilize public transport as another communication medium, along with other networks, with the help of software-defined technology. Our main objective is to minimize energy consumption with the maximum delivery of data. A multi-attribute decision-making strategy is adopted for the selection of the best network among wired, wireless, and public transport networks, based upon users’ requirements and different services. Once public transport is selected as the best network, the Capacitated Vehicle Routing Problem (CVRP) will be implemented to offload data onto buses as per the maximum capacity of buses. For validation, the case of Auckland Transport is used to offload data onto buses for energy-efficient delay-tolerant data transmission. Experimental results show that buses can be utilized efficiently to deliver data as per their demands and consume 33% less energy in comparison to other networks. Full article
(This article belongs to the Special Issue Software Engineering and Data Science)
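The multi-attribute network selection step can be sketched with simple additive weighting: score each candidate network as a weighted sum of normalized benefit criteria and pick the top scorer. The paper's actual criteria, weights, and MADM method may differ; every name and number below is illustrative.

```python
def select_network(networks, weights):
    """Simple additive weighting: each network's score is the weighted
    sum of its normalised (benefit) attributes; the best scorer wins."""
    def score(attrs):
        return sum(weights[k] * v for k, v in attrs.items())
    return max(networks, key=lambda n: score(networks[n]))

# Attributes normalised to [0, 1], higher is better
# (so 'energy' here means energy efficiency, not consumption).
networks = {
    "wired":     {"bandwidth": 0.9, "energy": 0.4, "delay_tolerance": 0.2},
    "wireless":  {"bandwidth": 0.7, "energy": 0.5, "delay_tolerance": 0.3},
    "bus_fleet": {"bandwidth": 0.6, "energy": 0.9, "delay_tolerance": 0.9},
}
# A delay-tolerant bulk transfer weights efficiency and tolerance heavily.
weights = {"bandwidth": 0.2, "energy": 0.4, "delay_tolerance": 0.4}
print(select_network(networks, weights))  # bus_fleet
```

Once the bus network wins the selection for a given service, the CVRP step decides how the data is actually packed onto individual buses.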
Article
A Hybrid Robust-Learning Architecture for Medical Image Segmentation with Noisy Labels
Future Internet 2022, 14(2), 41; https://doi.org/10.3390/fi14020041 - 26 Jan 2022
Viewed by 782
Abstract
Deep-learning models require large amounts of accurately labeled data. However, for medical image segmentation, high-quality labels rely on expert experience, and less-experienced operators provide noisy labels. How one might mitigate the negative effects caused by noisy labels for 3D medical image segmentation has [...] Read more.
Deep-learning models require large amounts of accurately labeled data. However, for medical image segmentation, high-quality labels rely on expert experience, and less-experienced operators provide noisy labels. How one might mitigate the negative effects caused by noisy labels for 3D medical image segmentation has not been fully investigated. In this paper, our purpose is to propose a novel hybrid robust-learning architecture to combat noisy labels for 3D medical image segmentation. Our method consists of three components. First, we focus on the noisy annotations of slices and propose a slice-level label-quality awareness method, which automatically generates label-quality scores for slices in a set. Second, we propose a shape-awareness regularization loss based on distance transform maps to introduce prior shape information and provide extra performance gains. Third, based on a re-weighting strategy, we propose an end-to-end hybrid robust-learning architecture to weaken the negative effects caused by noisy labels. Extensive experiments are performed on two representative datasets (i.e., liver segmentation and multi-organ segmentation). Our hybrid noise-robust architecture has shown competitive performance, compared to other methods. Ablation studies also demonstrate the effectiveness of slice-level label-quality awareness and a shape-awareness regularization loss for combating noisy labels. Full article
(This article belongs to the Topic Big Data and Artificial Intelligence)
Article
An IoT-Based COVID-19 Prevention and Control System for Enclosed Spaces
Future Internet 2022, 14(2), 40; https://doi.org/10.3390/fi14020040 - 26 Jan 2022
Cited by 1 | Viewed by 1097
Abstract
To date, the protracted pandemic caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has had widespread ramifications for the economy, politics, public health, etc. Based on the current situation, definitively stopping the spread of the virus is infeasible in many countries. [...] Read more.
To date, the protracted pandemic caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has had widespread ramifications for the economy, politics, public health, etc. Based on the current situation, definitively stopping the spread of the virus is infeasible in many countries. This does not mean that populations should ignore the pandemic; instead, normal life needs to be balanced with disease prevention and control. This paper highlights the use of Internet of Things (IoT) for the prevention and control of coronavirus disease (COVID-19) in enclosed spaces. The proposed booking algorithm is able to control the gathering of crowds in specific regions. K-nearest neighbors (KNN) is utilized for the implementation of a navigation system with a congestion control strategy and global path planning capabilities. Furthermore, a risk assessment model is designed based on a “Sliding Window-Timer” algorithm, providing an infection risk assessment for individuals in potential contact with patients. Full article
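One plausible reading of the "Sliding Window-Timer" idea is sketched below: each contact with a confirmed case inside a recent time window contributes a fixed increment to an individual's risk score, and contacts age out as the window slides. The window length, per-contact increment, and cap are illustrative assumptions, not the paper's calibrated model.

```python
import bisect

def infection_risk(contact_times, now, window=15 * 60, per_contact=0.1):
    """Risk score in [0, 1] from contacts with a confirmed case:
    each contact timestamp within the last `window` seconds adds
    `per_contact`; older contacts age out of the sliding window."""
    cutoff = now - window
    timeline = sorted(contact_times)
    recent = len(timeline) - bisect.bisect_left(timeline, cutoff)
    return min(1.0, recent * per_contact)

# Contacts at t = 0s, 100s, 1000s, evaluated at t = 1000s:
# the t = 0 contact has aged out of the 15-minute window.
print(infection_risk([0, 100, 1000], now=1000))  # 0.2
```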
Article
Coarse-to-Fine Entity Alignment for Chinese Heterogeneous Encyclopedia Knowledge Base
Future Internet 2022, 14(2), 39; https://doi.org/10.3390/fi14020039 - 25 Jan 2022
Viewed by 780
Abstract
Entity alignment (EA) aims to automatically determine whether an entity pair in different knowledge bases or knowledge graphs refers to the same entity in reality. Inspired by human cognitive mechanisms, we propose a coarse-to-fine entity alignment model (called CFEA) consisting of three stages: [...] Read more.
Entity alignment (EA) aims to automatically determine whether an entity pair in different knowledge bases or knowledge graphs refers to the same entity in reality. Inspired by human cognitive mechanisms, we propose a coarse-to-fine entity alignment model (called CFEA) consisting of three stages: coarse-grained, middle-grained, and fine-grained. In the coarse-grained stage, a pruning strategy based on the restriction of entity types is adopted to reduce the number of candidate matching entities. The goal of this stage is to filter out pairs of entities that are clearly not the same entity. In the middle-grained stage, we calculate the similarity of entity pairs through some key attribute values and matched attribute values, the goal of which is to identify the entity pairs that are obviously not the same entity or are obviously the same entity. After this step, the number of candidate entity pairs is further reduced. In the fine-grained stage, contextual information, such as abstract and description text, is considered, and topic modeling is carried out to achieve more accurate matching. The basic idea of this stage is to use more information to help judge entity pairs that are difficult to distinguish using basic information from the first two stages. The experimental results on real-world datasets verify the effectiveness of our model compared with baselines. Full article
(This article belongs to the Special Issue Knowledge Graph Mining and Its Applications)
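The first two stages of the coarse-to-fine pipeline can be sketched directly: prune candidate pairs whose entity types differ, then score surviving pairs by attribute overlap. The entities and thresholds below are invented, the Jaccard score is a stand-in for the paper's attribute similarity, and the fine-grained topic-model stage is omitted.

```python
def align(e1, e2, attr_threshold=0.5):
    """Coarse-to-fine check: prune on entity type first (coarse),
    then compare attribute values with a Jaccard score (middle)."""
    if e1["type"] != e2["type"]:           # coarse: type pruning
        return False
    a1, a2 = set(e1["attrs"]), set(e2["attrs"])
    if not a1 or not a2:
        return False
    jaccard = len(a1 & a2) / len(a1 | a2)  # middle: attribute match
    return jaccard >= attr_threshold

# Hypothetical entries from two encyclopedia knowledge bases.
beijing_a = {"type": "city", "attrs": {"China", "capital", "21M"}}
beijing_b = {"type": "city", "attrs": {"China", "capital", "Hebei?"}}
paris     = {"type": "city", "attrs": {"France", "capital", "2M"}}

print(align(beijing_a, beijing_b))  # True: same type, attributes overlap
print(align(beijing_a, paris))      # False: attributes barely overlap
```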
Article
A Single-Rate Multicast Congestion Control (SRMCC) Mechanism in Information-Centric Networking
Future Internet 2022, 14(2), 38; https://doi.org/10.3390/fi14020038 - 25 Jan 2022
Viewed by 707
Abstract
Information-centric networking (ICN) is expected to be a candidate for future internet architecture, and it supports features such as multicast that improve bandwidth utilization and transmission efficiency. However, multicast itself does not provide congestion control. When multiple multicast groups coexist, multicast traffic may [...] Read more.
Information-centric networking (ICN) is expected to be a candidate for future internet architecture, and it supports features such as multicast that improve bandwidth utilization and transmission efficiency. However, multicast itself does not provide congestion control. When multiple multicast groups coexist, multicast traffic may exhaust all network resources and cause network congestion and packet loss. Additionally, traditional IP multicast congestion control mechanisms cannot be directly applied to the ICN architecture. Therefore, it is necessary to consider an effective congestion control mechanism for ICN multicast. This paper proposes a single-rate multicast congestion control mechanism, called SRMCC. It supports router-assisted awareness of the network congestion state and congestion control message aggregation. Moreover, a fair shared rate estimation method is innovatively proposed to achieve protocol fairness. Most importantly, it adjusts the rate according to different congestion states indicated by the queue occupancy ratio. By introducing a rate selection factor, it can achieve a balance between packet loss rate and throughput. Experimental results show that our proposal outperforms other mechanisms in throughput, packet loss rate, total bandwidth utilization, and overhead, and achieves protocol fairness and better TCP friendliness. Full article
(This article belongs to the Section Network Virtualization and Edge/Fog Computing)
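Rate adjustment driven by the queue occupancy ratio can be sketched with an AIMD-style rule: additive increase while the queue is lightly loaded, hold in a comfortable middle band, multiplicative decrease under congestion. The thresholds and constants below are illustrative assumptions, not SRMCC's actual parameters or rate selection factor.

```python
def adjust_rate(rate, occupancy, alpha=0.1, beta=0.5, low=0.3, high=0.7):
    """Adjust a sending rate from the router's queue occupancy ratio
    (0.0 = empty, 1.0 = full): additive increase when lightly loaded,
    hold in the middle band, multiplicative decrease when congested."""
    if occupancy < low:
        return rate + alpha   # underused: probe for more bandwidth
    if occupancy > high:
        return rate * beta    # congested: back off sharply
    return rate               # comfortable zone: hold steady

rate = 1.0
for occ in [0.1, 0.2, 0.5, 0.9]:   # occupancy reports over time
    rate = adjust_rate(rate, occ)
print(round(rate, 2))  # 1.0 -> 1.1 -> 1.2 -> 1.2 -> 0.6
```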