Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

Review
IoT for Smart Cities: Machine Learning Approaches in Smart Healthcare—A Review
Future Internet 2021, 13(8), 218; https://doi.org/10.3390/fi13080218 - 23 Aug 2021
Cited by 142 | Viewed by 12426
Abstract
Smart city is a collective term for technologies and concepts directed toward making cities more efficient, technologically advanced, greener and more socially inclusive. These concepts include technical, economic and social innovations. The term has been used by various actors in politics, business, administration and urban planning since the 2000s to establish tech-based changes and innovations in urban areas. The idea of the smart city is used in conjunction with the utilization of digital technologies and at the same time represents a reaction to the economic, social and political challenges that post-industrial societies have confronted since the start of the new millennium. The key focus is on dealing with challenges faced by urban society, such as environmental pollution, demographic change, population growth, healthcare, the financial crisis and scarcity of resources. In a broader sense, the term also includes non-technical innovations that make urban life more sustainable. The idea of using IoT-based sensor networks for healthcare applications is a promising one, with the potential to minimize inefficiencies in the existing infrastructure. A machine learning approach is key to the successful implementation of IoT-powered wireless sensor networks for this purpose, since there is a large amount of data to be handled intelligently. This paper discusses in detail how AI-powered IoT and WSNs are applied in the healthcare sector, and will serve as a baseline study for understanding the role of the IoT in smart cities, in particular in the healthcare sector, for future research. Full article
(This article belongs to the Special Issue AI and IoT technologies in Smart Cities)

Review
Survey of Localization for Internet of Things Nodes: Approaches, Challenges and Open Issues
Future Internet 2021, 13(8), 210; https://doi.org/10.3390/fi13080210 - 16 Aug 2021
Cited by 22 | Viewed by 3921
Abstract
With exponential growth in the deployment of Internet of Things (IoT) devices, many new innovative and real-life applications are being developed. IoT supports such applications with the help of resource-constrained fixed as well as mobile nodes. These nodes can be placed in anything from vehicles to the human body to smart homes to smart factories. Mobility of the nodes enhances the network coverage and connectivity. One of the crucial requirements in IoT systems is the accurate and fast localization of its nodes with high energy efficiency and low cost. The localization process has several challenges, and these challenges change depending on the location and movement of nodes: outdoor, indoor, with or without obstacles, and so on. The performance of localization techniques greatly depends on the scenarios and conditions through which the nodes move. Precise localization of nodes is essential in many unique applications. Although several localization techniques and algorithms are available, there are still many challenges for the precise and efficient localization of the nodes. This paper classifies and discusses various state-of-the-art techniques proposed for IoT node localization in detail, covering approaches such as centralized, distributed, iterative, range-based, range-free, device-based and device-free, and their subtypes. Furthermore, the different performance metrics that can be used for localization, a comparison of the different techniques, some prominent applications in smart cities, and future directions are also covered. Full article
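Range-based approaches, for example, estimate a node's position from distance measurements to anchor nodes at known locations. A minimal sketch of one such technique, 2-D trilateration by linearizing the circle equations (the anchor coordinates and node position below are hypothetical, and real deployments must cope with ranging noise):

```python
import math

def trilaterate(anchors, dists):
    """Estimate a node's 2-D position from three anchors and measured
    distances. Subtracting the first circle equation from the other two
    yields a linear system in (x, y)."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    # A [x, y]^T = b, obtained by subtracting circle 1 from circles 2 and 3
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21   # non-zero for non-collinear anchors
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Hypothetical node at (2, 3) with exact ranges to three anchors
anchors = [(0, 0), (10, 0), (0, 10)]
node = (2.0, 3.0)
dists = [math.dist(node, a) for a in anchors]
print(trilaterate(anchors, dists))  # recovers (2.0, 3.0)
```

With noisy ranges, the same linear system is typically solved in a least-squares sense over more than three anchors.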

Article
Designing a Network Intrusion Detection System Based on Machine Learning for Software Defined Networks
Future Internet 2021, 13(5), 111; https://doi.org/10.3390/fi13050111 - 28 Apr 2021
Cited by 73 | Viewed by 5671
Abstract
Software-defined networking (SDN) has recently been developed and put forward as a promising and encouraging solution for future internet architecture. With SDN, the centrally managed and controlled network becomes more flexible and visible. On the other hand, these advantages bring a more vulnerable environment and dangerous threats, causing network breakdowns, system paralysis, online banking fraud and robberies. These issues have a significantly destructive impact on organizations, companies or even economies. Accurate, high-performance, real-time detection systems are essential to counter these threats. Extending intelligent machine learning algorithms into a network intrusion detection system (NIDS) through a software-defined network has attracted considerable attention in the last decade. Big data availability, the diversity of data analysis techniques, and the massive improvement in machine learning algorithms enable the building of an effective, reliable and dependable system for detecting the different types of attacks that frequently target networks. This study demonstrates the use of machine learning algorithms for traffic monitoring to detect malicious behavior in the network as part of an NIDS in the SDN controller. Classical and advanced tree-based machine learning techniques (Decision Tree, Random Forest and XGBoost) are chosen to demonstrate attack detection. The NSL-KDD dataset is used for training and testing the proposed methods; it is considered a benchmark dataset for several state-of-the-art approaches in NIDS. Several advanced preprocessing techniques are applied to the dataset in order to extract the best form of the data, which produces outstanding results compared to other systems. Using just five of the 41 features of NSL-KDD, a multi-class classification task is conducted by detecting whether there is an attack and classifying its type (DDoS, PROBE, R2L or U2R), achieving an accuracy of 95.95%.
Full article
(This article belongs to the Special Issue Mobile and Wireless Network Security and Privacy)
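As a toy illustration of tree-based attack detection — not the paper's pipeline, which trains Decision Tree, Random Forest and XGBoost on NSL-KDD's 41 features — the sketch below fits a single decision stump (a one-split tree) to a few hypothetical flow records:

```python
# Hypothetical flow records: (duration_s, src_bytes, connection_count);
# label 1 = attack, 0 = normal. These values are illustrative only.
X = [[0.1, 200, 2], [0.2, 300, 3], [5.0, 50, 40],
     [4.0, 10, 55], [0.3, 180, 4], [6.0, 20, 60]]
y = [0, 0, 1, 1, 0, 1]

def train_stump(X, y):
    """Exhaustively pick the (feature, threshold, polarity) combination
    that misclassifies the fewest training flows."""
    best = (len(y) + 1, 0, 0.0, 1)
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            for pol in (0, 1):  # which side of the threshold predicts "attack"
                pred = [pol if row[f] > t else 1 - pol for row in X]
                err = sum(p != yi for p, yi in zip(pred, y))
                if err < best[0]:
                    best = (err, f, t, pol)
    return best[1:]  # (feature index, threshold, polarity)

def predict(stump, row):
    f, t, pol = stump
    return pol if row[f] > t else 1 - pol

stump = train_stump(X, y)
print(predict(stump, [0.2, 250, 3]))   # benign-looking flow -> 0
print(predict(stump, [5.0, 30, 50]))   # long, bursty flow -> 1
```

A real decision tree recurses this split search on each resulting partition; Random Forest and XGBoost then combine many such trees.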

Article
Privacy Preserving Machine Learning with Homomorphic Encryption and Federated Learning
Future Internet 2021, 13(4), 94; https://doi.org/10.3390/fi13040094 - 08 Apr 2021
Cited by 56 | Viewed by 8956
Abstract
Privacy protection has been an important concern alongside the great success of machine learning. This paper proposes a multi-party privacy-preserving machine learning framework, named PFMLP, based on partially homomorphic encryption and federated learning. The core idea is that each learning party transmits only gradients encrypted under homomorphic encryption. In experiments, the model trained with PFMLP achieves almost the same accuracy as conventional training, with a deviation of less than 1%. Considering the computational overhead of homomorphic encryption, an improved Paillier algorithm is used, which speeds up training by 25–28%. Moreover, comparisons of encryption key length, learning network structure, number of learning clients, etc. are also discussed in detail in the paper. Full article
(This article belongs to the Section Cybersecurity)
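The additive homomorphism that such gradient aggregation relies on can be sketched with textbook Paillier (the paper uses an improved variant; the tiny hard-coded primes below are for illustration only and offer no security):

```python
import math
import random

# Toy Paillier keypair (insecure: real keys use primes of 1024+ bits)
p, q = 293, 433
n = p * q                      # public modulus
n2 = n * n
g = n + 1                      # standard generator choice
lam = math.lcm(p - 1, q - 1)   # private key
mu = pow(lam, -1, n)           # valid decryption helper when g = n + 1

def encrypt(m):
    r = random.randrange(1, n)                     # fresh randomness per ciphertext
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # L(x) = (x - 1) // n applied to c^lambda mod n^2, then times mu mod n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Multiplying ciphertexts adds plaintexts, so a server can aggregate
# encrypted (integer-quantized) gradients without seeing any of them.
grads = [17, 5, 23]            # hypothetical quantized gradients from three parties
agg = 1
for grad in grads:
    agg = (agg * encrypt(grad)) % n2
print(decrypt(agg))            # sum of the gradients
```

In a federated round, each party would encrypt its local gradient, the aggregator would multiply the ciphertexts, and the decrypted sum would drive the shared model update.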

Article
Characterization of the Teaching Profile within the Framework of Education 4.0
Future Internet 2021, 13(4), 91; https://doi.org/10.3390/fi13040091 - 01 Apr 2021
Cited by 31 | Viewed by 5045
Abstract
The authors of the Education 4.0 concept postulated a flexible combination of digital literacy, critical thinking, and problem-solving in educational environments linked to real-world scenarios. Teachers have therefore been challenged to develop new methods and resources to integrate into their planning in order to help students develop these desirable and necessary skills; hence the question: what are the characteristics of a teacher to consider within the framework of Education 4.0? This study was conducted in a higher education institution in Ecuador, with the aim of identifying the teaching profile required in new undergraduate programs within the framework of Education 4.0, in order to contribute to decision-making about teacher recruitment, professional training and evaluation, human talent management, and institutional policies interested in connecting competencies with the needs of society. We used descriptive and exploratory approaches, applying quantitative and qualitative instruments (surveys) to 337 undergraduate students in education programs and 313 graduates. We also conducted interviews with 20 experts in the educational field and five focus groups with 32 chancellors, school principals, university professors, and specialists in the educational area. The data were triangulated, and the results were organized into the categories of (a) processes as facilitators, (b) soft skills, (c) human sense, and (d) the use of technologies. The results outlined the profile of a professor as a specialized professional with competencies for innovation, complex problem solving, entrepreneurship, collaboration, international perspective, leadership, and connection with the needs of society. This study may be of value to administrators, educational and social entrepreneurs, trainers, and policy-makers interested in implementing innovative training programs and in supporting management and policy decisions. Full article

Article
Research on the Impacts of Generalized Preceding Vehicle Information on Traffic Flow in V2X Environment
Future Internet 2021, 13(4), 88; https://doi.org/10.3390/fi13040088 - 30 Mar 2021
Cited by 8 | Viewed by 1741
Abstract
With the application of vehicle-to-everything (V2X) technologies, drivers can obtain massive amounts of traffic information and adjust their car-following behavior accordingly. The macro-characteristics of traffic flow are essentially the overall expression of the micro-behavior of drivers. Previous research on traffic flow in the V2X environment has shortcomings that make it difficult to employ the related models or methods to explore the characteristics of traffic flow affected by the information of generalized preceding vehicles (GPV). To address this, a simulation framework based on a car-following model and cellular automata (CA) is proposed in this work, and the traffic flow affected by GPV information is simulated and analyzed using this framework. The results suggest that traffic flow affected by GPV information in the V2X environment operates with higher velocity, volume and jamming density, and can maintain the free-flow state at a much higher vehicle density. The simulation framework constructed in this work can serve as a reference for further research on the characteristics of traffic flow affected by various kinds of information in the V2X environment. Full article
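A common CA baseline for such traffic simulations is the Nagel-Schreckenberg model; a minimal single-lane sketch is below (the paper's GPV information mechanism is not modeled here, and all parameters are illustrative):

```python
import random

def nasch_step(pos, vel, road_len, v_max=5, p_slow=0.3, rng=random):
    """One parallel update of the Nagel-Schreckenberg cellular automaton
    on a circular road: accelerate, brake to the gap ahead, randomize, move."""
    order = sorted(range(len(pos)), key=lambda i: pos[i])
    new_pos, new_vel = pos[:], vel[:]
    for k, i in enumerate(order):
        ahead = order[(k + 1) % len(order)]           # next car downstream
        gap = (pos[ahead] - pos[i] - 1) % road_len    # empty cells in between
        v = min(vel[i] + 1, v_max)                    # accelerate
        v = min(v, gap)                               # brake: never pass the car ahead
        if v > 0 and rng.random() < p_slow:
            v -= 1                                    # random slowdown
        new_vel[i] = v
        new_pos[i] = (pos[i] + v) % road_len
    return new_pos, new_vel

random.seed(0)
road_len, n_cars = 100, 20
pos = sorted(random.sample(range(road_len), n_cars))
vel = [0] * n_cars
for _ in range(200):
    pos, vel = nasch_step(pos, vel, road_len)
flow = sum(vel) / road_len   # mean flux in vehicles per cell per step
print(round(flow, 3))
```

Sweeping the density (n_cars / road_len) and plotting the resulting flux reproduces the familiar fundamental diagram; a GPV-style extension would replace the single-leader gap rule with information from several preceding vehicles.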

Review
Distributed Ledger Technology Review and Decentralized Applications Development Guidelines
Future Internet 2021, 13(3), 62; https://doi.org/10.3390/fi13030062 - 27 Feb 2021
Cited by 32 | Viewed by 6354
Abstract
Distributed Ledger Technology (DLT) provides an infrastructure for developing decentralized applications with no central authority for registering, sharing, and synchronizing transactions on digital assets. In recent years, it has drawn high interest from the academic community, technology developers, and startups, mostly owing to the advent of its most popular type, blockchain technology. In this paper, we provide a comprehensive overview of DLT, analyzing the challenges, the provided solutions or alternatives, and their usage for developing decentralized applications. We define a three-tier architecture for DLT applications to systematically classify the technology solutions described in over 100 papers and startup initiatives. The Protocol and Network Tier contains solutions for digital asset registration, transactions, data structures, privacy, and business rule implementation, as well as for the creation of peer-to-peer networks, ledger replication, and consensus-based state validation. The Scalability and Interoperability Tier contains solutions that address the scalability and interoperability issues with a focus on blockchain technology, where they manifest most often, slowing down its large-scale adoption. The paper closes with a discussion of challenges and opportunities for developing decentralized applications, providing a multi-step guideline for decentralizing the design and implementation of traditional systems. Full article
(This article belongs to the Special Issue Blockchain: Applications, Challenges, and Solutions)
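At the protocol level, the core ledger data structure is a hash chain: each block's hash commits to its predecessor, so tampering with history is detectable. A minimal sketch (field names are illustrative; real DLTs add signatures, Merkle trees, and a consensus protocol):

```python
import hashlib
import json
import time

def make_block(transactions, prev_hash):
    """Build a block whose hash covers its transactions, timestamp,
    and the hash of the previous block."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify(chain):
    """Recompute every hash and check each block links to its predecessor."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Register a hypothetical asset, then transfer it in a second block
chain = [make_block([{"asset": "A1", "owner": "alice"}], "0" * 64)]
chain.append(make_block([{"asset": "A1", "owner": "bob"}], chain[0]["hash"]))
print(verify(chain))                               # True
chain[0]["transactions"][0]["owner"] = "mallory"   # tamper with history
print(verify(chain))                               # False: hash no longer matches
```

Because each block's hash is an input to the next, rewriting any historical transaction invalidates every subsequent link, which is what makes the ledger tamper-evident.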

Review
A Systematic Review of Cybersecurity Risks in Higher Education
Future Internet 2021, 13(2), 39; https://doi.org/10.3390/fi13020039 - 02 Feb 2021
Cited by 21 | Viewed by 11657
Abstract
The demands for information security in higher education will continue to increase. Serious data breaches have already occurred and are likely to happen again without proper risk management. This paper applies the Comprehensive Literature Review (CLR) Model to synthesize research within cybersecurity risk by reviewing the existing literature on known assets, threat events, threat actors, and vulnerabilities in higher education. The review included published studies from the last twelve years and aims to expand our understanding of cybersecurity’s critical risk areas. The primary finding was that empirical research on cybersecurity risks in higher education is scarce, and there are large gaps in the literature. Despite this, our analysis found a high level of agreement regarding cybersecurity issues among the reviewed sources. This paper synthesizes an overview of mission-critical assets and everyday threat events, proposes a generic threat model, and summarizes common cybersecurity vulnerabilities. It identifies nine strategic cyber risks, with descriptions of their frequencies in the compiled dataset and their consequences. The results will serve as input for security practitioners in higher education and as a starting point for security researchers in the sector, and the research contains multiple paths for future work. Full article
(This article belongs to the Special Issue Feature Papers for Future Internet—Cybersecurity Section)

Article
Using Machine Learning for Web Page Classification in Search Engine Optimization
Future Internet 2021, 13(1), 9; https://doi.org/10.3390/fi13010009 - 02 Jan 2021
Cited by 18 | Viewed by 7904
Abstract
This paper presents a novel approach that uses machine learning algorithms based on experts’ knowledge to classify web pages into three predefined classes according to the degree of content adjustment to search engine optimization (SEO) recommendations. In this study, classifiers were built and trained to classify an unknown sample (web page) into one of the three predefined classes and to identify important factors that affect the degree of page adjustment. The data in the training set were manually labeled by domain experts. The experimental results show that machine learning can be used to predict the degree of adjustment of web pages to the SEO recommendations: classifier accuracy ranges from 54.59% to 69.67%, which is higher than the baseline accuracy of classifying samples into the majority class (48.83%). The practical significance of the proposed approach lies in providing the core for building software agents and expert systems that automatically detect web pages, or parts of web pages, that need improvement to comply with the SEO guidelines and, therefore, potentially gain higher rankings from search engines. The results of this study also contribute to the field of detecting optimal values of the ranking factors that search engines use to rank web pages. The experiments in this paper suggest that important factors to take into consideration when preparing a web page are the page title, meta description, H1 tag (heading), and body text, which is aligned with the findings of previous research. Another result of this research is a new dataset of manually labeled web pages that can be used in further research. Full article
(This article belongs to the Special Issue Digital Marketing and App-based Marketing)
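A sketch of the feature-extraction step such a classifier might start from, limited to the four on-page factors the study highlights (the regex-based parsing, feature definitions, and example page below are illustrative only; the paper's actual feature set and classifiers are richer):

```python
import re

def extract_seo_features(html):
    """Pull the on-page signals the study found important: title,
    meta description, H1 heading, and body text length."""
    def first(pattern):
        m = re.search(pattern, html, re.IGNORECASE | re.DOTALL)
        return m.group(1).strip() if m else ""

    title = first(r"<title[^>]*>(.*?)</title>")
    meta = first(r'<meta\s+name="description"\s+content="(.*?)"')
    h1 = first(r"<h1[^>]*>(.*?)</h1>")
    body = re.sub(r"<[^>]+>", " ", first(r"<body[^>]*>(.*?)</body>"))
    return {
        "title_len": len(title),     # characters in <title>
        "meta_len": len(meta),       # characters in meta description
        "has_h1": bool(h1),          # page has an H1 heading
        "body_words": len(body.split()),
    }

# Hypothetical page used only to exercise the extractor
page = """<html><head><title>Buy Hiking Boots Online</title>
<meta name="description" content="Durable hiking boots with free shipping.">
</head><body><h1>Hiking Boots</h1><p>Our boots are built for long trails
and wet weather, with reviews from thousands of hikers.</p></body></html>"""

print(extract_seo_features(page))
```

Feature vectors like these, labeled by experts with the page's degree of SEO adjustment, are what a downstream classifier would then be trained on.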
