- Call Me Maybe: Using Dynamic Protocol Switching to Mitigate Denial-of-Service Attacks on VoIP Systems
- An Uncertainty-Driven Proactive Self-Healing Model for Pervasive Applications
- An Efficient Information Retrieval System Using Evolutionary Algorithms
- Protecting Chiller Systems from Cyberattack Using a Systems Thinking Approach
- Quality of Experience Experimentation Prediction Framework through Programmable Network Management
Journal Description
Network is an international, peer-reviewed, open access journal on science and technology of networks, published quarterly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 24.8 days after submission; acceptance to publication is undertaken in 4.8 days (median values for papers published in this journal in the second half of 2022).
- Recognition of Reviewers: APC discount vouchers, optional signed peer review, and reviewer names published annually in the journal.
- Network is a companion journal of Electronics.
Latest Articles
Recent Development of Emerging Indoor Wireless Networks towards 6G
Network 2023, 3(2), 269-297; https://doi.org/10.3390/network3020014 - 12 May 2023
Abstract
Sixth-generation (6G) mobile technology is currently under development, and is envisioned to fulfill the requirements of a fully connected world, providing ubiquitous wireless connectivity for diverse users and emerging applications. Transformative solutions are expected to drive the surge to accommodate a rapidly growing number of intelligent devices and services. In this regard, wireless local area networks (WLANs) have a major role to play in indoor spaces, from supporting explosive growth in high-bandwidth applications to massive sensor arrays with diverse network requirements. Sixth-generation technology is expected to have a superconvergence of networks, including WLANs, to support this growth in applications in multiple dimensions. To this end, this paper comprehensively reviews the latest developments in diverse WLAN technologies, including WiFi, visible light communication, and optical wireless communication networks, as well as their technical capabilities. This paper also discusses how well these emerging WLANs align with supporting 6G requirements. The analyses presented in the paper provide insight into the research opportunities that need to be investigated to overcome the challenges in integrating WLANs in a 6G ecosystem.
Full article
Open Access Article
Clustered Distributed Learning Exploiting Node Centrality and Residual Energy (CINE) in WSNs
Network 2023, 3(2), 253-268; https://doi.org/10.3390/network3020013 - 23 Apr 2023
Abstract
With the explosion of big data, the implementation of distributed machine learning mechanisms in wireless sensor networks (WSNs) is becoming necessary for reducing the amount of data traveling throughout the network and for identifying anomalies promptly and reliably. In WSNs, this need has to be considered along with the limited energy and processing resources available at the nodes. In this paper, we tackle the resulting complex problem by designing a multi-criteria protocol, CINE, which stands for “Clustered distributed learnIng exploiting Node centrality and residual Energy”, for distributed learning in WSNs. More specifically, considering the energy and processing capabilities of nodes, we design a scheme that assumes that nodes are partitioned into clusters and selects a central node in each cluster, called the cluster head (CH), which executes the training of the machine learning (ML) model for all the other nodes in the cluster, called cluster members (CMs). CMs are responsible for executing only the inference. Since the CH role requires the consumption of more resources, the proposed scheme rotates the CH role among all nodes in the cluster. The protocol has been simulated and tested using real environmental data sets.
Full article
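The CH selection step described in the abstract can be pictured with a small sketch. The scoring weights, field names, and cooldown-based rotation below are illustrative assumptions, not the protocol's actual criteria.

```python
# Illustrative sketch (not the authors' code): score cluster members by a
# weighted combination of centrality and residual energy, and rotate the
# cluster-head (CH) role to the highest-scoring node that has not served recently.

def select_cluster_head(nodes, w_centrality=0.5, w_energy=0.5, cooldown=2):
    """nodes: list of dicts with 'id', 'centrality' (0..1), 'energy' (0..1),
    and 'rounds_since_ch' (rounds elapsed since the node last served as CH)."""
    eligible = [n for n in nodes if n["rounds_since_ch"] >= cooldown] or nodes
    def score(n):
        return w_centrality * n["centrality"] + w_energy * n["energy"]
    return max(eligible, key=score)

cluster = [
    {"id": "A", "centrality": 0.9, "energy": 0.4, "rounds_since_ch": 1},
    {"id": "B", "centrality": 0.6, "energy": 0.9, "rounds_since_ch": 5},
    {"id": "C", "centrality": 0.7, "energy": 0.7, "rounds_since_ch": 3},
]
print(select_cluster_head(cluster)["id"])  # -> "B" with equal weights
```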

Open Access Article
Improvement of Network Flow Using Multi-Commodity Flow Problem
Network 2023, 3(2), 239-252; https://doi.org/10.3390/network3020012 - 04 Apr 2023
Abstract
In recent years, Internet traffic has increased due to the Internet's widespread use. This can be attributed to the growth of social games on smartphones and video distribution services with increasingly high image quality. In these situations, a routing mechanism is required to control congestion, but most existing routing protocols select a single optimal path. This causes the load to be concentrated on certain links, increasing the risk of congestion. In addition to the optimal path, the network has redundant paths leading to the destination node. In this study, we propose a multipath control method based on the multi-commodity flow problem. Comparing the proposed method with OSPF, which uses single-path control, and OSPF-ECMP, which uses multipath control, we confirmed that the proposed method achieves higher packet arrival rates. This is expected to reduce congestion.
Full article
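A toy example illustrates why formulating routing as a multi-commodity flow problem helps: splitting a demand over redundant paths lowers the peak link utilization that a single shortest path would concentrate. The topology, capacities, and demand below are assumptions for illustration only, not the paper's model.

```python
# Toy illustration (assumed example, not the paper's formulation): splitting one
# commodity's demand across two available paths lowers the maximum link load
# compared with sending everything along the single "optimal" path.

demand = 10.0                      # units of traffic from source to destination
paths = {"P1": ["s-a", "a-t"],     # shortest path
         "P2": ["s-b", "b-t"]}     # redundant alternative path
capacity = {"s-a": 8, "a-t": 8, "s-b": 8, "b-t": 8}

def max_utilization(split):
    """split: dict path -> fraction of demand routed on it."""
    load = {link: 0.0 for link in capacity}
    for path, frac in split.items():
        for link in paths[path]:
            load[link] += frac * demand
    return max(load[l] / capacity[l] for l in capacity)

print(max_utilization({"P1": 1.0, "P2": 0.0}))  # single path: 1.25 (overloaded)
print(max_utilization({"P1": 0.5, "P2": 0.5}))  # split over both paths: 0.625
```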

Open Access Article
SDN-Based Routing Framework for Elephant and Mice Flows Using Unsupervised Machine Learning
Network 2023, 3(1), 218-238; https://doi.org/10.3390/network3010011 - 02 Mar 2023
Abstract
Software-defined networks (SDNs) have the capabilities of controlling the efficient movement of data flows through a network to fulfill sufficient flow management and effective usage of network resources. Currently, most data center networks (DCNs) suffer from the exploitation of network resources by large flows (elephant flows) that enter the network at any time, which affects particular flows (mice flows). Therefore, it is crucial to find a solution for identifying flows and finding an appropriate routing path in order to improve the network management system. This work proposes an SDN application to find the best path based on the type of flow using network performance metrics. These metrics are used to characterize and identify flows as elephant or mice by utilizing unsupervised machine learning (ML) and a thresholding method. A routing algorithm was developed to select the path based on the type of flow. A validation test was performed by testing the proposed framework using different topologies of the DCN and comparing the performance of an SDN-Ryu controller with that of the proposed framework based on three factors: throughput, bandwidth, and data transfer rate. The results show that 70% of the time, the proposed framework has higher performance for different types of flows.
Full article
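As a rough illustration of the classification idea, per-flow byte counts can be split into two groups with a simple one-dimensional two-means and a midpoint threshold. The feature choice and the thresholding rule are assumptions and do not reproduce the paper's unsupervised pipeline.

```python
# Hedged sketch: separate "elephant" from "mice" flows by clustering per-flow
# byte counts into two groups (a simple 1-D two-means), then thresholding at the
# midpoint of the two cluster centres. The feature is an assumption, not the
# paper's exact pipeline.
import statistics

def elephant_threshold(byte_counts, iters=20):
    lo, hi = min(byte_counts), max(byte_counts)
    for _ in range(iters):
        small = [b for b in byte_counts if abs(b - lo) <= abs(b - hi)]
        large = [b for b in byte_counts if abs(b - lo) > abs(b - hi)]
        lo = statistics.mean(small) if small else lo
        hi = statistics.mean(large) if large else hi
    return (lo + hi) / 2

flows = [1_200, 900, 3_000, 2_500, 8_000_000, 12_000_000, 1_100]   # bytes per flow
thr = elephant_threshold(flows)
labels = {f: ("elephant" if f > thr else "mouse") for f in flows}
print(thr, labels[12_000_000], labels[900])
```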

Open Access Article
Machine Learning Applied to LoRaWAN Network for Improving Fingerprint Localization Accuracy in Dense Urban Areas
Network 2023, 3(1), 199-217; https://doi.org/10.3390/network3010010 - 09 Feb 2023
Abstract
In the area of low-power wireless networks, one technology that many researchers are focusing on relates to positioning methods such as fingerprinting in densely populated urban areas. This work presents an experimental study aimed at quantifying mean location estimation error in populated areas. Using a dataset provided by the University of Antwerp, a neural network was implemented with the aim of providing end-device location. In this way, we were able to measure the mean localization error in areas of high urban density. The results obtained show a deviation of less than 150 m in locating the end device. This offset can be decreased to a few meters, provided that there is a greater density of nodes per square meter. This result could enable Internet of Things (IoT) applications to use fingerprinting in place of energy-consuming alternatives.
Full article
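The reported metric, mean localization error, can be computed as the average great-circle distance between predicted and true positions. The coordinates below are made up for illustration; only the metric computation is shown, not the paper's neural network.

```python
# Illustrative metric computation (assumed coordinates): mean localization error
# as the haversine distance between predicted and ground-truth positions.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

truth = [(51.2194, 4.4025), (51.2200, 4.4100)]   # ground-truth positions (assumed)
preds = [(51.2205, 4.4030), (51.2190, 4.4085)]   # model outputs (assumed)
errors = [haversine_m(*t, *p) for t, p in zip(truth, preds)]
print(sum(errors) / len(errors))                  # mean localization error in metres
```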

Open Access Feature Paper Article
Improving Bundle Routing in a Space DTN by Approximating the Transmission Time of the Reliable LTP
Network 2023, 3(1), 180-198; https://doi.org/10.3390/network3010009 - 03 Feb 2023
Abstract
Because the operation of space networks is carefully planned, it is possible to predict future contact opportunities from link budget analysis using the anticipated positions of the nodes over time. In the standard approach to space delay-tolerant networking (DTN), such knowledge is used by contact graph routing (CGR) to decide the paths for data bundles. However, the computation assumes nearly ideal channel conditions, disregarding the impact of the convergence layer retransmissions (e.g., as implemented by the Licklider transmission protocol (LTP)). In this paper, the effect of the bundle forwarding time estimation (i.e., the link service time) on routing optimality is analyzed, and an accurate expression for lossy channels is discussed. The analysis is performed first from a general and protocol-agnostic perspective, assuming knowledge of the statistical properties and general features of the contact opportunities. Then, a practical case is studied using the standard space DTN protocol, evaluating the performance improvement of CGR under the proposed forwarding time estimation. The results of this study provide insight into the optimal routing problem for a space DTN and a suggested improvement to the current routing standard.
Full article
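A back-of-the-envelope sketch of the idea: when LTP retransmits lost segments, the expected bundle forwarding time grows with the segment loss probability and with the number of report/retransmission round trips. The formula and parameters below are illustrative assumptions, not the paper's exact expression.

```python
# Back-of-the-envelope sketch (not the paper's expression): expected time to
# deliver a bundle of N LTP segments over a link with per-segment loss
# probability p, counting report/retransmission round trips.
from math import ceil, log

def expected_forwarding_time(n_segments, seg_tx_time, rtt, p_loss, residual=0.01):
    # Expected total segment transmissions with independent losses: N / (1 - p).
    tx_time = n_segments * seg_tx_time / (1.0 - p_loss)
    # Rounds needed until the fraction of still-missing segments drops below
    # `residual`: p**k <= residual  =>  k >= log(residual) / log(p).
    rounds = 1 if p_loss == 0 else ceil(log(residual) / log(p_loss))
    return tx_time + rounds * rtt

# 1000 segments, 2 ms per segment, a 10 s round-trip time, 10% segment loss.
print(expected_forwarding_time(1000, 0.002, 10.0, 0.10))
```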

Open Access Article
A Federated Learning-Based Approach for Improving Intrusion Detection in Industrial Internet of Things Networks
Network 2023, 3(1), 158-179; https://doi.org/10.3390/network3010008 - 30 Jan 2023
Cited by 1
Abstract
The Internet of Things (IoT) is a network of electrical devices that are connected to the Internet wirelessly. This group of devices generates a large amount of data with information about users, which makes the whole system sensitive and prone to malicious attacks. The rapidly growing number of IoT-connected devices under a centralized machine learning (ML) system could threaten data privacy. The popular centralized ML-assisted approaches are difficult to apply due to their requirement for enormous amounts of data in a central entity. Owing to the growing distribution of data over numerous networks of connected devices, decentralized ML solutions are needed. In this paper, we propose a Federated Learning (FL) method for detecting unwanted intrusions to guarantee the protection of IoT networks. This method ensures privacy and security by federated training on local IoT device data. Local IoT clients share only parameter updates with a central global server, which aggregates them and distributes an improved detection algorithm. After each round of FL training, each IoT client receives an updated model from the global server and trains it on its local dataset, so IoT devices can keep their own privacy intact while optimizing the overall model. To evaluate the efficiency of the proposed method, we conducted exhaustive experiments on a new dataset named Edge-IIoTset. The performance evaluation demonstrates the reliability and effectiveness of the proposed intrusion detection model, which achieves an accuracy (92.49%) close to that of conventional centralized ML models (93.92%) using the FL method.
Full article
(This article belongs to the Special Issue Networking Technologies for Cyber-Physical Systems)
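The aggregation step described (clients send parameter updates, the server combines them and redistributes the model) is commonly realized as FedAvg-style weighted averaging. The sketch below assumes that recipe and toy numbers; it is not the authors' implementation.

```python
# Minimal FedAvg-style aggregation sketch (assumed to mirror the usual FL recipe,
# not the authors' code): the server averages client parameter vectors weighted
# by each client's local dataset size.

def federated_average(client_updates):
    """client_updates: list of (weights, n_samples), weights being a list of floats."""
    total = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    global_weights = [0.0] * dim
    for weights, n in client_updates:
        for i, w in enumerate(weights):
            global_weights[i] += w * n / total
    return global_weights

clients = [([0.2, -0.5, 1.0], 500),    # (local model weights, local sample count)
           ([0.4, -0.3, 0.8], 1500)]
print(federated_average(clients))       # -> [0.35, -0.35, 0.85]
```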
Open Access Article
Formal Algebraic Model of an Edge Data Center with a Redundant Ring Topology
Network 2023, 3(1), 142-157; https://doi.org/10.3390/network3010007 - 30 Jan 2023
Abstract
Data center organization and optimization present the opportunity to design systems with specific characteristics. In this sense, the combination of artificial intelligence methodology and sustainability may help achieve optimal topologies with enhanced features, whilst taking care of the environment by lowering carbon emissions. In this paper, a model for a field monitoring system has been proposed, where an edge data center topology in the form of a redundant ring has been designed for redundancy purposes to join together nodes spread apart. Additionally, a formal algebraic model of such a design has been presented and verified.
Full article
(This article belongs to the Special Issue Emerging Networks and Systems for Edge Computing)
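The redundancy property the ring is chosen for can be checked directly: removing any single link must still leave all nodes mutually reachable. The sketch below assumes a plain 6-node ring; it is not the paper's formal algebraic model.

```python
# Illustrative check (assumed 6-node ring, not the paper's algebraic model):
# in a ring, removing any single link still leaves every node reachable,
# which is the redundancy property the topology is chosen for.
from collections import deque

def connected(n_nodes, edges):
    adj = {v: set() for v in range(n_nodes)}
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)
    seen, queue = {0}, deque([0])
    while queue:
        v = queue.popleft()
        for w in adj[v] - seen:
            seen.add(w); queue.append(w)
    return len(seen) == n_nodes

ring = [(i, (i + 1) % 6) for i in range(6)]
print(all(connected(6, [e for e in ring if e != failed]) for failed in ring))  # True
```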
Open Access Article
IoT and Blockchain Integration: Applications, Opportunities, and Challenges
Network 2023, 3(1), 115-141; https://doi.org/10.3390/network3010006 - 24 Jan 2023
Cited by 1
Abstract
During the recent decade, two variants of evolving computing networks have augmented the Internet: (i) The Internet of Things (IoT) and (ii) Blockchain Network(s) (BCNs). The IoT is a network of heterogeneous digital devices embedded with sensors and software for various automation and monitoring purposes. A Blockchain Network is a broadcast network of computing nodes provisioned for validating digital transactions and recording the “well-formed” transactions in a unique data storage called a blockchain ledger. The power of a blockchain network is that (ideally) every node maintains its own copy of the ledger and takes part in validating the transactions. Integrating IoT and BCNs brings promising applications in many areas, including education, health, finance, agriculture, industry, and the environment. However, the complex, dynamic and heterogeneous computing and communication needs of IoT technologies, optionally integrated by blockchain technologies (if mandated), draw several challenges on scaling, interoperability, and security goals. In recent years, numerous models integrating IoT with blockchain networks have been proposed, tested, and deployed for businesses. Numerous studies are underway to uncover the applications of IoT and Blockchain technology. However, a close look reveals that very few applications successfully cater to the security needs of an enterprise. Needless to say, it makes less sense to integrate blockchain technology with an existing IoT that can serve the security need of an enterprise. In this article, we investigate several frameworks for IoT operations, the applicability of integrating them with blockchain technology, and due security considerations that the security personnel must make during the deployment and operations of IoT and BCN. Furthermore, we discuss the underlying security concerns and recommendations for blockchain-integrated IoT networks.
Full article
(This article belongs to the Special Issue Blockchain and Machine Learning for IoT: Security and Privacy Challenges)
Open Access Article
Edge Data Center Organization and Optimization by Using Cage Graphs
Network 2023, 3(1), 93-114; https://doi.org/10.3390/network3010005 - 18 Jan 2023
Abstract
Data center organization and optimization are increasingly receiving attention due to the ever-growing deployments of edge and fog computing facilities. The main aim is to achieve a topology that processes the traffic flows as fast as possible and that does not only depend on AI-based computing resources, but also on the network interconnection among physical hosts. In this paper, graph theory is introduced, due to its features related to network connectivity and stability, which leads to more resilient and sustainable deployments, where cage graphs may have an advantage over the rest. In this context, the Petersen graph cage is studied as a convenient candidate for small data centers due to its small number of nodes and small network diameter, thus providing an interesting solution for edge and fog data centers.
Full article
(This article belongs to the Special Issue Advances in Edge and Cloud Computing)
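The properties cited for the Petersen graph (10 nodes, 3-regular, small diameter) can be verified in a few lines. The construction below is the standard one, and a BFS check confirms the diameter of 2 that makes the cage attractive for small edge data centers.

```python
# Sketch (standard construction): build the Petersen graph -- 10 nodes, 3-regular --
# and verify its small diameter (2) via BFS from every node.
from collections import deque

edges  = [(i, (i + 1) % 5) for i in range(5)]             # outer 5-cycle
edges += [(5 + i, 5 + (i + 2) % 5) for i in range(5)]     # inner pentagram
edges += [(i, i + 5) for i in range(5)]                   # spokes
adj = {v: set() for v in range(10)}
for a, b in edges:
    adj[a].add(b); adj[b].add(a)

def eccentricity(src):
    dist = {src: 0}; queue = deque([src])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1; queue.append(w)
    return max(dist.values())

print(max(eccentricity(v) for v in range(10)))  # diameter -> 2
```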
Open Access Editorial
Acknowledgment to the Reviewers of Network in 2022
Network 2023, 3(1), 91-92; https://doi.org/10.3390/network3010004 - 17 Jan 2023
Abstract
High-quality academic publishing is built on rigorous peer review [...]
Full article
Open Access Review
On Attacking Future 5G Networks with Adversarial Examples: Survey
Network 2023, 3(1), 39-90; https://doi.org/10.3390/network3010003 - 30 Dec 2022
Abstract
The introduction of 5G technology, along with the exponential growth in connected devices, is expected to pose a challenge for efficient and reliable network resource allocation. Network providers are now required to dynamically create and deploy multiple services which function under various requirements in different vertical sectors while operating on top of the same physical infrastructure. The recent progress in artificial intelligence and machine learning is theorized to be a potential answer to the arising resource allocation challenges. It is therefore expected that future-generation mobile networks will heavily depend on their artificial intelligence components, which may result in those components becoming a high-value attack target. In particular, a smart adversary may exploit vulnerabilities of the state-of-the-art machine learning models deployed in a 5G system to initiate an attack. This study focuses on the analysis of adversarial example generation attacks against machine learning based frameworks that may be present in next-generation networks. First, various AI/ML algorithms and the data used for their training and evaluation in mobile networks are discussed. Next, multiple AI/ML applications found in recent scientific papers devoted to 5G are overviewed. After that, existing adversarial example generation based attack algorithms are reviewed, and frameworks which employ these algorithms for fuzzing state-of-the-art AI/ML models are summarised. Finally, adversarial example generation attacks against several of the AI/ML frameworks described are presented.
Full article
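One canonical member of the attack family the survey reviews is the fast gradient sign method (FGSM), which perturbs an input along the sign of the loss gradient. The sketch below applies it to a toy logistic model; the model and numbers are assumptions for illustration only.

```python
# Minimal FGSM sketch on a toy logistic model (assumed for illustration; the
# survey covers many such gradient-based attacks): x_adv = x + eps * sign(dL/dx).
import numpy as np

w = np.array([1.5, -2.0, 0.7])   # toy model weights (assumed)
x = np.array([0.2, 0.4, 0.1])    # benign input feature vector
y = 1.0                          # true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Cross-entropy loss gradient w.r.t. the input for logistic regression:
# dL/dx = (sigmoid(w.x) - y) * w
grad_x = (sigmoid(w @ x) - y) * w
eps = 0.1
x_adv = x + eps * np.sign(grad_x)

print(sigmoid(w @ x), sigmoid(w @ x_adv))  # model confidence drops after the perturbation
```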
Open Access Article
Towards Software-Defined Delay Tolerant Networks
Network 2023, 3(1), 15-38; https://doi.org/10.3390/network3010002 - 28 Dec 2022
Abstract
This paper proposes a Software-Defined Delay Tolerant Networking (SDDTN) architecture as a solution to managing large Delay Tolerant Networking (DTN) networks in a scalable manner. This work is motivated by the planned deployments of large DTN networks on the Moon and beyond in deep space. Current space communication involves relatively few nodes and is heavily deterministic and scheduled, which will not be true in the future. It is unclear how these large space DTN networks, consisting of inherently intermittent links, will be able to adapt to dynamically changing network conditions. In addition to the proposed SDDTN architecture, this paper explores data plane programming and the Programming Protocol-Independent Packet Processors (P4) language as a possible method of implementing this SDDTN architecture, enumerates the challenges of this approach, and presents intermediate results.
Full article
(This article belongs to the Special Issue Advanced Technologies in Network and Service Management)
Open Access Article
A Performance Evaluation of In-Memory Databases Operations in Session Initiation Protocol
Network 2023, 3(1), 1-14; https://doi.org/10.3390/network3010001 - 28 Dec 2022
Cited by 1
Abstract
Real-time communication has witnessed a dramatic increase in daily usage in recent years. In this domain, the Session Initiation Protocol (SIP) is a well-known protocol found to provide trusted services (voice or video) to end users along with efficiency, scalability, and interoperability. Just like other Internet technologies, SIP stores its related data in databases with a predefined data structure. Recently, SIP technologies have adopted the real advantages of in-memory databases as cache systems to ensure fast database operations during real-time communication. Meanwhile, in industry, several in-memory databases have been implemented with different structures (e.g., query types, data structure, persistency, and key/value size). However, there are limited resources and poor recommendations on how to select a proper in-memory database for SIP communications. This paper provides recommendations for efficient in-memory databases that are best fitted to SIP servers by evaluating three types of databases: Memcache, Redis, and Local (OpenSIPS built-in). The evaluation was conducted by experimentally measuring the impact of in-memory operations (store and fetch) on the SIP server under heavy load traffic in different scenarios. To sum up, the evaluation results show that the Local database consumed less memory than Memcached and Redis for read and write operations. When persistency is considered, Memcache is the preferable database selection due to its throughput of 25.20 KB/s and call–response time of 0.763 s.
Full article
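The store/fetch measurement can be pictured with a generic timing harness. The sketch below uses an in-process dict as a stand-in backend; a real comparison like the paper's would swap Redis, Memcached, or the OpenSIPS local cache in behind the same two functions.

```python
# Generic benchmark sketch (assumed harness, not the paper's testbed): time a
# burst of store and fetch operations against a cache backend. The backend here
# is an in-process dict standing in for a real in-memory database client.
import time

backend = {}

def store(key, value):
    backend[key] = value

def fetch(key):
    return backend.get(key)

def benchmark(n_ops=100_000):
    t0 = time.perf_counter()
    for i in range(n_ops):
        store(f"sip:user{i}", f"contact{i}")
    t1 = time.perf_counter()
    for i in range(n_ops):
        fetch(f"sip:user{i}")
    t2 = time.perf_counter()
    return {"store_ops_per_s": n_ops / (t1 - t0), "fetch_ops_per_s": n_ops / (t2 - t1)}

print(benchmark())
```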

Open Access Tutorial
Coexistence of Railway and Road Services by Sharing Telecommunication Infrastructure Using SDN-Based Slicing: A Tutorial
Network 2022, 2(4), 670-706; https://doi.org/10.3390/network2040038 - 01 Dec 2022
Abstract
This paper provides a detailed tutorial to develop a sandbox to emulate coexistence scenarios for road and railway services in terms of sharing telecommunication infrastructure using software-defined network (SDN) capabilities. This paper provides detailed instructions for the creation of network topology using Mininet–WiFi that can mimic real-life coexistence scenarios between railways and roads. The network elements are programmed and controlled by the ONOS SDN controller. The developed SDN application can differentiate the data traffic from railways and roads. Data traffic differentiation is carried out using a VLAN tagging mechanism. Further, it also provides comprehensive information about the different tools that are used to generate the data traffic that can emulate messaging, video streaming, and critical data transmission of railway and road domains. It also provides the steps to use SUMO to represent the selected coexistence scenarios in a graphical way.
Full article
(This article belongs to the Special Issue Network Slicing)
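The traffic-differentiation mechanism the tutorial relies on, VLAN tagging, can be summarized in a few lines: packets are steered into the railway or road slice according to their VLAN ID. The IDs and packet fields below are assumptions, independent of the Mininet-WiFi/ONOS setup described.

```python
# Generic sketch (assumed mapping, not the tutorial's ONOS application): steer
# packets into the railway or road slice by VLAN ID, which is the
# traffic-differentiation mechanism the tutorial uses.

SLICE_BY_VLAN = {100: "railway", 200: "road"}    # VLAN IDs are assumptions

def classify(packet):
    """packet: dict with an optional 'vlan' field."""
    return SLICE_BY_VLAN.get(packet.get("vlan"), "best_effort")

packets = [{"src": "10.0.1.5", "vlan": 100},     # train signalling message
           {"src": "10.0.2.9", "vlan": 200},     # road-side unit video
           {"src": "10.0.3.1"}]                  # untagged background traffic
print([classify(p) for p in packets])            # ['railway', 'road', 'best_effort']
```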
Open Access Feature Paper Article
Cloud Workload and Data Center Analytical Modeling and Optimization Using Deep Machine Learning
Network 2022, 2(4), 643-669; https://doi.org/10.3390/network2040037 - 18 Nov 2022
Abstract
Predicting workload demands can help to achieve elastic scaling by optimizing data center configuration, such that increasing/decreasing data center resources provides an accurate and efficient configuration. Predicting workload and optimizing data center resource configuration are two challenging tasks. In this work, we investigate workload and data center modeling to help in predicting workload and data center operation that is used as an experimental environment to evaluate optimized elastic scaling for real data center traces. Three methods of machine learning are used and compared with an analytical approach to model the workload and data center actions. Our approach is to use an analytical model as a predictor to evaluate and test the optimization solution set and find the best configuration and scaling actions before applying it to the real data center. The results show that machine learning with an analytical approach can help to find the best prediction values of workload demands and evaluate the scaling and resource capacity required to be provisioned. Machine learning is used to find the optimal configuration and to solve the elasticity scaling boundary values. Machine learning helps in optimization by reducing elastic scaling violation and configuration time and by categorizing resource configuration with respect to scaling capacity values. The results show that the configuration cost and time are minimized by the best provisioning actions.
Full article
(This article belongs to the Special Issue Advanced Technologies in Network and Service Management)
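The predict-then-scale loop can be sketched with a deliberately simple predictor: forecast the next workload sample, then map the predicted utilization to a scaling action before touching the real data center. The moving-average predictor and thresholds below are assumptions, not the paper's ML or analytical models.

```python
# Hedged sketch (assumed predictor and thresholds, not the paper's models):
# predict the next workload sample with a short moving average, then map the
# prediction to an elastic-scaling action before applying it to the data center.

def predict_next(history, window=3):
    recent = history[-window:]
    return sum(recent) / len(recent)

def scaling_action(predicted_load, capacity, scale_out_at=0.8, scale_in_at=0.3):
    utilization = predicted_load / capacity
    if utilization > scale_out_at:
        return "scale_out"      # add resources before the spike arrives
    if utilization < scale_in_at:
        return "scale_in"       # release idle resources
    return "hold"

workload = [120, 150, 180, 240, 300]    # requests/s trace (assumed)
pred = predict_next(workload)
print(pred, scaling_action(pred, capacity=280.0))   # -> 240.0 scale_out
```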
Open Access Article
Detection of Malicious Network Flows with Low Preprocessing Overhead
Network 2022, 2(4), 628-642; https://doi.org/10.3390/network2040036 - 04 Nov 2022
Abstract
Machine learning (ML) is frequently used to identify malicious traffic flows on a network. However, the requirement of complex preprocessing of network data to extract features or attributes of interest before applying the ML models restricts their use to offline analysis of previously captured network traffic to identify attacks that have already occurred. This paper applies machine learning analysis for network security with low preprocessing overhead. Raw network data are converted directly into bitmap files and processed through a Two-Dimensional Convolutional Neural Network (2D-CNN) model to identify malicious traffic. The model has high accuracy in detecting various malicious traffic flows, even zero-day attacks, based on testing with three open-source network traffic datasets. The overhead of preprocessing the network data before applying the 2D-CNN model is very low, making it suitable for on-the-fly network traffic analysis for malicious traffic flows.
Full article
(This article belongs to the Special Issue Advanced Technologies in Network and Service Management)
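The low-overhead preprocessing described, converting raw network data directly into bitmaps, amounts to packing a flow's bytes into a fixed-size grayscale image. The image dimensions below are assumptions; the 2D-CNN itself is not reproduced here.

```python
# Sketch of the low-overhead preprocessing step described in the abstract
# (dimensions assumed): pack a flow's raw bytes into a fixed-size grayscale
# bitmap that a 2D-CNN can consume directly, with no feature engineering.
import numpy as np

def flow_to_bitmap(raw_bytes, side=32):
    """Truncate or zero-pad raw bytes to side*side values and reshape to a 2-D image."""
    buf = np.frombuffer(raw_bytes[: side * side], dtype=np.uint8)
    padded = np.zeros(side * side, dtype=np.uint8)
    padded[: buf.size] = buf
    return padded.reshape(side, side)

packet_bytes = bytes(range(256)) * 3             # stand-in for captured flow bytes
image = flow_to_bitmap(packet_bytes)
print(image.shape, image.dtype)                   # (32, 32) uint8, ready for a CNN
```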
Open Access Feature Paper Article
Protecting Chiller Systems from Cyberattack Using a Systems Thinking Approach
Network 2022, 2(4), 606-627; https://doi.org/10.3390/network2040035 - 02 Nov 2022
Abstract
Recent world events and geopolitics have brought the vulnerability of critical infrastructure to cyberattacks to the forefront. While there has been considerable attention to attacks on Information Technology (IT) systems, such as data theft and ransomware, the vulnerabilities and dangers posed by industrial control systems (ICS) have received significantly less attention. What is very different is that industrial control systems can be made to do things that could destroy equipment or even harm people. For example, in 2021 the US encountered a cyberattack on a water treatment plant in Florida that could have resulted in serious injuries or even death. These risks are based on the unique physical characteristics of these industrial systems. In this paper, we present a holistic, integrated safety and security analysis, which we call Cybersafety, based on the STAMP (System-Theoretic Accident Model and Processes) framework, for one such industrial system, an industrial chiller plant, as an example. In this analysis, we identify vulnerabilities emerging from interactions between technology, operator actions, and organizational structure, and provide recommendations to mitigate the resulting loss scenarios in a systematic manner.
Full article
(This article belongs to the Special Issue Advances on Networks and Cyber Security)
Open Access Article
An Efficient Information Retrieval System Using Evolutionary Algorithms
Network 2022, 2(4), 583-605; https://doi.org/10.3390/network2040034 - 28 Oct 2022
Cited by 2
Abstract
When it comes to web search, information retrieval (IR) represents a critical technique, as the number of web pages keeps growing. However, web users face major problems: retrieved documents unrelated to the user query (i.e., low precision), a lack of relevant document retrieval (i.e., low recall), and the need for acceptable retrieval time and minimal storage space. This paper proposes a novel advanced document-indexing method (ADIM) with an integrated evolutionary algorithm. The proposed IRS includes three main stages. The first stage is preprocessing, which consists of two steps, reading the dataset documents and applying the advanced document-indexing method (ADIM), resulting in a set of two tables. The second stage is the query-searching algorithm, which produces a set of words or keywords and retrieves the related documents. The third stage, the searching algorithm, consists of two steps: a modified genetic algorithm (MGA) with new fitness functions using a cross-point operator and dynamic-length chromosomes, and the adaptive function of the culture algorithm (CA). The proposed system ranks the documents most relevant to the user query by adding a simple parameter (∝) to the fitness function to guarantee convergence, and retrieves the most relevant documents by integrating the MGA with the CA to achieve the best accuracy. This system was simulated using a free dataset called WebKb, containing web pages of computer science departments at multiple universities. The dataset is composed of 8280 semi-structured HTML documents. Experimental results and evaluation measurements showed 100% average precision with 98.5236% average recall for 50 test queries, while the average response time was 00.46.74.78 milliseconds with 18.8 MB of memory space for document indexing. The proposed work outperforms comparable approaches in the literature, representing a remarkable leap in the studied field.
Full article
(This article belongs to the Special Issue Advanced Technologies in Network and Service Management)
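The general shape of the ranking step can be sketched as a similarity-based fitness nudged by a small parameter. The cosine similarity and the role given to ∝ (alpha) below are assumptions for illustration; the actual MGA/CA fitness functions are the authors' own.

```python
# Illustrative sketch (the exact MGA/CA fitness is the authors'; this only shows
# the general shape): rank documents by cosine similarity between query and
# document term vectors, nudged by a small parameter alpha to aid convergence.
from math import sqrt

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def fitness(query_vec, doc_vec, alpha=0.05):
    return cosine(query_vec, doc_vec) + alpha     # alpha: small bias term (assumed role)

query = [1, 1, 0, 0]                              # query term weights
docs = {"d1": [1, 0, 0, 0], "d2": [1, 1, 1, 0], "d3": [0, 0, 1, 1]}
ranked = sorted(docs, key=lambda d: fitness(query, docs[d]), reverse=True)
print(ranked)                                     # -> ['d2', 'd1', 'd3']
```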
Open Access Feature Paper Article
An Uncertainty-Driven Proactive Self-Healing Model for Pervasive Applications
Network 2022, 2(4), 568-582; https://doi.org/10.3390/network2040033 - 25 Oct 2022
Abstract
The ever-increasing demand for services of end-users in the Internet of Things (IoT) often causes great congestion in the nodes dedicated to serving their requests. Such nodes are usually placed at the edge of the network, becoming the intermediates between the IoT infrastructure and Cloud. Edge nodes offer many advantages when adopted to perform processing activities that are realized close to end-users, limiting the latency in the provision of responses. In this article, we attempt to solve the problem of the potential overloading of edge nodes by proposing a mechanism that always keeps free space in their queue to host high-priority processing tasks. We introduce a proactive, self-healing mechanism that utilizes the principles of Fuzzy Logic, in combination with a non-parametric statistical method that reveals the trend of nodes’ loads as depicted by the incoming tasks and their capability to serve them in the minimum possible time. Through our approach, we manage to ensure the uninterrupted service of high-priority tasks, taking into consideration the demand for tasks as well. Based on this approach, we ensure the fastest possible delivery of results to the requestors while keeping the latency for serving high-priority tasks at the lowest possible levels. A set of experimental scenarios is adopted to evaluate the performance of the suggested model by presenting the corresponding numerical results.
Full article
(This article belongs to the Special Issue Emerging Networks and Systems for Edge Computing)
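Two of the ingredients named in the abstract, a non-parametric trend test over recent node loads and a rule that keeps queue space free for high-priority tasks, can be sketched as follows. The window, thresholds, and reservation rule are assumptions; the fuzzy controller itself is not reproduced.

```python
# Sketch of the ingredients named in the abstract (thresholds and window are
# assumptions): a Mann-Kendall style non-parametric trend statistic over recent
# queue loads, used to decide whether to reserve queue slots for incoming
# high-priority tasks.

def mann_kendall_s(samples):
    """S > 0: rising load trend, S < 0: falling, S ~ 0: no clear trend."""
    s = 0
    for i in range(len(samples) - 1):
        for j in range(i + 1, len(samples)):
            s += (samples[j] > samples[i]) - (samples[j] < samples[i])
    return s

def reserve_slots(queue_len, capacity, load_history, base_reserve=1):
    trend = mann_kendall_s(load_history)
    extra = 2 if trend > 0 else 0                 # proactively keep more room when load rises
    return max(0, min(capacity - queue_len, base_reserve + extra))

loads = [0.42, 0.47, 0.51, 0.58, 0.63]            # recent node load samples (assumed)
print(mann_kendall_s(loads), reserve_slots(queue_len=6, capacity=10, load_history=loads))
```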
Topics
Topic in
Applied Sciences, Electronics, Modelling, Network, J. Imaging
Computer Vision and Image Processing
Topic Editor: Silvia Liberata Ullo. Deadline: 30 June 2023
Topic in
Applied Sciences, BDCC, Future Internet, Information, Network, Sci
Social Computing and Social Network Analysis
Topic Editors: Carson K. Leung, Fei Hao, Giancarlo Fortino, Xiaokang Zhou. Deadline: 31 December 2023
Topic in
Algorithms, Electronics, Information, Network, Sensors
Towards Edge-Cloud Continuum
Topic Editors: Marcin Paprzycki, Albert Zomaya, Carlos Enrique Palau Salvador, David Bader, Marek Bolanowski, Maria Ganzha, Mohamed Essaaidi, Sébastien Roland Marie Joseph Rndineau, Sri Niwas Singh, Yutaka Watanobe. Deadline: 31 January 2024
Topic in
Algorithms, Computation, Information, Mathematics, Network
Complex Networks and Social Networks
Topic Editors: Jie Meng, Xiaowei Huang, Minghui Qian, Zhixuan Xu. Deadline: 31 July 2024

Special Issues
Special Issue in
Network
Networking Technologies for Cyber-Physical Systems
Guest Editor: Xinrong Li. Deadline: 30 June 2023
Special Issue in
Network
Advanced Technologies in Network and Service Management
Guest Editors: Hakim Mellah, Filippo Malandra. Deadline: 31 July 2023
Special Issue in
Network
Advances in Edge and Cloud Computing
Guest Editors: Dawei Li, Minjun Xiao, Sadoon Azizi, Wei Chang, Ning Wang, Huan Zhou. Deadline: 31 August 2023
Special Issue in
Network
IoT Sensing and Networking with UAVs
Guest Editors: Dimitris Kanellopoulos, Changsheng You, Hazim Shakhatreh. Deadline: 15 September 2023