Special Issue "Novel Algorithms and Protocols for Networks"

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Electrical, Electronics and Communications Engineering".

Deadline for manuscript submissions: closed (30 September 2020).

Special Issue Editors

Prof. Dr. Davide Careglio
Guest Editor
Department of Computer Architecture, Universitat Politècnica de Catalunya, 08034 Barcelona, Spain
Interests: network optimization; novel internet architecture; energy-efficiency strategies
Assoc. Prof. Mirosław Klinkowski
Guest Editor
Central Chamber for Telecommunication Metrology (Z-12), National Institute of Telecommunications, Warsaw, Poland
Interests: optical networking; modeling and optimization; network design
Prof. Dr. Francesco Palmieri
Guest Editor
Department of Computer Science, University of Salerno, Via Giovanni Paolo II 132, I-84084 Fisciano, SA, Italy
Interests: high performance networking protocols and architectures; routing algorithms; network security

Special Issue Information

Dear Colleagues,

Today, applications can be instantiated in a number of datacenters located in different segments of the network, from the core to the edge. Users accessing these applications have stringent requirements in terms of latency, reliability, mobility, and security. Hence, application orchestration frameworks must be agile enough to deploy different instances of applications in multiple locations in real time, following the requirements and sometimes the location of users. Consequently, the network supporting such applications must be dynamic, support multihoming and high user mobility rates, and provide low-latency and secure access.

From this perspective, key enabling technologies have recently arisen. The introduction of softwarization and virtualization at different levels of the network segments enables the optimization of the overall performance, versatility, efficiency, resiliency, security, and automation.

In this context, this Special Issue targets the latest proposals and results on algorithms and protocols that can deliver the level of performance and QoS required by the applications envisioned in the post-5G era, including, but not limited to, the following topics:

  • Novel distribution of computing operations to improve application performance
  • Intelligent data storage, processing and movement
  • Smart deployment of context aware functionalities
  • Novel security-by-design architecture models
  • Novel algorithms and protocols to deploy, operate, monitor, and troubleshoot networks automatically

Prof. Dr. Davide Careglio
Assoc. Prof. Mirosław Klinkowski
Prof. Dr. Francesco Palmieri
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Network optimization
  • Distributed computing
  • Autonomous networks
  • Intelligent network planning and operation
  • QoS/QoE guarantees

Published Papers (12 papers)


Editorial


Editorial
Special Issue: Novel Algorithms and Protocols for Networks
Appl. Sci. 2021, 11(5), 2296; https://doi.org/10.3390/app11052296 - 05 Mar 2021
Viewed by 345
Abstract
Today, applications can be instantiated in a number of data centers located in different segments of the network, from the core to the edge [...] Full article
(This article belongs to the Special Issue Novel Algorithms and Protocols for Networks)

Research


Article
Efficient Algorithm for Providing Live Vulnerability Assessment in Corporate Network Environment
Appl. Sci. 2020, 10(21), 7926; https://doi.org/10.3390/app10217926 - 09 Nov 2020
Cited by 4 | Viewed by 895
Abstract
The time gap between the public announcement of a vulnerability and its detection and reporting to stakeholders is an important factor for the cybersecurity of corporate networks. A large delay preceding the elimination of a critical vulnerability presents a significant risk to the network security and increases the probability of sustained damage. Thus, accelerating the process of vulnerability identification and prioritization helps to reduce the probability of a successful cyberattack. This work introduces a flexible system that collects information about all known vulnerabilities present in the system, gathers data from the organizational inventory database, and finally integrates and processes all collected information. Thanks to the application of parallel processing and non-relational databases, the results of this process are available with negligible delay. The subsequent vulnerability prioritization is performed automatically on the basis of the calculated CVSS 2.0 and 3.1 scores for all scanned assets. The environmental CVSS vector component is evaluated accurately because the environmental data are imported directly from the organizational inventory database. Full article
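The prioritization step described in the abstract ranks findings by their CVSS scores. As a rough illustration of that idea only (this is not the paper's system; the asset names, CVE identifiers, and scores below are invented), a minimal sketch:

```python
def severity(score):
    """Map a CVSS 3.1 base score to its qualitative severity rating."""
    if score >= 9.0:
        return "Critical"
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    return "Low" if score > 0.0 else "None"

def prioritize(findings):
    """Order (asset, cve, cvss_score) findings so the riskiest come first."""
    return sorted(findings, key=lambda f: f[2], reverse=True)

# Invented inventory: in the described system, scores would come from the
# scanner and the environmental component from the inventory database.
findings = [
    ("web-01", "CVE-2020-0001", 9.8),
    ("db-02",  "CVE-2020-0002", 5.4),
    ("app-03", "CVE-2020-0003", 7.5),
]
ranked = prioritize(findings)
```

The rating boundaries (Critical ≥ 9.0, High ≥ 7.0, Medium ≥ 4.0) follow the published CVSS 3.1 qualitative severity scale.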

Article
Nxt-Freedom: Considering VDC-Based Fairness in Enforcing Bandwidth Guarantees in Cloud Datacenter
Appl. Sci. 2020, 10(21), 7874; https://doi.org/10.3390/app10217874 - 06 Nov 2020
Cited by 1 | Viewed by 425
Abstract
In the cloud datacenter, under the multi-tenant model, network resources should be fairly allocated among VDCs (virtual datacenters). Conventionally, the allocation of cloud network resources is on a best-effort basis, so the specific information of network resource allocation is unclear. Previous research has either aimed to provide a minimum bandwidth guarantee, or focused on realizing work conservation according to the VM-to-VM (virtual machine to virtual machine) flow policy or per-source policy, or both policies. However, they failed to consider allocating redundant bandwidth among VDCs in a fair way. This paper presents a bandwidth-guarantee enforcement framework, NXT-Freedom, which allocates network resources on the basis of per-VDC fairness and can achieve work conservation. In order to guarantee per-VDC fair allocation, a hierarchical max–min fairness algorithm is put forward in this paper. In order to ensure that the framework can be applied to non-congestion-free network cores and achieve scalability, NXT-Freedom decouples the computation of per-VDC allocation from the execution of allocation, but this brings some CPU overheads resulting from bandwidth enforcement. We observe that there is no need to enforce non-blocking virtual networks. Leveraging this observation, we distinguish the virtual network type of a VDC to eliminate part of the CPU overheads. The evaluation results of a prototype prove that NXT-Freedom can achieve the isolation of per-VDC performance and shows fast adaptation to flow variation in the cloud datacenter. Full article
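The hierarchical max–min fairness algorithm itself is not given in the abstract; as a hedged sketch, the flat, single-resource form of max–min fair allocation (progressive filling) that such schemes build on looks like this, with an invented capacity and invented VDC demands:

```python
def max_min_fair(capacity, demands):
    """Progressive filling: in each round, offer every unsatisfied demand
    an equal share of the capacity that is still unallocated."""
    alloc = {k: 0.0 for k in demands}
    remaining = dict(demands)
    cap = float(capacity)
    while cap > 1e-9 and remaining:
        share = cap / len(remaining)
        for k in list(remaining):
            grant = min(share, remaining[k])
            alloc[k] += grant
            remaining[k] -= grant
            cap -= grant
            if remaining[k] <= 1e-9:
                del remaining[k]    # demand fully satisfied
    return alloc

# Three VDCs competing for a 10 Gbps link: small demands are met in full,
# and the large one absorbs the leftover capacity.
alloc = max_min_fair(10.0, {"vdc-a": 2.0, "vdc-b": 4.0, "vdc-c": 10.0})
```

Small demands are never throttled below what an equal split would give them, which is the fairness property the framework extends hierarchically across VDCs.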

Article
Latency-Aware DU/CU Placement in Convergent Packet-Based 5G Fronthaul Transport Networks
Appl. Sci. 2020, 10(21), 7429; https://doi.org/10.3390/app10217429 - 22 Oct 2020
Cited by 2 | Viewed by 882
Abstract
The 5th generation mobile networks (5G) based on virtualized and centralized radio access networks will require cost-effective and flexible solutions for satisfying high-throughput and latency requirements. The next generation fronthaul interface (NGFI) architecture is one of the main candidates for achieving this. In the NGFI architecture, baseband processing is split and performed in radio (RU), distributed (DU), and central (CU) units. These entities are virtualized and run on general-purpose processors forming a processing pool (PP) facility. Given that the PPs may be spread over the network and have limited capacity, this leads to an optimization problem concerning the placement of DUs and CUs. In the NGFI network scenario, the radio data between the RU, DU, CU, and a data center (DC), in which the traffic is aggregated, are transmitted in the form of packets over a convergent packet-switched network. Because packet transmission is nondeterministic, special attention must be paid to ensuring appropriate quality of service (QoS) levels for the latency-sensitive traffic flows. In this paper, we address the latency-aware DU and CU placement (LDCP) problem in NGFI. LDCP concerns the placement of DU/CU entities in PP nodes for a given set of demands, subject to the latency-related QoS requirements of the traffic flows. To this end, we use mixed integer linear programming (MILP) to formulate and solve the LDCP optimization problem. To ensure that the latency requirements are satisfied, we apply a reliable latency model, which is included in the MILP model as a set of constraints. To assess the effectiveness of the MILP method and analyze the network performance, we run a broad set of experiments in different network scenarios. Full article
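At toy scale, the flavor of latency-constrained DU/CU placement can be shown by brute force rather than MILP. Everything below is invented (node latencies, capacities, costs, and budgets); a real formulation would jointly place DUs and CUs for many RUs and traffic flows:

```python
from itertools import product

# Invented candidate processing pools: one-way latency (ms) from the RU,
# spare DU/CU slots, and a hosting cost (central pools are cheaper).
latency = {"PP1": 0.05, "PP2": 0.2, "PP3": 1.0}
capacity = {"PP1": 1, "PP2": 2, "PP3": 4}
cost = {"PP1": 5, "PP2": 3, "PP3": 1}

DU_BUDGET = 0.25   # tight fronthaul-like bound on RU->DU latency (ms)
CU_BUDGET = 1.5    # looser midhaul-like bound on RU->CU latency (ms)

def place():
    """Exhaustively pick the cheapest feasible (DU, CU) node pair."""
    best = None
    for du, cu in product(latency, repeat=2):
        if latency[du] > DU_BUDGET or latency[cu] > CU_BUDGET:
            continue                      # latency constraint violated
        need = {du: 1}
        need[cu] = need.get(cu, 0) + 1
        if any(capacity[n] < need[n] for n in need):
            continue                      # pool capacity exceeded
        c = cost[du] + cost[cu]
        if best is None or c < best[0]:
            best = (c, du, cu)
    return best
```

The same feasibility conditions (latency budgets and pool capacities) become linear constraints in the MILP model, which scales to realistic instances where enumeration does not.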

Article
A Comparative Evaluation of Nature Inspired Algorithms for Telecommunication Network Design
Appl. Sci. 2020, 10(19), 6840; https://doi.org/10.3390/app10196840 - 29 Sep 2020
Cited by 4 | Viewed by 823
Abstract
The subject of the study was the application of nature-inspired metaheuristic algorithms to node configuration optimization in optical networks. The main objective of the optimization was to minimize capital expenditure, which includes the costs of optical node resources, such as transponders and amplifiers used in a new generation of optical networks. For this purpose, a model that takes into account the physical phenomena in the optical network is proposed. Selected nature-inspired metaheuristic algorithms were implemented and compared with a reference, deterministic algorithm based on linear integer programming. For the cases studied, the obtained results show a large advantage in using metaheuristic algorithms. In particular, the evolutionary algorithm, the bees algorithm, and the harmony search algorithm showed superior performance for the considered data sets corresponding to large optical networks, for which the integer programming-based algorithm failed to find an acceptable sub-optimal solution within the assumed maximum computational time. All optimization methods were compared for selected instances of realistic teletransmission networks of different dimensions, subject to traffic demand sets extracted from real traffic data. Full article
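As a hedged illustration of the kind of search such metaheuristics perform (not the authors' implementation; the transponder types, costs, capacities, and demands are invented), a minimal (1+1) evolutionary loop over per-node equipment choices:

```python
import random

random.seed(1)  # deterministic illustration

# Invented node configuration problem: pick one transponder type per node;
# types trade cost against capacity, and each node's demand must be covered.
TYPE_COST = [1, 3, 7]
TYPE_CAPACITY = [10, 40, 100]
DEMANDS = [8, 35, 90, 12, 60]

def cost(config):
    """CapEx of a configuration, with a heavy penalty for infeasibility."""
    total = 0
    for node, t in enumerate(config):
        if TYPE_CAPACITY[t] < DEMANDS[node]:
            total += 1000                 # demand not covered
        total += TYPE_COST[t]
    return total

def evolve(generations=500):
    """(1+1) evolutionary loop: mutate one gene, keep the better config."""
    best = [2] * len(DEMANDS)             # start feasible: largest type everywhere
    for _ in range(generations):
        cand = list(best)
        cand[random.randrange(len(cand))] = random.randrange(len(TYPE_COST))
        if cost(cand) <= cost(best):
            best = cand
    return best, cost(best)
```

Population-based variants (the bees algorithm, harmony search) differ in how candidates are generated and recombined, but share this evaluate-and-keep structure.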

Article
Spectrum Decision-Making in Collaborative Cognitive Radio Networks
Appl. Sci. 2020, 10(19), 6786; https://doi.org/10.3390/app10196786 - 28 Sep 2020
Cited by 5 | Viewed by 734
Abstract
Spectral decision making is a function of the cognitive cycle. It aims to select spectral opportunities within a set of finite possibilities. Decision-making methodologies based on collaborative information exchanges are used to improve the selection process. For collaborative decision making to be efficient, decisions need to be analyzed based on the amount of information: using too little data can produce inefficient decisions, while using too much can result in high computational costs and delays. This paper presents three contributions: the incorporation of a collaborative strategy for decision making, the use of real data, and the analysis of the amount of information through the number of failed handoffs. The collaborative model acts as a two-way information node; the information it coordinates corresponds to the Global System for Mobile Communications (GSM) band, and the amount of information to be shared is selected according to five levels of collaboration (10%, 20%, 50%, 80%, and 100%), where each percentage represents the share of users that take part in the process. The decision-making process is carried out using two multi-criteria techniques: Feedback Fuzzy Analytical Hierarchical Process (FFAHP) and Simple Additive Weighting (SAW). The results are presented in two comparative analyses: the first analyzes the number of failed handoffs, and the second quantifies the level of collaboration against the number of failed handoffs. Based on the obtained percentage ratios, the information shared, and the average increase rates, the level of collaboration that leads to efficient results is determined to be between 20% and 50% for the given number of failed handoffs. Full article
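Of the two multi-criteria techniques mentioned, SAW has a compact standard form: normalize each criterion column, then score every alternative by a weighted sum. A sketch with invented channel data (availability treated as a benefit criterion, delay as a cost criterion; the weights are arbitrary):

```python
def saw_rank(alternatives, weights, is_benefit):
    """Simple Additive Weighting: normalize each criterion column, then
    score every alternative by the weighted sum of its normalized values."""
    scores = {name: 0.0 for name in alternatives}
    for j, w in enumerate(weights):
        col = [alternatives[n][j] for n in alternatives]
        hi, lo = max(col), min(col)
        for n in alternatives:
            x = alternatives[n][j]
            if is_benefit[j]:
                norm = x / hi if hi else 0.0   # larger is better
            else:
                norm = lo / x if x else 0.0    # smaller is better (delay)
            scores[n] += w * norm
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Invented candidate channels: (availability probability, expected delay ms).
channels = {"ch1": (0.9, 20.0), "ch2": (0.6, 5.0), "ch3": (0.8, 10.0)}
ranking = saw_rank(channels, weights=(0.6, 0.4), is_benefit=(True, False))
```

With these numbers, the low-delay channel wins despite its lower availability, showing how the weights steer the trade-off.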

Article
VNF Placement for Service Function Chains with Strong Low-Delay Restrictions in Edge Computing Networks
Appl. Sci. 2020, 10(18), 6573; https://doi.org/10.3390/app10186573 - 20 Sep 2020
Cited by 2 | Viewed by 1050
Abstract
The edge computing paradigm, allowing the location of network services close to end users, defines new network scenarios. One of them considers the existence of micro data centers, with reduced resources but located closer to service requesters, to complement remote cloud data centers. This hierarchical and geo-distributed architecture allows the definition of different time constraints that can be taken into account when mapping services onto data centers. This feature is especially useful in the Virtual Network Function (VNF) placement problem, where the network functions composing a Service Function Chain (SFC) may require more or less strict delay restrictions. We propose the ModPG (Modified Priority-based Greedy) heuristic, a VNF placement solution that weighs the latency, bandwidth, and resource restrictions, but also the instantiation cost of VNFs. ModPG improves on a previous proposal (called PG). Although both heuristics share the same optimization target, the reduction of the total substrate resource cost, the ModPG heuristic identifies and solves a limitation of the PG solution: the mapping of sets of SFCs that include a significant proportion of SFC requests with strong low-delay restrictions. Unlike the performance evaluation of the PG heuristic, where the share of SFC requests with strong low-delay restrictions was not analyzed as a factor, in this work both solutions are compared considering the presence of 1%, 15%, and 25% of this type of SFC request. Results show that the ModPG heuristic optimizes the target cost similarly to the original proposal and, at the same time, offers better performance when a significant number of low-delay-demanding SFC requests are present. Full article
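A greedy placement in the spirit described, serving delay-critical chains first and reserving scarce edge capacity for them, can be sketched as follows. The node hierarchy, delays, and chain requests are all hypothetical, and the actual ModPG heuristic weighs further factors (bandwidth and VNF instantiation cost):

```python
# Invented substrate: micro data centers sit close to users (low delay)
# but have little capacity; the cloud is distant but large.
nodes = {"edge": [4, 1.0], "metro": [16, 5.0], "cloud": [64, 20.0]}

def place_chains(requests):
    """Greedy: serve the most delay-sensitive chains first, mapping each
    chain to the most central node that still meets its delay budget."""
    placement = {}
    for name, cpu, budget in sorted(requests, key=lambda r: r[2]):
        for node in reversed(list(nodes)):       # try cloud, metro, edge
            cap, delay = nodes[node]
            if delay <= budget and cap >= cpu:
                nodes[node][0] -= cpu            # consume node capacity
                placement[name] = node
                break
        else:
            placement[name] = None               # chain rejected
    return placement

# (name, cpu units needed, end-to-end delay budget in ms)
reqs = [("sfc-a", 2, 2.0), ("sfc-b", 4, 25.0), ("sfc-c", 3, 6.0)]
mapping = place_chains(reqs)
```

Ordering by delay budget is exactly the kind of prioritization that matters when a large share of requests has strong low-delay restrictions.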

Article
Distributed Cell Clustering Based on Multi-Layer Message Passing for Downlink Joint Processing Coordinated Multipoint Transmission
Appl. Sci. 2020, 10(15), 5154; https://doi.org/10.3390/app10155154 - 27 Jul 2020
Cited by 2 | Viewed by 1127
Abstract
Joint processing coordinated multipoint transmission (JP-CoMP) has gained high attention as part of the effort to cope with the increasing levels of demand in the next-generation wireless communications systems. By clustering neighboring cells and with cooperative transmission within each cluster, JP-CoMP efficiently mitigates inter-cell interference and improves the overall system throughput. However, choosing the optimal clustering is formulated as a nonlinear mathematical problem, making it very challenging to find a practical solution. In this paper, we propose a distributed cell clustering algorithm that maximizes the overall throughput of the JP-CoMP scheme. The proposed algorithm renders the nonlinear mathematical problem of JP-CoMP clustering into an approximated linear formulation and introduces a multi-layer message-passing framework in order to find an efficient solution with a very low computational load. The main advantages of the proposed algorithm are that (i) it enables distributed control among neighboring cells without the need for any central coordinators of the network; (ii) the computational load imposed on each cell is kept to a minimum; and (iii) required message exchanges via backhaul result in only small levels of overhead on the network. The simulation results verify that the proposed algorithm finds an efficient JP-CoMP clustering that outperforms previous algorithms in terms of both the sum throughput and edge user throughput. Moreover, the convergence properties and the computational complexity of the proposed algorithm are compared with those of previous algorithms, confirming its usefulness in practical implementations. Full article

Article
A Blockchain-Based Secure Inter-Hospital EMR Sharing System
Appl. Sci. 2020, 10(14), 4958; https://doi.org/10.3390/app10144958 - 19 Jul 2020
Cited by 8 | Viewed by 1233
Abstract
In recent years, blockchain-related technologies and applications have gradually emerged. Blockchain technology is essentially a decentralized database maintained collectively, and it is now widely applied in various fields. At the same time, with the growth of medical technology, medical information is becoming increasingly important in terms of patient identity background, medical payment records, and medical history. Medical information can be the most private information about a person, but due to issues such as operational errors within the network or attacks by malicious actors, there have been major leaks of sensitive personal information in the past. Ensuring the privacy of patients and protecting these medical records has therefore become an issue worth studying. On the other hand, under the current medical system, a patient's EMR (electronic medical record) cannot be searched across hospitals. When the patient attends another hospital for treatment, repeated examinations will occur, resulting in a waste of medical resources. Therefore, we propose a blockchain-based secure inter-hospital EMR sharing system in this article. Through the programmatic authorization mechanism of smart contracts, the security of EMRs is guaranteed. In addition to essential mutual authentication, the proposed scheme also provides and guarantees data integrity, nonrepudiation, user untraceability, forward and backward secrecy, and resistance to replay attacks. Full article

Article
Predeployment of Transponders for Dynamic Lightpath Provisioning in Translucent Spectrally–Spatially Flexible Optical Networks
Appl. Sci. 2020, 10(8), 2802; https://doi.org/10.3390/app10082802 - 17 Apr 2020
Cited by 3 | Viewed by 750
Abstract
We consider a dynamic lightpath provisioning problem in translucent spectrally–spatially flexible optical networks (SS-FONs) in which flexible signal regeneration is achieved with transponders operating in back-to-back (B2B) configurations. In the analyzed scenario, an important aspect with a significant impact on the network performance is the decision on the placement of transponders, which can be used for two purposes: transmitting/receiving (add/drop) optical signals at the source/destination nodes and regenerating signals at some intermediate nodes. We propose a new algorithm called scaled average used regenerators (SAUR). The key idea of the SAUR method is based on a data analytics approach, i.e., the algorithm exploits information on network traffic characteristics and the applied dynamic routing algorithm to obtain additional knowledge for the decision on transponder placement. The numerical results obtained for two representative topologies highlight that the proposed SAUR method outperforms reference algorithms in terms of the amount of traffic that can be accepted in the network. In other words, the placement of transponders yielded by the SAUR method makes it possible to increase the SS-FON throughput using only the existing resources, i.e., the network operator does not have to invest in new devices or fibers. Full article

Article
Sonum: Software-Defined Synergetic Sampling Approach and Optimal Network Utilization Mechanism for Long Flow in a Data Center Network
Appl. Sci. 2020, 10(1), 171; https://doi.org/10.3390/app10010171 - 24 Dec 2019
Cited by 3 | Viewed by 826
Abstract
Long flow detection and load balancing are crucial techniques for data center operation and management. However, they have been studied independently in previous work. In this paper, we propose a complete solution called Sonum, which can perform long flow detection and scheduling at the same time. Sonum consists of a software-defined synergetic sampling approach and an optimal network utilization mechanism. Sonum detects long flows by consolidating and processing sampling information from multiple switches. Compared with the existing prime solution, the missed detection rate of Sonum is reduced by 2.3%–5.1%. After obtaining the long flow information, Sonum takes minimizing the potential packet loss rate as the optimization target and translates load balancing into the problem of arranging a minimum-packet-loss path for long flows. This paper also introduces a heuristic algorithm for solving this optimization problem. The experimental results show that Sonum outperforms ECMP and Hedera in terms of network throughput and flow completion time. Full article
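The consolidation step, merging per-switch samples to decide which flows are long, can be illustrated minimally (the flow keys, byte counts, and threshold are invented; the paper's approach additionally coordinates how switches sample):

```python
from collections import defaultdict

def detect_long_flows(per_switch_samples, threshold_bytes):
    """Consolidate samples of the same flow taken at different switches
    and flag flows whose estimated size crosses the threshold."""
    estimate = defaultdict(int)
    for samples in per_switch_samples:
        for flow, sampled_bytes in samples.items():
            # a flow may be sampled at several switches along its path;
            # keep the largest estimate rather than summing, to avoid
            # counting the same bytes twice
            estimate[flow] = max(estimate[flow], sampled_bytes)
    return {f for f, b in estimate.items() if b >= threshold_bytes}

# Invented samples from two switches, keyed by a flow identifier.
long_flows = detect_long_flows(
    [{"f1": 9000, "f2": 400}, {"f1": 12000, "f3": 200}],
    threshold_bytes=10000,
)
```

Combining observations from multiple vantage points is what lets a flow undersampled at one switch still be detected, which is the intuition behind the reduced missed detection rate.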

Article
Anomaly Detection in IoT Communication Network Based on Spectral Analysis and Hurst Exponent
Appl. Sci. 2019, 9(24), 5319; https://doi.org/10.3390/app9245319 - 06 Dec 2019
Cited by 13 | Viewed by 1215
Abstract
Internet traffic monitoring is a crucial task for the security and reliability of communication networks and Internet of Things (IoT) infrastructure. A statistical description of the traffic is used to detect traffic anomalies. Nowadays, intruders and cybercriminals use different techniques to bypass existing intrusion detection systems based on signature and anomaly detection. In order to detect new attacks more effectively, a model of anomaly detection using the Hurst exponent vector and the multifractal spectrum is proposed. It is shown that multifractal analysis is sensitive to any deviation of network traffic properties resulting from anomalies. The proposed traffic analysis methods are well suited for protecting critical data and maintaining the continuity of internet services, including the IoT. Full article
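A self-contained rescaled-range (R/S) estimator of the Hurst exponent, one common ingredient of such self-similarity-based detectors (this is the textbook estimator, not necessarily the authors' exact procedure):

```python
import math

def rs(window):
    """Rescaled range R/S of one window of the series."""
    n = len(window)
    mean = sum(window) / n
    cum, lo, hi, dev = 0.0, 0.0, 0.0, 0.0
    for x in window:
        cum += x - mean              # cumulative deviation from the mean
        lo, hi = min(lo, cum), max(hi, cum)
        dev += (x - mean) ** 2
    s = math.sqrt(dev / n)           # standard deviation of the window
    return (hi - lo) / s if s else 0.0

def hurst(series, window_sizes=(16, 32, 64, 128)):
    """Estimate H as the log-log slope of average R/S against window size."""
    xs, ys = [], []
    for w in window_sizes:
        chunks = [series[i:i + w] for i in range(0, len(series) - w + 1, w)]
        xs.append(math.log(w))
        ys.append(math.log(sum(rs(c) for c in chunks) / len(chunks)))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

Uncorrelated traffic yields an estimate near 0.5 and strongly persistent traffic approaches 1; a sustained shift in the estimated H of monitored traffic is the kind of property deviation such detectors flag.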
