Review

A Survey on Software Defined Network-Enabled Edge Cloud Networks: Challenges and Future Research Directions

by Baha Uddin Kazi 1,*, Md Kawsarul Islam 2, Muhammad Mahmudul Haque Siddiqui 2 and Muhammad Jaseemuddin 2

1 Faculty of Applied Science and Technology, Humber Polytechnic, Toronto, ON M9W 5L7, Canada
2 Department of Electrical, Computer, and Biomedical Engineering, Toronto Metropolitan University, Toronto, ON M5B 2K3, Canada
* Author to whom correspondence should be addressed.
Network 2025, 5(2), 16; https://doi.org/10.3390/network5020016
Submission received: 26 February 2025 / Revised: 30 April 2025 / Accepted: 13 May 2025 / Published: 20 May 2025
(This article belongs to the Special Issue Convergence of Edge Computing and Next Generation Networking)

Abstract

The explosion of connected devices and data transmission in the Internet of Things (IoT) era places a substantial burden on the capability of cloud computing. Moreover, these IoT devices are mostly positioned at the edge of the network and are limited in resources. To address these challenges, edge cloud-distributed computing networks have emerged. Because of the distributed nature of edge cloud networks, many research works consider software defined networks (SDNs) and network function virtualization (NFV) as key enablers for managing, orchestrating, and load balancing resources. This article provides a comprehensive survey of these emerging technologies, focusing on SDN controllers, orchestration, and the role of artificial intelligence (AI) in enhancing the capabilities of controllers within edge cloud computing networks. More specifically, we present an extensive survey of research proposals on the integration of SDN controllers and orchestration with edge cloud networks. We further introduce a holistic overview of SDN-enabled edge cloud networks and an inclusive summary of edge cloud use cases and their key challenges. Finally, we address some challenges and potential research directions for further exploration in this vital research area.

1. Introduction

The rapid growth of connected devices for seamless connectivity, computation, and data transmission has led to the rise of the Internet of Things (IoT). By 2030, more than 200 billion connected devices will be capturing data on how we live, work, and operate the machines on which we depend, and this number is projected to reach trillions by 2040 [1,2]. Most of these devices will be positioned at the edge of the Internet, enabling new applications that will transform various aspects of our everyday lives. Moreover, IoT devices are constrained by limited resources, such as storage and processing capacity, which can affect the performance, security, and reliability of applications. Therefore, by bringing computation and data storage nearer to the application’s location, edge cloud-distributed computing systems can enhance response times, improve security, and save bandwidth.
Cloud computing is a centralized architecture that can offer virtually unlimited computing capacity, networking, and on-demand elastic storage capability [3]. For over a decade, the elastic cloud model has been highly successful, providing significant value to both enterprises and cloud providers. End users also receive seemingly unlimited resources and data-intensive services over the wide-area network. Although centralized cloud architecture facilitates resource management and maintenance, cloud computing struggles to satisfy the service demands of the new wave of latency-, bandwidth-, and security-sensitive applications in the IoT era. The global market for the IoT is continuously growing and is expected to grow from USD 245 billion in 2020 to USD 8131 billion by 2030 [4,5]. IoT applications, such as face recognition, ultra-high-definition video, augmented reality (AR), virtual reality (VR), voice semantic analysis, live video analytics, smart city services, smart industry, smart healthcare services, personalized healthcare, and more, have challenged the scalability and resiliency of traditional cloud computing models. These applications demand higher performance in terms of computing, latency, storage, and security. Additionally, the limited bandwidth of today’s backbone network is insufficient to support the continuous transmission of the rapidly growing volume of data produced by increasingly demanding mobile IoT applications. To address the challenges of cloud computing, edge computing emerged, bringing computation and storage closer to the edge of the network, with a focus on reducing latency and backbone bandwidth usage for real-time applications [3]. However, edge computing suffers from limited computational capability and storage. Table 1 compares cloud computing and edge computing based on various features and demonstrates the advantages and disadvantages of each.
To achieve the advantages of both cloud computing and edge computing, edge cloud architecture has emerged. The fundamental concept of edge cloud architecture is the convergence of cloud computing, edge computing, and networking to leverage the benefits of both paradigms. Therefore, the edge cloud has significant potential to improve system-level performance to meet the requirements of emerging real-time IoT applications. Edge cloud systems are distributed across multiple networks; thus, computing resources are considerably heterogeneous and bandwidths vary from network to network. Because of this unique nature of edge cloud networks, efficient controllers for the distributed edge, offloading algorithms, resource allocation, traffic load balancing, network routing, and mobility management are crucial [3,8].
Several research works have surveyed edge cloud networks from different perspectives, as presented below. Mao et al. [9] surveyed mobile edge computing (MEC) from the perspective of communication, aiming at joint radio-and-computational resource management. Ren et al. presented a survey on end edge cloud networks, focusing on computing paradigms; they also presented a comparative study of MEC, fog, cloudlet, and cloud [10]. Baktir, Ozgovde, and Ersoy [11] surveyed the architecture of multi-tier edge computing and SDN technologies. The authors in [12] conducted a survey on multi-access edge computing for enabling IoT applications; they also discussed computation offloading, resource allocation, mobility management, and security of MEC and IoT integration. Pan and McElhannon [13] presented a survey investigating the key enabling technologies from the perspective of IoT application delivery on the edge cloud. The study presented by Jamil et al. [14] described some key requirements of the industrial IoT from the perspective of an edge cloud approach; they also discussed key features of various IoT platforms, such as ThingsBoard and Microsoft Azure IoT, along with some open challenges. A survey on collaborative deep learning in edge cloud paradigms is presented in [15], which also identifies some open issues in collaborative deep learning in edge cloud computing. The survey paper presented by Gkonis et al. [16] discusses the IoT edge cloud (IEC) and identifies some challenges in the context of broadband wireless networks.
Although the above-mentioned surveys are inspiring, none of them specifically discussed the importance and integration of SDN-enabled controllers and orchestration in the edge cloud. In light of the above, in this survey, we first extensively summarize industry-wide edge cloud use cases, killer apps, and their key challenges. After that, we introduce a holistic overview of SDN-enabled edge cloud networks. Next, we present a comprehensive survey of research proposals on the integration of SDN controllers and orchestration for edge cloud networks. Moreover, we emphasize leveraging AI and ML to improve the capabilities of SDN controllers in edge cloud networks. Figure 1 shows the organization of the paper. It outlines the logical structure of the research into seven main sections, each further divided into detailed subsections. This approach enhances the depth and clarity of the presented study.
The rest of the article is organized as follows. Section 2 presents some of the key use cases and deployment areas of edge cloud networks, highlighting real-world applications. Section 3 introduces the fundamental concepts of SDN-enabled edge cloud architecture; mobile edge cloud and fixed edge cloud are also discussed in this section. Section 4 discusses the basic infrastructure of edge cloud systems, exploring storage, computing, and networking within the context of edge cloud systems. Section 5 presents an extensive review of the key enabling technologies of edge cloud systems and explores several proposed SDN controllers for an edge cloud environment. Section 6 discusses major challenges in realizing the edge cloud, such as dynamic offloading, controller placement, intelligent management, workload distribution, federated and interoperable services, and security; it also outlines potential research directions to address these challenges and advance the research area. Finally, Section 7 concludes the paper.

2. Edge Cloud Deployment Areas and Use Cases

Resource management in the edge cloud is extremely important. Software defined networks (SDNs) have rapidly captured attention in data center networks, making them a potential technology for edge cloud controllers. The key principle of the SDN is the separation of the programmable control plane from the forwarding plane. Recently, many researchers have proposed the use of artificial intelligence (AI) and machine learning (ML) in SDN controllers to improve resource allocation, security, traffic management, and other domains [17,18]. Therefore, a programmable, intelligent SDN controller can enhance the use of network resources by learning the global view of the network and enabling data-driven decision making. Table 2 shows the potential use cases of the IoT and edge cloud, along with some corresponding key challenges.
Potential edge cloud applications can be found in different areas such as the industrial IoT, restaurants, healthcare, retail, manufacturing, construction, transportation, agriculture, energy, and so on. The edge cloud is used to aggregate, interact with, filter, investigate, process, and analyze data at the boundary of the network. The primary driver behind edge computing is to enable IoT devices and IoT applications’ capabilities with the aid of edge servers [13,21]. As an example, restaurants can use predictive analytics to forecast food preparation: what is required now and when more food needs to be made. IoT applications use ML- and AI-enabled analytics to foresee the number of customers and vehicles entering the store, so the essential objective of utilizing the edge is to make a trustworthy prediction. The customer is left waiting if the prediction fails (due to disconnection) or takes too long (due to transferring a large amount of data). To improve safety, the transportation sector is moving toward intelligent edge-based solutions. The edge cloud is utilized in a variety of modes of transportation, such as ground, railway, and aviation [19]. For instance, to find cracks in train wheels, the railway industry places high-definition cameras in bungalows next to the track. Cracks can cause a wheel to break and derail the entire train. In this instance, the bandwidth requirement is very bursty in nature, producing GBs of data in a brief amount of time. To prevent serious accidents, cracks must also be reliably detected and reported within minutes. Surveillance cameras are also increasingly deployed for road safety and traffic management. Therefore, video processing and analytics applications need to continuously process and analyze captured video locally or at the edge to provide decisions in milliseconds or microseconds. Edge computing has potential use cases in smart health as well. The healthcare sector generates enormous amounts of data using devices, sensors, and medical equipment, so the requirements for latency, processing time, and storage are very stringent. Edge computing can help by utilizing automation and machine learning capabilities, enabling better diagnosis of critical diseases and better patient care. Additionally, edge computing enables medical monitoring systems to react instantly rather than having to wait for a cloud server to do so [22].

3. SDN-Enabled Edge Cloud Architecture

Edge cloud computing, or multi-access edge cloud computing, is a concept that involves placing compute and storage resources near the user. This approach aims to meet the extensive computing needs of emerging low-latency and data-intensive applications. It allows complex, data-intensive applications to run closer to IoT devices, reducing the delays related to offloading large volumes of data to the centralized cloud. To realize the distributed edge cloud computing architecture, networking plays a critical role. Therefore, cloudification of computing along with networking provides an integrated vision across the fields of computing and networking [23,24]. A layered architecture view of SDN-enabled edge cloud computing and networking is depicted in Figure 2.
Multi-tenant user applications on an IoT/access device use the edge cloud infrastructure through access networks. The IoT device layer may contain millions of embedded systems, sensors, cameras, smart phones, drones, connected vehicles, smart houses, smart factories, robots, etc. These devices collect large-scale (MBs–PBs) data from many multi-tenant applications and transfer those data to edge cloud clusters through LTE/5G/Wi-Fi/wired multi-access networks. The access network provides a high-bandwidth, low-latency, and reliable link between access device(s) and the edge cloud layer. The edge cloud layer provides the required computing, storage, and networking services to the user application(s). Edge clouds are distributed in nature, so information or data will be processed near the physical locations where users are connected. As a result, the latency is also reduced compared to the core cloud. The total end-to-end latency of an application depends on various parameters such as processing delay, propagation delay, transmission delay, routing delay, and switching delay (in an SDN-enabled network). For cloud-only models, the total latency can be expressed as shown in Equation (1).
$$L_{CC} = \alpha \, d_{min}(ED, AP) + \beta \, d(AP, CC) + t_{node} \qquad (1)$$
In Equation (1), $L_{CC}$ is the total cloud latency, $d_{min}(ED, AP)$ is the distance between the end device and the nearest access point (AP), $d(AP, CC)$ is the distance between the access point and the central cloud, $\alpha$ and $\beta$ are proportionality constants, and $t_{node}$ represents the total time a task spends at the central cloud. For the edge cloud network, the total latency can be represented as shown in Equation (2).
$$L_{EC} = \alpha \, d_{min}(ED, EC) + t_{node} + d_s \qquad (2)$$
In Equation (2), $L_{EC}$ is the total edge cloud latency, $d_{min}(ED, EC)$ is the distance between the end device and the edge cloud, and $d_s$ is the control plane switching latency [8].
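To make the comparison concrete, the following minimal Python sketch evaluates Equations (1) and (2). The distances, proportionality constants, and node times used below are purely hypothetical values chosen to illustrate the calculation; they are not measurements from the surveyed works.

```python
# Illustrative comparison of cloud-only vs. edge cloud latency, Equations (1) and (2).
# All parameter values are hypothetical and chosen only to show the calculation.

def cloud_latency(d_ed_ap, d_ap_cc, t_node, alpha=1.0, beta=1.0):
    """Equation (1): L_CC = alpha * d_min(ED, AP) + beta * d(AP, CC) + t_node."""
    return alpha * d_ed_ap + beta * d_ap_cc + t_node

def edge_latency(d_ed_ec, t_node, d_s, alpha=1.0):
    """Equation (2): L_EC = alpha * d_min(ED, EC) + t_node + d_s."""
    return alpha * d_ed_ec + t_node + d_s

if __name__ == "__main__":
    # Hypothetical values, expressed directly as millisecond contributions.
    l_cc = cloud_latency(d_ed_ap=2.0, d_ap_cc=40.0, t_node=10.0)
    l_ec = edge_latency(d_ed_ec=2.0, t_node=15.0, d_s=1.0)
    print(f"Cloud-only latency L_CC = {l_cc:.1f} ms")  # 52.0 ms
    print(f"Edge cloud latency L_EC = {l_ec:.1f} ms")  # 18.0 ms
```

Even with the larger per-task time at a resource-constrained edge node, the shorter distance term dominates in this toy example, which is the intuition behind Equations (1) and (2).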
The edge cloud layer can have two sublayers: infrastructure, and virtualization and software defined networking (SDN). Virtualization and SDNs are the two key enabling technologies to virtualize resources and manage network, computing, and storage devices. We discuss these sublayers in detail in Section 4 and Section 5. Many new services can be deployed at edge sites close to the users or customers, which reduces the burden on the centralized cloud server and the congestion in the core networks. Thousands of IP backhaul links are deployed in the core network, and through these links, edge cloud clusters are connected to the central cloud. There are several core network technologies such as IP networks, IP/MPLS, optical networks, satellite, cellular, SD-WAN, etc. Edge cloud to central cloud bandwidth might be limited with high latency; in some cases, such as rural areas, the links are unreliable. Most of the data will be processed at the edge, reducing the pressure on the remote central cloud. Therefore, the five key benefits of edge cloud computing and networking are (1) lowering the amount of traffic in the core network, (2) reducing the latency for applications and services, (3) improving the bandwidth, (4) scaling diverse network services, and (5) increasing operational reliability [12,13].
Based on the deployment characteristics, edge cloud can be categorized into two types: mobile edge cloud and fixed edge cloud [19]. Figure 3 shows a comparison between mobile edge cloud and fixed edge cloud with respect to some key characteristics.

3.1. Mobile Edge Cloud

The mobile edge cloud (MEC) is a network architecture concept that brings cloud-like functionality closer to the edge of the network to improve the performance of high-throughput applications and reduce latency. Being closer to end users, the MEC enhances service delivery, but it also introduces challenges related to resource allocation and load balancing. These challenges arise due to factors like user mobility and network load. Virtualization, through the use of virtual machines and lightweight containers, has paved new ways for dynamic service migration to handle user mobility and MEC system loads. Dynamic resource transfer techniques are essential to achieve the goals of load balancing, fault tolerance, and system maintenance. Container migration, in particular, is emerging as a promising solution to enable efficient migration of resources across virtualized networks and MEC environments [9,25]. The conceptual architecture of the mobile edge cloud is shown in Figure 4.

3.2. Fixed Edge Cloud

As we mentioned before, the early vision of edge computing was to augment mobile devices’ limited local resources with neighboring edge servers to run low-latency, high-bandwidth, and data-intensive applications. However, today’s mission- and safety-critical industrial applications and video analytics need onsite edge deployment, called edge-site or fixed-edge deployment. For example, offshore oil rigs need to analyze live video to identify safety hazards; restaurants need to forecast the amount of food to prepare based on the number of customers and vehicles entering the location; railways use high-definition cameras for detecting cracks in the wheels of a train, etc. The dominant reasons to adopt the fixed edge cloud for all of the above industrial applications are the scarcity of bandwidth and cloud outages due to unreliable network links [19]. Figure 5 presents the simplified architecture of a fixed edge cloud in an enterprise environment.
A fixed edge cloud (FEC) is a set of dedicated edge devices used as the primary place of storage and computation [19,26]. It consists of input devices such as sensors, cameras, and robots, and edge clusters. Large-scale data (MBs–PBs) are generated by the input devices, so high-bandwidth links with reliable connectivity and low latency are required. Limited bandwidth on the outgoing links can be a problem, especially in remote or rural areas.
Edge clusters collect large-scale data from the input devices and transfer them to a traditional central cloud through different transmission mechanisms such as WAN, cellular, and satellite. However, these connections have limited bandwidth, high latency, and unreliable connectivity. Satellite and cellular connectivity, which have lower bandwidth capacity, can be used as a backup.

4. Edge Cloud Infrastructure

The concept of both the MEC and FEC involves placing storage, computing, and network components near the users. This approach aims to meet extensive computing requirements by providing low latency, high bandwidth, and reliable network connectivity. As storage and computing resources are closer to the users, the edge computing paradigm is not a centralized architecture; instead, edge data centers are distributed at the network edge. Moreover, in cloud computing, multiple applications are provisioned on a single server, whereas in the edge cloud one application may need to be distributed across multiple edge servers. Therefore, in edge computing architecture, networking within edge data centers (DCs), between edge DCs, between edge DCs and cloud DCs, and between edge DCs and user devices is very important. As a result, to achieve the above goals, infrastructure, networking, resource virtualization, and control are the key considerations. In this section, we present the key infrastructural components needed to build an edge cloud computing system [26,27].

4.1. Storage

In the edge cloud architecture, cloud DCs are still part of the storage infrastructure. In addition, edge computing has distributed storage and computing (edge DC) resources placed close to end users’ devices. These edge DCs are smaller than cloud DCs and are typically placed in the metro network or at its edge. In the context of the MEC, such a resource is also called a MEC server, placed at the edge of the RAN. Moreover, co-operative in-network caching is also discussed in several research reports to support computational data storage and retrieval. Machine learning and deep learning have been proposed for recommending cacheable content, cache placement, caching strategies, and real-time data search to improve resource utilization [28,29]. These data are fetched from sensors or end user devices and are available for local computation. Using proactive caching can reduce backhaul usage by up to 22% [30].

4.2. Computing

Like storage, edge cloud computing also brings processing power near the user and provides faster responses, reducing the limitations of a centralized cloud server. By serving traffic at the network edge, data from user applications can avoid traveling a long distance to a cloud DC. Hence, the edge cloud can decrease transmission latency and the risk of congestion, disruption, and cyber attacks. In the research work presented by Gao et al. [31], the authors reported experimental results showing that the edge cloud can improve response time by about 51%. Corneo et al. [24] showed that, depending on the placement of the edge DC, latency can be improved by between 6% and 40%.

4.3. Networking

As we mentioned before, networking in the edge computing paradigm provides connectivity between edge DCs, between edge DCs and user devices, and between edge DCs and cloud DCs. End users can connect to the computing and storage resources through one of three access technologies: wired LAN, wireless LAN, and cellular networks. In general, networks between edge DCs and cloud DCs do not differ much from those in cloud architecture. In this case, the connection between an edge DC and a cloud DC could be established by WAN, cellular, or satellite technology. Deployment of edge servers can reduce operational costs by up to 67% for bandwidth-hungry and compute-intensive applications [27,30]. Moreover, Silvestro et al. [32] demonstrated that network delay could be significantly reduced in edge cloud networks.
As we mentioned before, edge cloud DCs are heterogeneous and deployed in proximity to customers; therefore, workloads are much more distributed, localized, and variable. To spread load and provide redundancy, the ability to flexibly interconnect edge DCs is very important. The integration of virtualization and SDN controllers with the edge cloud fits such requirements very well.

5. Key Enablers

Computing, networking, and storage traditionally focus on data processing, data communication, and data storing, respectively. Now, because of cloudification, all three areas are merging. Due to the highly distributed nature of edge clouds, resource allocation is more difficult than in a centralized data center. Moreover, the edge cloud needs to handle users with significant randomness in applications. Therefore, to improve the scalability, utilization, and efficient management of computing, storage, and networking resources on the edge cloud platform, virtualization and SDNs are the two key synergistic enabling technologies. In addition, the use of virtualization and SDNs in the edge cloud can facilitate the access of applications, dynamic resource provisioning based on the density of user requests, migration, and security. In the following subsections, we discuss virtualization and SDNs in the context of edge cloud network infrastructure [13,23].
Figure 6 shows a general idea of the integration of SDNs with the edge cloud infrastructure. In the edge cloud network, the SDN controller can manage physical network devices, while simultaneously coordinating and controlling virtual resources through APIs and orchestration. We discuss more details in the following sections.

5.1. Virtualization

Virtualization is a technology that delivers network resources, services, and functions in an abstract layer decoupled from the actual hardware. By applying virtualization techniques, the edge cloud architecture transforms from a dedicated physical component into a combined physical and software-based solution. Virtualization allows the creation, management, optimization, and sharing of resources (compute, storage, and network) among different applications. Virtual compute functions (VCFs), virtual network functions (VNFs), and virtual storage functions (VSFs) are realized as software instances on an abstraction layer by leveraging commodity servers and storage. In this way, VCFs, VNFs, and VSFs can be implemented in a software environment running on one or more physical devices located in data centers, distributed network nodes, or end user premises. This decoupling allows VCFs, VNFs, and VSFs to be relocated and instantiated at different network locations without the need to purchase and install new hardware.
Leveraging virtualization, network slicing provides multiple logical networks operating on a shared distributed network infrastructure with the intention of providing different services with heterogeneous QoS. Network slicing can be implemented using VMs, containers, and NFV instances at edge DCs, which can also be managed by an SDN controller hosted at the edge data center. Moreover, using a network slicing approach, virtualized entities in an edge data center can be orchestrated with those in peer edge or cloud data centers for load balancing, failure recovery, and multi-site collaboration [26].
In edge networks, clients generally request a set of network functions and resources. To accommodate these requests, software-based network functions, called service functions (SFs), are introduced. When these SFs are interconnected in a specific order, they form a service function chain (SFC). Service providers deploy an SFC across a shared physical network. Therefore, an SFC could be a flexible and agile approach for adding and deleting demanded functions to significantly improve end-to-end service performance and reduce the expense of edge cloud networks [33,34].
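As a purely illustrative sketch, an SFC can be modeled as an ordered list of software functions applied to each flow; the function names, their behavior, and the chain order below are assumptions chosen for illustration, not a deployment described in the cited works.

```python
# Minimal illustration of a service function chain (SFC): an ordered list of
# software-based service functions applied to a flow. The functions and the
# chain order are hypothetical examples, not a specific deployment.

from typing import Callable, Dict, List

ServiceFunction = Callable[[Dict], Dict]

def firewall(flow: Dict) -> Dict:
    flow.setdefault("checks", []).append("firewall")
    return flow

def nat(flow: Dict) -> Dict:
    flow["src_ip"] = "203.0.113.10"              # translate to a public address
    flow.setdefault("checks", []).append("nat")
    return flow

def load_balancer(flow: Dict) -> Dict:
    flow["backend"] = hash(flow["src_ip"]) % 3   # pick one of three edge servers
    flow.setdefault("checks", []).append("lb")
    return flow

def apply_sfc(flow: Dict, chain: List[ServiceFunction]) -> Dict:
    """Apply each service function in order; adding or removing a demanded
    function only requires editing the chain list."""
    for sf in chain:
        flow = sf(flow)
    return flow

if __name__ == "__main__":
    chain = [firewall, nat, load_balancer]       # SFs interconnected in a specific order
    print(apply_sfc({"src_ip": "10.0.0.5", "dst_port": 443}, chain))
```

The point of the sketch is that the chain itself is just data, which is what makes adding or deleting demanded functions flexible and agile.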
To fully realize the potential of automated networking paradigms, several challenges raised by research communities and industry experts need to be addressed. Some of these challenges are the autonomous self-deployment of VNFs, self-termination of VNFs, self-healing, placement of VNFs, and intelligent management and orchestration (MANO) [35,36].

5.2. Software Defined Networking

Software defined networking (SDN) is another key enabler that can offer a flexible architecture for edge cloud networks by decoupling the controlling and forwarding functions. The decoupling of the control plane and forwarding plane in SDNs could provide enhanced network management, control, and traffic engineering. An SDN controller is capable of dynamically computing a path based on the requirements of an application [37]. A simplified architecture of an SDN network is shown in Figure 7. As we mentioned before, the edge cloud is a large-scale distributed network architecture that brings storage and computing closer to the end user. Decoupling the control and data planes can play a crucial role in edge cloud networks for better management, control, and handling of user data. By using an SDN controller in an edge cloud network, administrators can modify the characteristics of switching devices and manage the routing tables to control data flow in the network [38,39]. Cloud providers such as Amazon, Google, and Microsoft have already started deploying SDNs in their cloud data centers [40,41,42]. Therefore, an SDN-enabled edge cloud architecture could provide dynamic and scalable network management capabilities to adapt to the changing resource demands and availability of various applications, such as autonomous vehicles, smart cities, and the industrial IoT [43].
Seamless communication between the data plane and the control plane is established by the southbound interface (SBI) or southbound API (SBAPI). It facilitates installing flow entries, pushing configurations, and enforcing policies in the network elements (NEs), as well as obtaining information from NEs for better management, monitoring, and telemetry. The OpenFlow protocol is a standard SBI, initially proposed by the Open Networking Foundation (ONF) [44,45].
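To illustrate how a controller uses the southbound interface, the following minimal sketch uses the open-source Ryu controller and OpenFlow 1.3 as one possible choice; the application class is illustrative only and is not part of any proposal surveyed here. It installs a table-miss flow entry that sends unmatched packets to the controller, which is the typical first step before a controller programs more specific flows.

```python
# Minimal Ryu application (run with `ryu-manager`) that reacts to switch
# connection and installs a table-miss flow entry over the OpenFlow SBI.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class MinimalEdgeController(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Table-miss entry: any packet not matching another flow is sent to
        # the controller, so the control plane can decide how to handle it.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]
        mod = parser.OFPFlowMod(datapath=datapath, priority=0,
                                match=match, instructions=inst)
        datapath.send_msg(mod)
```

The same FlowMod mechanism is what an edge cloud controller would use to push per-application paths computed by its control logic.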
In Table 3, we present a set of related works focusing on the edge cloud and its integration with SDNs and NFV. These research papers focus on portability, offloading, load balancing, task scheduling, resource allocation, and network control through the integration of SDNs, NFV, and machine learning to reduce latency and improve throughput, bandwidth management, etc.
In the SDN-enabled edge cloud architecture, data planes and control planes are integrated from the access network to the central cloud. The data plane is responsible for forwarding data, and the control plane is responsible for path selection, synchronization, resource management, and policy implementation. Moreover, the SDN controller supports centralized network management, providing network operators with global visibility of the network and the ability to program the controller for intelligent network service management by employing machine learning. Therefore, the major advantages of SDN-enabled edge cloud networks include (1) network management, (2) flexibility and load balancing, (3) per-flow bandwidth allocation, (4) multi-tenancy, (5) virtual application networks, (6) dynamically allocated workloads, (7) adaptability and interoperability, (8) VM or container mobility management, (9) reduced latency, and (10) lower cost. In the next section, we present, in detail, the software-defined controllers proposed by different researchers from industry and academia for edge cloud networks.

5.3. Software-Defined Controllers for Edge Cloud

As we mentioned in the previous section, SDNs are one of the primary enabling technologies for edge cloud networks. Their network programming capability makes them indispensable for controlling, monitoring, and orchestrating the network. In this section, we discuss several proposed solutions for the edge cloud using an SDN controller.

5.3.1. SDN Controller for HomeCloud

HomeCloud is one of the proposed works in the direction of the convergence of NFV and SDNs for edge cloud. One of the objectives of this open framework is to develop efficient edge cloud orchestration and dynamic delivery of IoT applications. NFV is considered for facilitating computing, storage, and networking closer to the user and SDNs are considered for network configuration, control, and management. Figure 8 shows the architecture of the HomeCloud framework and the interaction among different functional units.
The SDN controller in the above architecture is responsible for programming, controlling, configuring, and monitoring the overlay networks using southbound interfaces (SBIs) and for communicating with the application controller, orchestration, and NFV infrastructure server using northbound interfaces (NBIs). The NFV infrastructure allocates and manages network resources such as storage, computation, and bandwidth. Orchestration converts SLAs into the real deployment of applications in collaboration with the SDN controller. Finally, the application controller keeps track of and monitors application-level entities with the help of the SDN controller [64].

5.3.2. SDN Controller for Resource Allocation

The growing number of user applications and their diverse nature rely heavily on the computing, storage, and network resources of edge cloud networks. Therefore, to improve the performance of edge cloud networks, resource management is one of the key functions. SDNs can enhance the use of network resources by providing a global view of the network and automating resource allocation [52]. Keeping this in mind, Zaman, Jarray, and Karmouch [52] proposed a resource allocation framework for infrastructure-as-a-service (IaaS) provisioning using SDNs. In this framework, they showed that node and link provisioning in the edge cloud can be better co-ordinated using an SDN controller. Figure 9 shows the simplified architecture of the SDN-based edge cloud resource allocation framework.
Some of the major components in the above architecture are the request handler, QoS classifier, orchestrator, routing and configuration engine, and optimizer. The request handler manages each IaaS request and generates detailed information about the request. The orchestrator co-ordinates with the other functional components to manage resources and provision the IaaS requests across the underlying data networks. Based on the QoS classification and the available resources, the routing and configuration engine finds a set of routing paths. The optimization engine finds the optimized links and nodes and communicates them to the orchestrator so that the orchestrator can allocate resources efficiently to the requested IaaS. Many other research groups are also working on resource allocation in the edge cloud environment [57,65,66].
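The data flow through these components can be pictured as a simple pipeline. The sketch below is an illustrative simplification under our own assumptions; the data structures, QoS rule, and path-selection criterion are hypothetical and do not reproduce the framework in [52].

```python
# Illustrative pipeline for SDN-based IaaS provisioning:
# request handler -> QoS classifier -> routing engine -> optimizer -> orchestrator.
from dataclasses import dataclass
from typing import List

@dataclass
class IaaSRequest:                      # output of the request handler
    request_id: str
    bandwidth_mbps: int
    max_latency_ms: float

@dataclass
class CandidatePath:                    # a link/node combination in the substrate
    nodes: List[str]
    latency_ms: float
    free_bandwidth_mbps: int

def classify_qos(req: IaaSRequest) -> str:
    # QoS classifier: trivial latency-based class assignment (hypothetical rule).
    return "delay-sensitive" if req.max_latency_ms < 20 else "best-effort"

def find_paths(req: IaaSRequest, topology: List[CandidatePath]) -> List[CandidatePath]:
    # Routing and configuration engine: keep only paths meeting the request's needs.
    return [p for p in topology
            if p.latency_ms <= req.max_latency_ms
            and p.free_bandwidth_mbps >= req.bandwidth_mbps]

def optimize(paths: List[CandidatePath]) -> CandidatePath:
    # Optimizer: pick the feasible path with the lowest latency.
    return min(paths, key=lambda p: p.latency_ms)

def orchestrate(req: IaaSRequest, topology: List[CandidatePath]) -> CandidatePath:
    # Orchestrator: tie the components together and allocate the chosen path.
    qos_class = classify_qos(req)
    chosen = optimize(find_paths(req, topology))
    print(f"Request {req.request_id} ({qos_class}) provisioned on {chosen.nodes}")
    return chosen

if __name__ == "__main__":
    topology = [CandidatePath(["e1", "e2"], 8.0, 500),
                CandidatePath(["e1", "c1"], 35.0, 2000)]
    orchestrate(IaaSRequest("req-1", bandwidth_mbps=100, max_latency_ms=15.0), topology)
```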

5.3.3. LSTM-Based SDN Controller for Load Prediction and Balancing

With the rise of new delay-sensitive and resource-hungry applications and services, optimizing resource allocation and load balancing in the edge cloud environment has become crucial. SDNs play a key role in enabling load balancing between computing nodes as well as between distributed SDN controllers. Abderrahime et al. proposed a pre-emptive load balancing approach between controllers for delay-sensitive applications within a distributed SDN-enabled edge cloud architecture [50]. In this proactive mechanism, they use a Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) model, which is capable of remembering information from prior stages to make predictions with high accuracy. The load on an SDN controller at time $t$, defined as $L(t)$, is the sum of requests received from the data plane components, neighboring SDN controllers, and other network entities communicating with this controller. To predict and balance the load across distributed SDN controllers, a root controller is considered. This root controller is associated with the other controllers but does not directly manage any data plane components, as shown in Figure 10.
To optimize the load balancing and cost, this model is formulated as follows [50]:
$$\underset{X,\, M}{\text{minimize}} \;\; \omega \times LB + (1 - \omega) \times T_m \qquad (3)$$
In the above Equation (3), the weight factor $\omega \in [0, 1]$, $LB$ is the load balancing factor, and $T_m$ is the total migration cost. The algorithm optimizes load balancing in the control plane and minimizes latency.
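As an illustrative sketch of the prediction step only (this is not the authors' implementation; the layer sizes, window length, and synthetic data are assumptions), a small PyTorch LSTM can be trained to forecast the next controller load value from a sliding window of past loads:

```python
# Minimal LSTM load predictor for an SDN controller: given a window of past
# load samples L(t-w)..L(t-1), predict L(t). Sizes and data are illustrative.
import torch
import torch.nn as nn

class LoadPredictor(nn.Module):
    def __init__(self, hidden_size: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window, 1) -> predicted next load: (batch, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # use the last time step's hidden state

if __name__ == "__main__":
    window = 12
    model = LoadPredictor()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Synthetic series standing in for a historical controller load trace.
    series = torch.sin(torch.linspace(0, 20, 500)) + 1.5
    xs = torch.stack([series[i:i + window]
                      for i in range(len(series) - window)]).unsqueeze(-1)
    ys = series[window:].unsqueeze(-1)

    for _ in range(200):
        optimizer.zero_grad()
        loss = loss_fn(model(xs), ys)
        loss.backward()
        optimizer.step()

    print("Predicted next load:", model(xs[-1:]).item())
```

In a pre-emptive scheme, such a forecast would feed the optimization in Equation (3), so that switch migrations are planned before a controller actually overloads.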
This is still an active research area, and many other research and industry communities are also working to find an efficient solution. For example, Babou et al. [65] proposed a dynamic load balancing algorithm for home edge computing using SDNs; this work addresses the problem of controller overload using clustering and load balancing. The authors in [67] proposed a model for resource optimization and load balancing that takes into account user preferences, SLAs, and cost.

5.3.4. SDN Controller for Inter Edge Communication and Bulk Transfers

The demand for online services in various industries has led to a significant increase in the size and number of data centers. According to one report, the number of hyperscale data centers has tripled since 2013, reaching over 500 in 2019 [1]. Cloud providers often build their data centers in geographically dispersed locations close to their global customers in order to improve the performance of their services. For example, as of 2019, Google, Amazon, Facebook, and Microsoft all had data centers operating in multiple regions and cities around the world. The vast network of data centers that these companies operate allows them to support thousands of services, such as web hosting, email, and file storage, as well as streaming services for music, video, and other types of media. However, the global nature of these operations also puts pressure on the wide-area networks (WANs) between data centers, which carry transfers often tens of terabytes to petabytes in size and must meet high standards for availability, performance, and consistency [48,68].
Cloud providers that operate their own wide-area networks (WANs) to connect remote data centers often have fixed-term WAN resources and costs that they need to manage effectively. The objectives of inter-DC bulk data transfer are fairness, maximizing resource utilization, and minimizing transfer time and cost. To achieve these goals, industry players such as Google, Microsoft, and Facebook have adopted SDN controllers to operate their inter-DC WANs. Figure 11 shows a simplified architecture of an SDN-based control system for inter-DC bulk transfer.
With the adoption of SDN controllers, routing and forwarding decisions are performed centrally, which allows optimal solutions. Allocating appropriate transmission rates is also very important to improve the efficiency of bulk transfer. SDN controllers can make the transfer rate flexible and adaptable to network load and topology [48].

5.3.5. Hierarchical Edge Cloud SDN Controller

The hierarchical edge cloud SDN (HECSDN) controller is a multilayer approach to improve scalability and reduce computation delays in edge cloud networks. According to the traffic load, the HECSDN controller system allocates computational tasks, sharing the edge and cloud resources [70,71]. The system comprises two main components, the edge segment and the cloud segment, as shown in Figure 12. The edge segment includes two types of controllers: a first-tier central controller and second-tier local controllers. The cloud segment includes a cloud controller.
The central controller in the edge segment has a global view of traffic status in the edge network. It is responsible for distributing flow signals among the local controllers and the cloud controller. Local controllers compute the routes for incoming flow signal packets and set flow tables for all the relevant switches. They also provide traffic updates to the central controller to make sure that it maintains the real-time traffic status of the edge network.
The cloud controller in the cloud segment offers substantial computation capacity, supporting load balancing and delay reduction to satisfy QoS requirements. With the help of the cloud controller, the system can handle large amounts of traffic and greatly decrease the flow-level waiting time in the control plane. The central controller continuously monitors the status of both the cloud controller and all local controllers. If input arrival rates change, a controller overloads, or an application’s QoS requirements are not met, controllers send a load redistribution request to the central controller. Based on the traffic status, the central controller optimizes the load arrangement and ensures the QoS requirements of the application [70,71].
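The monitoring-and-redistribution loop described above can be sketched as follows; the overload threshold, data structures, and one-switch migration rule are illustrative assumptions of ours, not the HECSDN design itself.

```python
# Illustrative sketch of a central controller that monitors local controller
# loads and redistributes switches when a controller reports overload.
# The threshold and migration policy are hypothetical.
from typing import Dict, List

class CentralController:
    def __init__(self, overload_threshold: float = 0.8):
        self.overload_threshold = overload_threshold
        self.assignment: Dict[str, List[str]] = {}   # controller -> managed switches
        self.loads: Dict[str, float] = {}            # controller -> load fraction

    def report(self, controller: str, load: float, switches: List[str]) -> None:
        """Local controllers periodically report their load and managed switches."""
        self.loads[controller] = load
        self.assignment[controller] = switches

    def redistribute(self) -> None:
        """Move one switch from the most loaded controller to the least loaded one
        whenever the most loaded controller exceeds the overload threshold."""
        if not self.loads:
            return
        busiest = max(self.loads, key=self.loads.get)
        idlest = min(self.loads, key=self.loads.get)
        if self.loads[busiest] > self.overload_threshold and self.assignment[busiest]:
            migrated = self.assignment[busiest].pop()
            self.assignment[idlest].append(migrated)
            print(f"Migrated {migrated} from {busiest} to {idlest}")

if __name__ == "__main__":
    cc = CentralController()
    cc.report("local-1", load=0.92, switches=["s1", "s2", "s3"])
    cc.report("local-2", load=0.35, switches=["s4"])
    cc.redistribute()   # moves s3 from local-1 to local-2
```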

5.3.6. Reinforcement-Learning-Based SDN Controller for Load Balancing

Network traffic changes dynamically due to the types of applications and services, the locations of the users, and uneven demand for network resources. To handle such traffic imbalance, predicting traffic patterns is very important. However, forecasting traffic patterns is highly challenging due to short-term fluctuations, complex long-term trends, and the dynamic nature of networks. Kumari et al. [51] proposed a reinforcement learning scheme for balancing controller loads while minimizing switch migration. To determine whether a controller is overloaded, they use the following equations.
$$L_{c_j}(t) = \sum_{i=1}^{n} \lambda_{v_i}(t)\, x_{ij}(t) \qquad (4)$$
$$L_I(t) = \frac{\sum_{j=1}^{k} L_{c_j}(t)}{k} \qquad (5)$$
$$LR_{c_j}(t) = \frac{L_{c_j}(t)}{L_I(t)} \qquad (6)$$
where $L_{c_j}(t)$ is the instantaneous load, $L_I(t)$ is the ideal load, and $LR_{c_j}(t)$ is the ratio of the instantaneous load to the ideal load. $\lambda_{v_i}$ is the packet-in count at a controller from switch $v_i$ in time $t$, and $x_{ij}(t)$ indicates whether switch $v_i$ is assigned to controller $c_j$ at time $t$. If the ratio is greater than 1, it indicates an overloaded controller [51].
For learning and predicting traffic, they use a Q-learning algorithm in reinforcement learning, and the following equation is the Q-learning update rule they used in their experiment.
$$Q_{n+1}(s, a) = (1 - a_n)\, Q_n(s, a) + a_n \left[ G(s, a) - l_m G_{mg}(s, a) + \gamma \min_{a} Q_n(s, a) \right] \qquad (7)$$
where $s$ is the state and $a$ is the action or event. $l_m G_{mg}(s, a)$ represents the cost incurred for taking action $a$ in state $s$, weighted by the Lagrange multiplier $l_m$, and $Q_n(s, a)$ represents the Q-value in reinforcement learning. In the course of this survey, we also reviewed several closely related works that present reinforcement-learning-based approaches for computation offloading, route optimization, and resource allocation [54,58].
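For illustration only, the sketch below computes the overload check of Equations (4)–(6) for a toy assignment and performs a single Q-value update in the spirit of Equation (7); the numbers, cost terms, and state/action encoding are hypothetical and are not taken from [51].

```python
# Illustrative computation of the overload check (Equations (4)-(6)) and a
# single constrained Q-learning update in the spirit of Equation (7).
import numpy as np

def load_ratios(packet_in_counts, assignment):
    """packet_in_counts[i]: packet-in count lambda_{v_i}(t) from switch v_i.
    assignment[i][j] = 1 if switch v_i is handled by controller c_j."""
    counts = np.asarray(packet_in_counts, dtype=float)
    x = np.asarray(assignment, dtype=float)
    controller_loads = counts @ x                 # Equation (4): L_{c_j}(t)
    ideal_load = controller_loads.mean()          # Equation (5): L_I(t)
    return controller_loads / ideal_load          # Equation (6): overload if ratio > 1

def q_update(q, s, a, cost, constraint_cost, lagrange, s_next, lr=0.1, gamma=0.9):
    """One Q-value update with a Lagrange-weighted constraint cost."""
    target = cost - lagrange * constraint_cost + gamma * q[s_next].min()
    q[s, a] = (1 - lr) * q[s, a] + lr * target
    return q

if __name__ == "__main__":
    ratios = load_ratios(
        packet_in_counts=[120, 80, 40],
        assignment=[[1, 0], [1, 0], [0, 1]],      # two controllers, three switches
    )
    print("Load ratios:", ratios)                 # first controller is overloaded (> 1)

    q = np.zeros((4, 2))                          # 4 states, 2 migration actions
    q = q_update(q, s=0, a=1, cost=1.0, constraint_cost=0.2, lagrange=0.5, s_next=1)
    print("Updated Q[0,1]:", q[0, 1])
```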

5.3.7. Tungsten Fabric Controller

Tungsten Fabric (TF) is an open-source multi-cloud, multi-stack SDN controller that is responsible for providing the management, control, and analytics function for virtualized, containerized, or bare-metal networks [72]. The simplified architecture of TF is shown in Figure 13.
The TF architecture is composed of two basic components, controller and vRouter.
TF Controller: It is an SDN controller that has three key components: configuration, analytics, and control. The controller uses northbound REST APIs to communicate with the orchestrator, GUI, and applications, and southbound BGP, Netconf, and XMPP to communicate with the vRouters and the underlying network. The configuration module is responsible for configuring the network topology and can also be implemented using a policy-based or intent-based approach. The control module is responsible for communicating with the vRouters and underlying network devices. It exchanges routes to ensure the consistency of the network state and is also responsible for steering the data flow and sending the traffic flow configuration to the vRouters. The analytics component collects real-time network data such as logs, events, flow statistics, status of the network elements, and debugging data, and analyzes and correlates them for troubleshooting and better management of the network.
vRouter: It is installed on each host that runs workloads to provide integrated routing and forwarding between virtual machines, containers, and external networks. It also enforces network and security policies. The two key components of the vRouter are the agent and the forwarder. The vRouter agent is a user-space process that maintains XMPP sessions with the controller. It exchanges routes with and receives forwarding policies from the control component, and reports logs, statistics, and events to the analytics component of the controller. The vRouter forwarder performs packet processing based on the flows received from the agent. It drops a packet, forwards it to a local virtual machine, or encapsulates it and sends it to another destination.
Tungsten Fabric (TF) can provide seamless networking between VMs and containers deployed in an OpenStack environment. Moreover, its features, such as openness, multi-tenant networking, efficient edge replication, distributed load balancing, and service chaining, could make it a potential framework for edge cloud network deployment [72,73].
Several industry examples of edge computing platforms, including AWS Wavelength and Azure Local, incorporate software defined networking (SDN) features to support the delivery of low-latency applications at the network edge. Azure Local utilizes SDN controllers to create and manage virtual networks, enforce Quality of Service (QoS) policies, and ensure dynamic load balancing [74,75,76]. Similarly, AWS Wavelength Zones are deployed within telecommunication environments, where SDNs are employed to facilitate dynamic traffic routing and optimize latency-sensitive communication within localized geographic regions [77,78]. In the following section, we discuss several challenges and potential future research directions that need to be addressed in an SDN-enabled edge cloud environment.

6. Challenges and Future Research Scopes

After reviewing and presenting various related studies, we have identified a number of challenges that require further investigation. In this section, we present some key research challenges and future directions, with a focus on the integration of SDNs in edge cloud networks.

6.1. Distributed Architecture Design for Converged NFV, SDNs, and Edge Cloud

The edge cloud is a distributed system with considerable heterogeneity in computing, storage, and network resources, which needs to exchange network status information such as computational capability, load, and bandwidth. It needs to be integrated into a common infrastructure and managed in a holistic manner. As discussed before, SDNs separate the control plane and data plane of networks; the control plane could be designed to exchange computational capability, load, bandwidth, and so on in the edge cloud system. Virtual network functions (VNFs) created by NFV can be dynamically launched or terminated based on demand [8,23]. Integrating AI with SDNs and NFV within the edge cloud architecture could provide a promising solution to the challenges mentioned above [51]. Therefore, the integration of an AI-enabled SDN controller, NFV, and edge cloud could increase the performance and flexibility of the system through the programmability and configurability of the infrastructure and service layers. Moreover, the convergence of SDNs, NFV, and edge cloud could potentially open a new avenue for innovation in cost-effective service and application delivery. Therefore, this is a hugely promising area for future research.

6.2. Dynamic Offloading

To overcome the limitations of computation, storage, and energy of IoT or terminal devices, both academia and industry are considering data and computation offloading as a promising solution [10]. One of the primary objectives of an edge cloud network is to minimize service delay, as mentioned before. The service delay combines both execution delay and communication delay. Offloading is a technique that involves transferring computation and storage from resource-constrained end devices to resource-rich edge nodes in order to enhance execution performance. On the other hand, offloading computation also reduces the energy consumption of end devices, which ultimately reduces CO2 emissions [10,57]. Several research works have proposed offloading algorithms to improve execution performance and reduce energy consumption. The workload of an application can be offloaded to either a single edge server or multiple servers for concurrent execution [3]. However, dynamically allocating workload from a mobile device to the edge clouds remains a significant challenge. Researchers from academia and industry are working on deep reinforcement learning (DRL) for dynamic offloading. The research work presented by Nieto et al. [79] proposed a distributed DRL approach to optimize task offloading decisions to maximize quality-of-experience (QoE) in edge cloud computing. Zhu et al. [80] presented an energy-efficient distributed task offloading framework for edge cloud networks, combining a Long Short-Term Memory (LSTM) network with RL for robust decision making. Hence, AI-integrated controllers for dynamic offloading have become another key research topic in the area of SDN-enabled edge cloud networks, aiming to enhance application performance and user experience.

6.3. Federated and Interoperable Service

When an edge cloud service extends to multiple domains with multiple owners, it becomes more complex. Designing and developing protocols for interoperable services is another challenge in edge cloud networks and another active research topic. Some research works have proposed algorithms for the discovery of independent edge providers and the negotiation of service deployment and agreements with the provider. Aleksandr et al. [81] proposed an algorithm to use assets across different independent edge providers (IEPs). In the study demonstrated by Maheshwari et al. [8], the authors presented a cross-domain connectivity management protocol for the mobile edge cloud (MEC) in the control plane. Moreover, SDNs are evolving into software defined exchange (SDX) to apply software-defined concepts to large-scale interdomain networking operated by various organizations. SDX also includes features like application-specific peering, load balancing, and steering through network functions. Therefore, this crucial area of the edge cloud presents numerous potential research opportunities.

6.4. Integration of Mobile Edge and Fixed Edge Computing

The MEC provides computing power and service environments at the edge of the cellular network architecture. Mobile networks are embracing 5G and next generation cellular networks, which could provide an environment for widely deployed edge computing. A key distinction between MEC and FEC is that MEC is primarily based on cellular networks, whereas FEC relies on the Internet. Moreover, 5G networks will introduce new implementation complexity because of requirements such as ultra-dense heterogeneous networks, high-precision location-aware services, and ultra-low latency [23,82]. Deploying edge computing with 5G networks and integrating it with FEC could be a big challenge, which is another research area in edge cloud networks.

6.5. Application Aware Adaption and Orchestration

The remarkable rise of the IoT drives the rise of diverse applications. The edge serves as the primary storage location and computing resource for these applications, and the cloud is used opportunistically alongside the edge. Applications need to adapt gracefully based on the available resources at the edge as well as on their priority, according to constraints and properties such as low latency, mission criticality, and high-volume data [19]. Finding the appropriate adaptation strategy for dynamically changing resources at the edge site and for application priorities is therefore still an open issue. On the other hand, the end edge cloud network consists of billions of distributed connected end devices, numerous edge servers, and large-scale central servers. Nodes at different levels of the network have different abilities to compute and store, and the availability of resources changes dynamically. Moreover, different deployments support different network technologies such as cellular, broadband, satellite, and optical, and each technology offers different availability and performance [10,19]. Thus, more research effort is needed to develop network orchestration mechanisms for end edge cloud systems that optimize the allocation of resources while ensuring the critical needs of adaptive applications in a collaborative manner.

6.6. Edge Controller Placement

One of the major issues in SDN-enabled edge cloud network architecture is controller placement. The key objective of controller placement is to minimize latency and optimize bandwidth by finding the optimal number and positions of control nodes in an SDN-enabled network [59,83,84]. Some research works introduce clustering and machine learning-based approaches to deal with this challenge. Wang et al. proposed an optimized K-means network partitioning algorithm to address this issue. In this algorithm, they partition the network by assigning each vertex to one of the K clusters according to the following equation [60]:
$$v \in cluster_i, \quad \text{if } d(v, c_i) < d(v, c_j), \quad \forall j \in \{1, 2, \ldots, k\},\; j \neq i \qquad (8)$$
where $d(u, v)$ denotes the shortest path between two nodes. After clustering, the algorithm updates the centroids to minimize the sum of the shortest paths from all points in $cluster_i$. To minimize the maximum latency between the centroid and its associated switches within subnetworks, the following formula is used.
$$\min \max \, d(\theta_i, S_i), \quad \forall i \in \{1, \ldots, k\}, \quad \text{such that } \theta_i, S_i \in SDN_i \qquad (9)$$
where $\theta_i$ and $S_i$ represent the controller and switches in the $i$th subnetwork.
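A minimal sketch of this clustering step, using networkx shortest-path distances over a toy topology, is shown below; the graph, the number of clusters, and the initial centroids are hypothetical assumptions, not the initialization or update rules of the algorithm in [60].

```python
# Illustrative K-means-style partitioning of switches around controller
# candidates, using shortest-path distance as in Equation (8).
import networkx as nx

def partition(graph: nx.Graph, centroids: list) -> dict:
    """Assign every node to the centroid with the smallest shortest-path distance."""
    dist = dict(nx.all_pairs_shortest_path_length(graph))
    clusters = {c: [] for c in centroids}
    for v in graph.nodes:
        nearest = min(centroids, key=lambda c: dist[v][c])
        clusters[nearest].append(v)
    return clusters

def update_centroid(graph: nx.Graph, members: list) -> str:
    """Pick the member minimizing the sum of shortest paths to all other members."""
    dist = dict(nx.all_pairs_shortest_path_length(graph))
    return min(members, key=lambda cand: sum(dist[cand][m] for m in members))

if __name__ == "__main__":
    g = nx.path_graph(["s1", "s2", "s3", "s4", "s5", "s6"])   # toy topology
    clusters = partition(g, centroids=["s2", "s5"])
    print(clusters)   # {'s2': ['s1', 's2', 's3'], 's5': ['s4', 's5', 's6']}
    for c, members in clusters.items():
        print("new centroid:", update_centroid(g, members))
```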
Li et al. [84] proposed a deep reinforcement learning (DRL)-based SDN controller placement algorithm for a MEC system to reduce the average communication delay. Finding the optimal number of SDN controllers and their placement using deep learning is another crucial research area in edge cloud networks.

6.7. Security and Privacy

Ensuring security and privacy is a significant challenge in edge cloud networks. Due to the cooperation of end, edge, and cloud resources and the adoption of multiple technologies such as NFV and SDNs, end edge cloud networks not only inherit the security challenges of cloud computing but also introduce new ones. The adoption of virtualization technology in edge cloud environments creates VMs that share computation and storage, which brings the same VM security concerns to the edge cloud as in cloud computing. Since several applications run on shared infrastructure in end edge cloud networks, it is also crucial to address application-level security issues and the proper separation of applications and data. Besides software security, the highly distributed nature of edge cloud networks, which are closer to the users’ premises, may increase the risk of physical attacks. As future edge clouds enable greater programmability and open platforms for third-party software contributions, managing potential risks across different stakeholders becomes essential [10,13]. Another big challenge is that lightweight IoT end devices often lack sufficient resources to deploy comprehensive security mechanisms, which can make the edge servers more vulnerable. Moreover, publicly available applications could add new challenges of privacy and trust [85]. Several research papers have proposed the use of AI to address unpredictable and complex security issues in edge cloud networks. Chetouane and Karout presented a continual federated learning (CFL) approach to provide defense against attacks and an intrusion detection system (IDS) for SDN-based edge cloud networks [63]. Adeniyi et al. [86] proposed a hybrid autoencoder multilayer perceptron (AE-MLP) model for intrusion detection in edge computing with high accuracy. Wang et al. [87] proposed a distributed denial of service (DDoS) attack mitigation method using blockchain for the IoT. Reference [88] introduced FloodDefender, a protocol-independent framework to protect against SDN-aimed DoS attacks. FloodDefender employs three techniques, table-miss engineering, packet filtering, and flow table management, to address denial of service attacks on both the control plane and the data plane. Federated machine learning (FML) has also shown potential to defend against security attacks on edge cloud networks. A weighted FL (WFL)-based DDoS detection approach is presented in [89]. References [90,91] proposed a blockchain-enabled SDN architecture for secure IoT data transfer. Advanced persistent threats (APTs) are another vulnerable area for edge cloud networks. A model integrating FL and a graph convolutional network (GCN) for detecting APTs in SDNs is presented in [92]. Therefore, AI-empowered security and privacy solutions are another area of future research in SDN-enabled end edge cloud networks.

6.8. Workload Distribution and Flow Control

Machine learning (ML) algorithms can predict outcomes or classify objects using training data and improve their performance with repeated experience on a task. ML has achieved great success in several areas, including speech recognition, web search, purchase recommendation, and encrypted network traffic classification. Today, IoT or end devices generate unprecedented volumes of data that need to be gathered, processed, and analyzed, which consumes considerable network resources. Therefore, machine learning has the potential to help with resource management, workload distribution, and flow control in an SDN-enabled edge cloud network [18,57,93,94]. Researchers in [18] proposed a hybrid deep learning model for the effective distribution of workload in edge cloud IoT networks. Wang et al. [95] proposed a pre-emptive algorithm for scheduling distributed ML jobs in edge cloud networks to minimize the average job completion time. In the study presented by Ryoichi et al. [96], the authors proposed a framework for flow control in SDN edge cloud networks that enables real-time predictions based on IoT sensor data, using a Random Forest machine learning method. Researchers from industry and academia have started working in this area, but much work still needs to be done.

6.9. Intelligent Management and Orchestration

Several challenges remain to be addressed regarding the management and orchestration (MANO) of virtualization, such as placement, chaining, scaling, and fault diagnosis. Rui et al. [97] proposed a federated DRL-based algorithm for SFC orchestration. A machine learning-based autonomous framework for predicting VNF performance and placing VNFs was proposed by Bunyakitanon et al. [36]. Reference [98] introduces an intelligent orchestration framework for edge cloud networks built on the OpenNebula platform and machine learning models. Intelligent management and orchestration (iMANO) is therefore another compelling research area in edge cloud networks.
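As a toy illustration of learning-based placement, the sketch below uses tabular Q-learning to choose a host for each VNF of a short service function chain, with a made-up reward standing in for latency and utilization feedback. The node names, reward shaping, and state definition are our own assumptions; this is not the federated DRL method of [97] or the Auto-3P framework of [36].

```python
import random

NODES = ["edge-1", "edge-2", "cloud"]   # candidate hosts for the next VNF (assumed)
Q = {}                                  # Q-table: (state, action) -> value
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

def choose_node(state):
    """Epsilon-greedy selection of a host for the VNF currently being placed."""
    if random.random() < EPSILON:
        return random.choice(NODES)
    return max(NODES, key=lambda a: Q.get((state, a), 0.0))

def update(state, action, reward, next_state):
    """One-step Q-learning update after observing placement feedback."""
    best_next = max(Q.get((next_state, a), 0.0) for a in NODES)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

# Toy training loop: the state is the index of the VNF being placed in a 3-VNF chain,
# and the reward is a fabricated proxy for low placement latency.
for episode in range(200):
    for vnf_index in range(3):
        node = choose_node(vnf_index)
        reward = 1.0 if node.startswith("edge") else 0.3
        update(vnf_index, node, reward, vnf_index + 1)

print({node: round(Q.get((0, node), 0.0), 2) for node in NODES})
```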

7. Conclusions

Cloud computing is a centralized architecture that offers virtually unlimited computing power and storage, typically deployed in remote data centers. However, this approach often suffers from high latency, high bandwidth usage, and weaker security when supporting emerging real-time applications. Edge computing, on the other hand, is a distributed architecture that brings computing, storage, and networking resources closer to IoT devices; it helps reduce latency, lower bandwidth consumption, and enhance security, but it is limited by relatively small storage capacity and computation power. To bridge the gap between these two paradigms, edge cloud architecture has emerged as a promising solution.
Despite the significant potential of edge cloud architecture, several technical challenges remain to be addressed to fully harness its benefits. Through our literature review, we have provided a comprehensive list of deployment areas, applications, and corresponding challenges of the edge cloud. To overcome these challenges, many research works propose integrating AI-enabled SDN controllers with the edge cloud to enhance network management, resource allocation, task offloading, controller placement, security, and more. To provide an overview of how AI-assisted SDN controllers can be integrated with the edge cloud, this survey depicted the architecture of SDN-enabled edge cloud networks. We also presented a comprehensive survey of proposed solutions for integrating intelligent SDN controllers and orchestration with edge cloud networks to improve network management, computation offloading, dynamic load balancing, resource management, and security. Finally, this paper discussed the key research challenges that we believe need to be addressed, along with potential future research directions.

Author Contributions

Conceptualization, B.U.K. and M.K.I.; investigation, B.U.K., M.K.I., M.M.H.S. and M.J.; writing—original draft preparation, B.U.K., M.K.I. and M.M.H.S.; writing—review and editing, B.U.K. and M.J.; supervision, B.U.K. and M.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by an NSERC Discovery Grant.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

List of Abbreviations

Abbreviation | Description
SDN | Software Defined Networking
AI | Artificial Intelligence
NFV | Network Function Virtualization
AR | Augmented Reality
VR | Virtual Reality
ML | Machine Learning
PDA | Personal Digital Assistant
MEC | Mobile Edge Computing
MB-PB | Megabyte to Petabyte
LTE | Long Term Evolution
FEC | Fixed Edge Cloud
DC | Datacenter
RAN | Radio Access Network
VCF | Virtual Compute Function
VNF | Virtual Network Function
VSF | Virtual Storage Function
QoS | Quality of Service
VM | Virtual Machine
SFC | Service Function Chain
MANO | Management and Orchestration
IoT | Internet of Things
LAN | Local Area Network
DRL | Deep Reinforcement Learning
OLSR | Optimized Link State Routing
LSTM | Long Short-Term Memory
FL | Federated Learning
SD-WAN | Software Defined Wide Area Network
SBI | Southbound Interface
NBI | Northbound Interface
SLA | Service Level Agreement
RNN | Recurrent Neural Network
WAN | Wide Area Network
HECSDN | Hierarchical Edge Cloud SDN
TF | Tungsten Fabric
REST API | Representational State Transfer Application Programming Interface
GUI | Graphical User Interface
BGP | Border Gateway Protocol
XMPP | Extensible Messaging and Presence Protocol
QoE | Quality of Experience
IEP | Internet Edge Protection
SDX | Software Defined Exchange
CFL | Continual Federated Learning
IDS | Intrusion Detection System
AE-MLP | Autoencoder Multilayer Perceptron
DDoS | Distributed Denial of Service
WFL | Weighted Federated Learning
FML | Federated Machine Learning
GCN | Graph Convolutional Network
APT | Advanced Persistent Threat

References

  1. Institute for the Future. The Hyperconnected World. 2020. Available online: https://legacy.iftf.org/fileadmin/user_upload/downloads/ourwork/IFTF_Hyperconnected_World_2020.pdf (accessed on 20 March 2022).
  2. The National Intelligence Council. Global Trends 2040 a More Connected World. March 2021. Available online: https://www.dni.gov/files/ODNI/documents/assessments/GlobalTrends_2040.pdf (accessed on 20 March 2022).
  3. Wang, J.; Pan, J.; Esposito, F.; Calyam, P.; Yang, Z.; Mohapatra, P. Edge Cloud Offloading Algorithms: Issues, Methods, and Perspectives. ACM Comput. Surv. 2019, 52, 1–23. [Google Scholar] [CrossRef]
  4. The Internet of Things (IoT): An Overview. 15 October 2015. Available online: https://www.internetsociety.org/resources/doc/2015/iot-overview/ (accessed on 18 September 2021).
  5. Al-Sarawi, S.; Anbar, M.; Abdullah, R.; Al Hawari, A.B. Internet of things market analysis forecasts, 2020–2030. In Proceedings of the Fourth World Conference on Smart Trends in Systems, Security and Sustainability (WorldS4), London, UK, 27–28 July 2020. [Google Scholar]
  6. Asim, M.; Wang, Y.; Wang, K.; Huang, P.Q. A review on computational intelligence techniques in cloud and edge computing. IEEE Trans. Emerg. Top. Comput. Intell. 2020, 4, 742–763. [Google Scholar] [CrossRef]
  7. Verma, A.; Verma, V. Comparative study of cloud computing and edge computing: Three level architecture models and security challenges. Int. J. Distrib. Cloud Comput. 2021, 9, 13–17. [Google Scholar]
  8. Maheshwari, S.; Raychaudhuri, D.; Seskar, I.; Bronzino, F. Scalability and Performance Evaluation of Edge Cloud Systems for Latency Constrained Applications; IEEE Xplore: Seattle, WA, USA, 2018. [Google Scholar]
  9. Mao, Y.; You, J.; Zhang, J.; Huang, K.; Letaief, K.B. A survey on mobile edge computing: The communication perspective. IEEE Commun. Surv. Tutor. 2017, 19, 2322–2358. [Google Scholar] [CrossRef]
  10. Ren, J.; Zhang, D.; He, S.; Zhang, Y.; Li, T. A Survey on End-Edge-Cloud Orchestrated Network Computing Paradigms: Transparent Computing, Mobile Edge Computing, Fog Computing, and Cloudlet. ACM Comput. Surv. 2019, 52, 1036. [Google Scholar] [CrossRef]
  11. Baktir, A.C.; Ozgovde, A.; Ersoy, C. How Can Edge Computing Benefit from Software-Defined Networking: A Survey, Use Cases, and Future Directions. IEEE Commun. Surv. Tutor. 2017, 19, 2359–2391. [Google Scholar] [CrossRef]
  12. Porambage, P.; Okwuibe, J.; Liyanage, M.; Ylianttila, M.; Taleb, T. Survey on multi-access edge computing for internet of things realization. IEEE Commun. Surv. Tutor. 2018, 20, 2961–2991. [Google Scholar] [CrossRef]
  13. Pan, J.; McElhannon, J. Future Edge Cloud and Edge Computing for Internet of Things Applications. IEEE Internet Things J. 2018, 5, 439–449. [Google Scholar] [CrossRef]
  14. Jamil, M.N.; Schelén, O.; Monrat, A.A.; Andersson, K. Enabling Industrial Internet of Things by Leveraging Distributed Edge-to-Cloud Computing: Challenges and Opportunities. IEEE Access 2024, 12, 127294–127308. [Google Scholar] [CrossRef]
  15. Wang, Y.; Yang, C.; Lan, S.; Zhu, L.; Zhang, Y. End-edge-cloud collaborative computing for deep learning: A comprehensive survey. IEEE Commun. Surv. Tutor. 2024, 26, 2647–2683. [Google Scholar] [CrossRef]
  16. Gkonis, P.; Giannopoulos, A.; Trakadas, P.; Masip-Bruin, X.; D’Andria, F. A survey on IoT-edge-cloud continuum systems: Status, challenges, use cases, and open issues. Future Internet 2023, 15, 383. [Google Scholar] [CrossRef]
  17. Cifuentes, B.J.O.; Suárez, Á.; Pineda, V.G.; Jaimes, R.A.; Benitez, A.O.M.; Bustamante, J.D.G. Analysis of the use of artificial intelligence in software-defined intelligent networks: A survey. Technologies 2024, 12, 99. [Google Scholar] [CrossRef]
  18. Lilhore, U.K.; Simaiya, S.; Sharma, Y.K.; Rai, A.K.; Padmaja, S.M.; Nabilal, K.V.; Kumar, V.; Alroobaea, R.; Alsufyani, H. Cloud-edge hybrid deep learning framework for scalable IoT resource optimization. J. Cloud Comput. 2025, 14, 5. [Google Scholar] [CrossRef]
  19. Noghabi, S.A.; Cox, L.; Agarwal, S.; Ananthanarayanam, G. The Emerging Landscape of Edge Computing. GetMobile Mob. Comput. Commun. 2019, 23, 11–20. [Google Scholar] [CrossRef]
  20. Ananthanarayanam, G.; Bahl, P.; Bodik, P.; Chintalapudi, K.; Philipose, M.; Ravindranath, L.; Sinha, S. Real-Time Video Analytics: The Killer App for Edge Computing. IEEE Comput. 2017, 50, 58–67. [Google Scholar] [CrossRef]
  21. Amin, A.A.; Munna, A.S.; Shaikh, M.S.I.; Kazi, B.U. Role of the Internet of Things (IoT) Applications in Business and Marketing. In Contemporary Approaches of Digital Marketing and the Role of Machine Intelligence; IGI Global: Hershey, PA, USA, 2023; pp. 105–122. [Google Scholar]
  22. Available online: https://geekflare.com/?s=edge+computing+and+its+applications (accessed on 25 March 2023).
  23. Duan, Q.; Wang, S.; Ansari, N. Convergence of networking and cloud/edge computing: Status, challenges, and opportunities. IEEE Netw. 2020, 34, 148–155. [Google Scholar] [CrossRef]
  24. Corneo, L.; Nitinder, M.; Aleksandr, Z.; Walter, W.; Christian, R.; Per, G.; Jussi, K. (How Much) Can Edge Computing Change Network Latency? In Proceedings of the 2021 IFIP Networking Conference (IFIP Networking), Espoo and Helsinki, Finland, 21–24 June 2021. [Google Scholar]
  25. Maheshwari, S. Mobile Edge Cloud Architecture for Future Low-Latency Applications. Ph.D. Thesis, Rutgers The State University of New Jersey, New Brunswick, NJ, USA, 2020. [Google Scholar]
  26. Zhao, Y.; Wang, W.; Meixner, C.C.; Tornatore, M.; Zhang, J. Edge Computing and Networking: A Survey on Infrastructures and Applications. IEEE Access 2019, 7, 101213–101230. [Google Scholar] [CrossRef]
  27. Mohan, N. Edge Computing Platforms and Protocols. Ph.D. Thesis, Helsingin yliopisto, Helsinki, Finland, 2019. [Google Scholar]
  28. Sun, C.; Li, X.; Wen, J.; Wang, X.; Han, Z.; Leung, V.C. Federated Deep Reinforcement Learning for Recommendation-Enabled Edge Caching in Mobile Edge-Cloud Computing Networks. IEEE J. Sel. Areas Commun. 2023, 41, 690–705. [Google Scholar] [CrossRef]
  29. Liu, M.; Li, D.; Wu, H.; Lyu, F.; Shen, X.S. Cooperative edge-cloud caching for real-time sensing big data search in vehicular networks. In Proceedings of the ICC 2021-IEEE International Conference on Communications, Montreal, QC, Canada, 14–23 June 2021. [Google Scholar]
  30. Wang, S.; Zhang, X.; Zhang, Y.; Wang, L.; Yang, J.; Wang, W. A survey on Mobile Edge Networks: Convergence of computing, caching and communications. IEEE Access 2017, 5, 6757–6779. [Google Scholar] [CrossRef]
  31. Gao, Y.; Hu, W.; Ha, K.; Amos, B.; Pillai, P.; Satyanarayanan, M. Are Cloudlets Necessary? School of Computer Science, Carnegie Mellon University: Pittsburgh, PA, USA, 2015. [Google Scholar]
  32. Silvestro, A.; Mohan, N.; Kangasharju, J.; Schneider, F.; Fu, X. Mute: Multi-tier edge networks. In Proceedings of the 5th Workshop on CrossCloud Infrastructures & Platforms, Porto, Portugal, 23–26 April 2018. [Google Scholar]
  33. Halpern, J.; Pignataro, C. RFC 7665—Service Function Chaining (SFC) Architecture; RFC Editor: Marina del Rey, CA, USA, 2018. [Google Scholar]
  34. Zheng, D.; Shen, G.; Li, Y.; Cao, X.; Mukherjee, B. Service function chaining and embedding with heterogeneous faults tolerance in edge networks. IEEE Trans. Netw. Serv. Manag. 2022, 20, 2157–2171. [Google Scholar] [CrossRef]
  35. Marinas, D.M.; Shami, A. The Need for Advanced Intelligence in NFV Management and Orchestration. IEEE Netw. 2021, 35, 365–371. [Google Scholar]
  36. Bunyakitanon, M.; Da Silva, A.P.; Vasilakos, X.; Nejabati, R.; Simeonidou, D. Auto-3P: An autonomous VNF performance prediction & placement framework based on machine learning. Comput. Netw. 2020, 181, 107433. [Google Scholar]
  37. SDN Architecture; ONF TR-502. Open Networking Foundation: Palo Alto, CA, USA, 2014; Available online: https://opennetworking.org/wp-content/uploads/2013/02/TR_SDN_ARCH_1.0_06062014.pdf (accessed on 1 May 2025).
  38. OpenFlow-Switch, version 1.5.1 (Protocol, version 0x06). ONF TS-025; The Open Networking Foundation: Palo Alto, CA, USA, 2015; Available online: https://opennetworking.org/wp-content/uploads/2014/10/openflow-switch-v1.5.1.pdf (accessed on 1 May 2025).
  39. Hu, F.; Bao, K. A Survey on Software-Defined Network and OpenFlow: From Concept to Implementation. IEEE Commun. Surv. Tutor. 2014, 16, 2181–2206. [Google Scholar] [CrossRef]
  40. Son, J.; Rajkumar, B. A taxonomy of software-defined networking (SDN)-enabled cloud computing. ACM Comput. Surv. (CSUR) 2018, 51, 1–36. [Google Scholar] [CrossRef]
  41. Vahdat, A.; David, C.; Jennifer, R. A purpose-built global network: Google’s move to SDN. Commun. ACM 2016, 59, 46–54. [Google Scholar] [CrossRef]
  42. Jain, V.; Yatri, V.K.; Kapoor, C. Software defined networking: State-of-the-art. J. High Speed Netw. 2019, 25, 1–40. [Google Scholar] [CrossRef]
  43. Alomari, A.H.; Subramaniam, S.K.; Samian, N.; Latip, R.; Zukarnain, Z. Towards Optimal Efficiencies in Software Defined Network SDN-Edge Cloud: Performance Evaluation of Load Balancing Algorithms. In Proceedings of the IEEE 1st International Conference on Advanced Engineering and Technologies (ICONNIC), Kediri, Indonesia, 14 October 2023. [Google Scholar]
  44. Boukraa, L.; Mahrach, S.; El Makkaoui, K.; Esbai, R. SDN southbound protocols: A comparative study. In Proceedings of the International Conference on Networking, Intelligent Systems and Security, Bandung, Indonesia, 30–31 March 2022; Springer International Publishing: Cham, Switzerland, 2022; pp. 407–418. [Google Scholar]
  45. Gupta, N.; Maashi, M.S.; Tanwar, S.; Badotra, S.; Aljebreen, M.; Bharany, S. A comparative study of software defined networking controllers using mininet. Electronics 2022, 11, 2715. [Google Scholar] [CrossRef]
  46. Ceselli, A.; Premoli, M.; Secci, S. Mobile Edge Cloud Network Design Optimization. IEEE/ACM Trans. Netw. 2017, 25, 1818–1831. [Google Scholar] [CrossRef]
  47. Huang, C.-M.; Chiang, M.-S.; Dao, D.-T.; Su, W.-L.; Xu, S.; Zhou, H. V2V Data Offloading for Cellular Network Based on the Software Defined Network (SDN) Inside Mobile Edge Computing (MEC) Architecture. IEEE Access 2018, 6, 17741–17755. [Google Scholar] [CrossRef]
  48. Luo, L.; Yu, H.; Forester, K.-T.; Noormohammadpour, M.; Schmid, S. Inter-Datacenter Bulk Transfers: Trends and Challenges. IEEE Netw. 2020, 34, 240–246. [Google Scholar] [CrossRef]
  49. Hakiri, A.; Sellami, B.; Patil, P.; Berthou, P.; Gokhale, A. Managing wireless fog networks using software-defined networking. In Proceedings of the IEEE/ACS 14th International Conference on Computer Systems and Applications (AICCSA), Hammamet, Tunisia, 30 October–3 November 2017. [Google Scholar]
  50. Filali, A.; Mlika, Z.; Cherkaoui, S.; Kobbane, A. Preemptive SDN Load Balancing with Machine Learning for Delay Sensitive Applications. IEEE Trans. Veh. Technol. 2020, 69, 15947–15963. [Google Scholar] [CrossRef]
  51. Kumari, A.; Roy, A.; Sairam, A.S. Optimizing SDN controller load balancing using online reinforcement learning. IEEE Access 2024, 12, 131591–131604. [Google Scholar] [CrossRef]
  52. Zaman, F.A.; Jarray, A.; Karmouch, A. Software Defined Network-Based Edge Cloud Resource Allocation Framework. IEEE Access 2019, 7, 10672–10690. [Google Scholar] [CrossRef]
  53. Gorlatch, S.; Humernbrum, T.; Glinka, F. Improving QoS in real-time internet applications: From best-effort to Software-Defined Networks. In Proceedings of the International Conference on Computing, Networking and Communications (ICNC), Honolulu, HI, USA, 3–6 February 2014. [Google Scholar]
  54. Cui, T.; Yang, R.; Fang, C.; Yu, S. Deep reinforcement learning-based resource allocation for content distribution in IoT-edge-cloud computing environments. Symmetry 2023, 15, 217. [Google Scholar] [CrossRef]
  55. Dai, M.; Su, Z.; Li, R.; Yu, S. A Software-Defined-Networking-Enabled Approach for Edge-Cloud Computing in the Internet of Things. IEEE Netw. 2021, 35, 66–73. [Google Scholar] [CrossRef]
  56. Yang, C.; Lan, S.; Wang, L.; Shen, W.; Huang, G.G. Big data driven edge-cloud collaboration architecture for cloud manufacturing: A software defined perspective. IEEE Access 2020, 8, 2020–45950. [Google Scholar] [CrossRef]
  57. Thang, L.D.; Rafael, G.L.; Paolo, C.; Per-Olov, Ö. Machine Learning Methods for Reliable Resource Provisioning in Edge-Cloud Computing: A Survey. ACM Comput. Surv. 2019, 52, 39. [Google Scholar]
  58. Suzuki, A.; Kobayashi, M.; Oki, E. Multi-agent deep reinforcement learning for cooperative computing offloading and route optimization in multi cloud-edge networks. IEEE Trans. Netw. Serv. Manag. 2023, 20, 4416–4434. [Google Scholar] [CrossRef]
  59. Wang, G.; Zhao, Y.; Huang, J.; Duan, Q.; Li, J. A K-means-based network partition algorithm for controller placement in software defined network. In Proceedings of the IEEE International Conference on Communications (ICC), Kuala Lumpur, Malaysia, 22–27 May 2016. [Google Scholar]
  60. Xu, C.; Xu, C.; Li, B.; Li, S.; Li, T. Load-aware dynamic controller placement based on deep reinforcement learning in SDN-enabled mobile cloud-edge computing networks. Comput. Netw. 2023, 234, 109900. [Google Scholar] [CrossRef]
  61. Zhou, P.; Wu, G.; Alzahrani, B.; Barnawi, A.; Alhindi, A.; Chen, M. Reinforcement learning for task placement in collaborative cloud-edge computing. In Proceedings of the IEEE Global Communications Conference (GLOBECOM), Madrid, Spain, 7–11 December 2021. [Google Scholar]
  62. Chetouane, A.; Karoui, K. New Continual Federated Learning System for Intrusion Detection in SDN-Based Edge Computing. Concurr. Comput. Pract. Exp. 2024, 37, e8332. [Google Scholar] [CrossRef]
  63. Absardi, Z.N.; Javidan, R. IoT traffic management using deep learning based on osmotic cloud to edge computing. Telecommun. Syst. 2024, 87, 419–435. [Google Scholar] [CrossRef]
  64. Pan, J.; Lin, M.; Ravishankar, R.; Peyman, T. HomeCloud: An edge cloud framework and testbed for new application delivery. In Proceedings of the 23rd International Conference on Telecommunications (ICT), Thessaloniki, Greece, 16–18 May 2016. [Google Scholar]
  65. Babou, C.S.M.; Fall, D.; Shigeru, K.; Yuzo, T.; Monowar, H.B.; Ibrahima, N.; Ibrahima, D.; Youki, K. D-LBAH: Dynamic Load Balancing Algorithm for HEC-SDN systems. In Proceedings of the 8th International Conference on Future Internet of Things and Cloud (FiCloud), Rome, Italy, 23–25 August 2021. [Google Scholar]
  66. Chen, Q.; Kuang, Z.; Zhao, L. Multiuser computation offloading and resource allocation for cloud–edge heterogeneous network. IEEE Internet Things J. 2021, 9, 3799–3811. [Google Scholar] [CrossRef]
  67. Li, C.; Tang, J.; Luo, Y. Service cost-based resource optimization and load balancing for edge and cloud environment. Knowl. Inf. Syst. 2020, 62, 2020–4275. [Google Scholar] [CrossRef]
  68. Lin, X.; Shao, J.; Liu, R.S.; Hu, W. Performance and cost of upstream resource allocation for inter-edge-datacenter bulk transfers. In Proceedings of the 2020 IEEE/CIC International Conference on Communications in China (ICCC), Chongqing, China, 9–11 August 2020. [Google Scholar]
  69. Kwak, J.; Le, L.B.; Iosifidis, G.; Lee, K.; Kim, D.I. Collaboration of Network Operators and Cloud Providers in Software-Controlled Networks. IEEE Netw. 2020, 34, 98–105. [Google Scholar] [CrossRef]
  70. Lin, F.P.-C.; Tsai, Z. Hierarchical Edge-Cloud SDN Controller System with Optimal Adaptive Resource Allocation for Load-Balancing. IEEE Syst. J. 2020, 14, 265–276. [Google Scholar] [CrossRef]
  71. Tong, L.; Li, Y.; Gao, W. A hierarchical edge cloud architecture for mobile computing. In Proceedings of the IEEE INFOCOM 2016–The 35th Annual IEEE International Conference on Computer Communications, San Francisco, CA, USA, 10–14 April 2016. [Google Scholar]
  72. Tungsten Fabric Architecture. Available online: https://tungstenfabric.github.io/website/Tungsten-Fabric-Architecture.html (accessed on 1 September 2024).
  73. Moledo, S.P.; Rawat, A.; Gurtov, A. Vendor-independent software-defined networking. In Proceedings of the IEEE 2nd International Conference on Signal, Control and Communication (SCC), Hammamet, Tunisia, 20–22 December 2021. [Google Scholar]
  74. Microsoft. Software Defined Networking (SDN) in Azure Local. 2025. Available online: https://learn.microsoft.com/en-us/azure/azure-local/concepts/software-defined-networking-23h2?view=azloc-2503 (accessed on 18 April 2025).
  75. Microsoft. Introducing Azure Local: Cloud Infrastructure for Distributed Locations Enabled by Azure Arc. 2024. Available online: https://techcommunity.microsoft.com/blog/azurearcblog/introducing-azure-local-cloud-infrastructure-for-distributed-locations-enabled-b/4296017 (accessed on 1 April 2025).
  76. Microsoft. What Is Azure Private Multi-Access Edge Compute? 2024. Available online: https://learn.microsoft.com/en-us/previous-versions/azure/private-multi-access-edge-compute-mec/overview (accessed on 1 April 2025).
  77. AWS. New AWS Wavelength Zone in Toronto—The First in Canada. 2022. Available online: https://aws.amazon.com/blogs/aws/new-aws-wavelength-zone-in-toronto-the-first-in-canada/#:~:text=Wireless%20communication%20has%20put%20us,and%20in%2Dvehicle%20entertainment%20experiences (accessed on 1 April 2025).
  78. Adamson, C. AWS Wavelength for Ultra-Low Latency Applications. 2024. Available online: https://medium.com/@christopheradamson253/aws-wavelength-for-ultra-low-latency-applications-f043ea2d3577#:~:text=AWS%20Wavelength%20allows%20you%20to,AR%2FVR%2C%20and%20more (accessed on 1 April 2025).
  79. Nieto, G.; de la Iglesia, I.; Lopez-Novoa, U.; Perfecto, C. Deep Reinforcement Learning techniques for dynamic task offloading in the 5G edge-cloud continuum. J. Cloud Comput. 2024, 13, 94. [Google Scholar] [CrossRef]
  80. Zhu, K.; Li, S.; Zhang, X.; Wang, J.; Xie, C.; Wu, F.; Xie, R. An Energy-Efficient Dynamic Offloading Algorithm for Edge Computing Based on Deep Reinforcement Learning. IEEE Access 2024, 12, 127489–127506. [Google Scholar] [CrossRef]
  81. Aleksandr, Z.; Nitinder, M.; Suzan, B.; Walter, W.; Kangasharju, J. ExEC: Elastic Extensible Edge Cloud. In Proceedings of the EdgeSys ’19: Proceedings of the 2nd International Workshop on Edge Systems, Analytics and Networking, Dresden, Germany, 25 March 2019. [Google Scholar]
  82. Kazi, B.U.; Wainer, G.A. Next generation wireless cellular networks: Ultra-dense multi-tier and multi-cell cooperation perspective. Wirel. Netw. 2019, 25, 2041–2064. [Google Scholar] [CrossRef]
  83. Soleymanifar, R.; Carolyn, B.A.S.; Srinivasa, S. A Clustering Approach to Edge Controller Placement in Software-Defined Networks with Cost Balancing. IFAC-PapersOnLine 2020, 53, 2642–2647. [Google Scholar] [CrossRef]
  84. Li, C.; Liu, J.; Ma, N.; Zhang, Q.; Zhong, Z.; Jiang, L.; Jia, G. Deep reinforcement learning based controller placement and optimal edge selection in SDN-based multi-access edge computing environments. J. Parallel Distrib. Comput. 2024, 193, 104948. [Google Scholar] [CrossRef]
  85. Julien, G.; Florian, B.; Rolf, E.; Tim, G.; Max, M. What the Fog? Edge Computing Revisited: Promises, Applications and Future Challenges. IEEE Access 2019, 7, 152847–152878. [Google Scholar]
  86. Adeniyi, O.; Sadiq, A.S.; Pillai, P.; Aljaidi, M.; Kaiwartya, O. Securing mobile edge computing using hybrid deep learning method. Computers 2024, 13, 25. [Google Scholar] [CrossRef]
  87. Wang, S.; Zhang, J.; Zhang, T. AI-enabled blockchain and SDN-integrated IoT security architecture for cyber-physical systems. Adv. Control. Appl. Eng. Ind. Syst. 2024, 6, e131. [Google Scholar] [CrossRef]
  88. Shang, G.; Zhe, P.; Bin, X.; Aiqun, H.; Kui, R. FloodDefender: Protecting data and control plane resources under SDN-aimed DoS attacks. In Proceedings of the IEEE INFOCOM 2017—IEEE Conference on Computer Communications, Atlanta, GA, USA, 1–4 May 2017; pp. 1–9. [Google Scholar]
  89. Ali, M.N.; Imran, M.; din, M.S.U.; Kim, B.S. Low rate DDoS detection using weighted federated learning in SDN control plane in IoT network. Appl. Sci. 2023, 13, 1431. [Google Scholar] [CrossRef]
  90. Yazdinejad, A.; Parizi, R.M.; Dehghantanha, A.; Zhang, Q.; Choo, K.K.R. An energy-efficient SDN controller architecture for IoT networks with blockchain-based security. IEEE Trans. Serv. Comput. 2020, 13, 625–638. [Google Scholar] [CrossRef]
  91. Yazdinejad, A.; Parizi, R.M.; Dehghantanha, A.; Choo, K.K.R. P4-to-blockchain: A secure blockchain-enabled packet parser for software defined networking. Comput. Secur. 2020, 88, 101629. [Google Scholar] [CrossRef]
  92. Nazari, H.; Yazdinejad, A.; Dehghantanha, A.; Zarrinkalam, F.; Srivastava, N. P3GNN: A Privacy-Preserving Provenance Graph-Based Model for Autonomous APT Detection in Software Defined Networking. In Proceedings of the Workshop on Autonomous Cybersecurity, New York, NY, USA, 14–18 October 2023. [Google Scholar]
  93. Yang, R.; Ouyang, X.; Chen, Y.; Townend, P.; Xu, J. Intelligent Resource Scheduling at Scale: A Machine Learning Perspective. In Proceedings of the IEEE Symposium on Service-Oriented System Engineering, Bamberg, Germany, 26–29 March 2018. [Google Scholar]
  94. Ali, S.; Mostafa, G.-A. Joint computation offloading and resource provisioning for edge-cloud computing environment: A machine learning-based approach. Softw. Pract. Exp. 2020, 50, 2212–2230. [Google Scholar]
  95. Wang, N.; Zhou, R.; Jiao, L.; Zhang, R.; Li, B.; Li, Z. Preemptive Scheduling for Distributed Machine Learning Jobs in Edge-Cloud Networks. IEEE J. Sel. Areas Commun. 2022, 40, 2411–2425. [Google Scholar] [CrossRef]
  96. Ryoichi, S.; Yoshinobu, Y.; Takehiro, S. Flow control in SDN-Edge-Cloud cooperation system with machine learning. In Proceedings of the IEEE 40th International Conference on Distributed Computing Systems (ICDCS), Singapore, 29 November–1 December 2020. [Google Scholar]
  97. Rui, L.; Chen, S.; Wang, S.; Gao, Z.; Qiu, X.; Li, W.; Guo, S. SFC Orchestration Method for Edge Cloud and Central Cloud Collaboration: QoS and Energy Consumption Joint Optimization Combined With Reputation Assessment. IEEE Trans. Parallel Distrib. Syst. 2023, 34, 2735–2748. [Google Scholar] [CrossRef]
  98. Moreno-Vozmediano, R.; Montero, R.S.; Huedo, E.; Llorente, I.M. Intelligent Resource Orchestration for 5G Edge Infrastructures. Future Internet 2024, 16, 103. [Google Scholar] [CrossRef]
Figure 1. Organization of the paper.
Figure 2. Layered architecture of multi-access edge cloud network architecture.
Figure 3. A comparison between mobile edge cloud and fixed edge cloud.
Figure 4. Mobile edge cloud architecture.
Figure 5. Fixed edge cloud deployment architecture in an enterprise environment [19].
Figure 6. SDN-enabled edge cloud infrastructure.
Figure 7. Simplified SDN architecture.
Figure 8. HomeCloud architecture [64].
Figure 9. Architecture of SDN-enabled resource allocation framework for edge cloud networks [53].
Figure 10. Distributed SDN edge cloud network architecture [50].
Figure 11. High-level architecture of an SDN-based control system for bulk transfers [48,69].
Figure 12. New hierarchical edge cloud SDN controller system [70].
Figure 13. Architecture details of Tungsten Fabric and working principle [72].
Table 1. Comparison between cloud computing and edge computing. Source: [6,7].
Feature | Cloud Computing | Edge Computing
Architecture | Centralized | Distributed
Latency | High | Low
Mobility | No | Yes
Computational capacity | High | Medium to low
Security | Less secure | More secure
Bandwidth usage | High network bandwidth use | Lower network bandwidth use
Scalability | Easy to scale | Less scalable than cloud
Data processing | Through the Internet | Near the source of the data
Table 2. Real-world deployment, use cases, and challenges of edge cloud applications [19,20,21].
Industry | Use Cases | Location | Killer Apps | Challenges (R Bw L St Sa Ra)
Restaurants | Forecast food preparation | Store | Video Analytics, Data Analysis | Y Y
Retail | Monitoring, tracking customers, and improving sales | Store | Video Analytics, Data Analysis | YY
Gas Station | Detect safety hazards | Gas Stations | Video Analytics | Y Y
Cities | Traffic administration and intelligent control | Intersections and City Clusters | Video Analytics | YYYY Y
Construction | Increase safety, efficiency, and productivity | Construction Site | Video Analytics | YYY Y
Aviation | Analyze customers' in-flight experience, monitoring and maintenance of aircraft operations | Plane | Video Analytics, Data Analysis | YYYYY
Railway | Monitoring freight cars, train tracks, and wheels for issues that could cause derailment | Train | Video Analytics | YYYYYY
Road Control | Monitoring road quality and identifying areas that require maintenance | Trucks | Video Analytics | YY Y
Self-Driving and Smart Cars | Robo-taxi such as Uber | Edge cloud | Video Analytics | YYY Y
Oil Refinery | Predictive maintenance, workplace safety | Oil Rig or Pump | Video Analytics | YYYYY
Manufacturing | Improve manufacturing yields, monitoring equipment and predicting maintenance needs | Factory | Video Analytics | Y Y
Manufacturing Robots | Managing a fleet of robots that assist in industry production pipeline | Factory | Video Analytics | Y
Agriculture | Monitoring the quality of produce during harvest, storage, and processing; observe and monitor using drone imagery | Field | Video Analytics | Y YY
Financial Services | Facial recognition, virtual tellers | Bank/Financial locations | Video Analytics, Machine Reading | Y Y
PDA | Facial recognition, gesture identification, etc. | Edge cloud | Video Analytics | Y
AR | Holograms, recognize faces and people | Edge cloud | Video Analytics | Y Y Y
VR | Capture motions | Edge cloud | Video Analytics | Y YY
Voice Semantics | Voice AI such as Alexa | Edge cloud | Machine Reading | YY
Smart Health | Medical devices and applications at hospitals | Hospitals | Video Analytics, Machine Reading | YYY Y
Gaming | Google's Stadia and Microsoft's xCloud support multiple gaming engines | Edge cloud | Video Analytics | YYY Y
Robotics | Restaurants, Industry, Rescue | Store/Industry | Video Analytics | YY
R = Reliability, Bw = Bandwidth, L = Latency, St = Storage, Sa = Safety, and Ra = Resource allocation.
Table 3. Related works on edge cloud and SDNs.
Reference | Work Area | Key Points
[11] | Edge computing benefits from SDN | Latency, load balancing, and computation resources
[46] | MEC network design optimization | Latency, reliability, and resource mobility
[47] | Data offloading in MEC using SDN | Centralized management, latency, and bandwidth
[48] | Inter-datacenter bulk transfers | Bandwidth, utilization, traffic exchanged over wide-area networks (WANs), and SDN controller
[49] | SDN-enabled fog network | Latency, load balancing, hybrid SDN routing protocol combining OLSR data forwarding, traffic engineering, OpenFlow, and SDN
[50,51] | SDN load balancing with deep learning | Long Short-Term Memory (LSTM), reinforcement learning, latency, load balancing, multi-access edge computing, and SDN controller
[52] | SDN for edge cloud resource allocation | Node and link provisioning, latency, and bandwidth
[53,54] | Ensuring high QoS and optimizing resource allocation | Latency, bandwidth, throughput, and deep reinforcement learning (DRL)
[55] | Secure and intelligent services in IoT | Architecture, blockchain, and reinforcement learning
[56] | Edge cloud collaboration architecture | Data-driven architecture, latency, data analytics, SDNs, and cloud manufacturing
[57,58] | Resource provisioning in edge cloud computing | Machine learning, RL, load balancing, application placement, computation offloading, and route optimization
[59,60] | SDN controller placement | Machine learning, deep reinforcement learning, SDNs, network partitioning, latency, load balancing, and mobile edge cloud
[61] | Controller-based edge cloud computing | Reinforcement learning, task placement, and improved system utility
[62] | Intrusion detection in SDN-based edge computing | Federated learning (FL), intrusion detection, and SDN-based edge computing
[18,63] | IoT and edge cloud | Deep learning, adaptive resource management, traffic management, cloud-to-edge computing, and SD-WAN
