1. Introduction
The automotive industry has been evolving at a rapid pace and has become an important player in the future hyper-connected Internet of Things (IoT) and Internet of Vehicles (IoV) [1]. As an icon of this industry, the autonomous car market is expected to reach USD 60 billion by 2030 [2]. Higher levels of vehicle automation [3] imply an increased number of embedded sensors (e.g., radars, cameras) gathering and generating massive amounts of data [4]. Connected cars will rely not only on traditional V2I (Vehicle to Infrastructure) and V2V (Vehicle to Vehicle) communication modes, but also on V2X (Vehicle to Everything) for connecting cars and the whole Intelligent Transportation System (ITS) to pedestrians and traffic lights, as well as on cloud storage and processing for HD map services and decision making, e.g., for accident prevention, maneuvering, lane changes, and route optimization [5,6,7,8,9].
However, while some experiments have been carried out by players such as Waymo, Tesla, and Uber, they do not explore the full potential of this paradigm due to the lack of a holistic view of the ITS. In addition, decisions must be taken with very low latency, in the order of milliseconds, and with high reliability, a service class categorized by the International Telecommunication Union (ITU) for 5G (Fifth Generation) systems as URLLC (Ultra-Reliable and Low-Latency Communication) [10,11,12]. Thus, it becomes clear that vehicles' embedded sensors alone will not suffice: networking, computing resources, overall environment knowledge, and fast decision making are also of paramount importance to achieve the ultimate goal of safety and self-driving through a fully autonomic approach, with no human intervention in the operation of the IoV [13,14].
Recent efforts towards turning traditional and ossified VANETs (Vehicular Ad hoc Networks) into flexible and agile ones have adopted Software-Defined Networking (SDN) to aid the management and control of vehicular networks [15,16,17,18]. SDN, originally devised in the context of data centers and wired networks, provides programmability by splitting the control and data planes. This way, network elements (data plane) rely on the intelligence of the SDN controller (i.e., the logic that governs packet forwarding, dropping, and other control-plane functionalities), which runs on commodity hardware with a global view to program the network elements. Upon the arrival of novel flows at the data plane, the SDN controller is in charge of defining rules to be installed in the network elements according to management policies implemented by the operator [19,20].
A first issue with regard to IoV scalability has been addressed by adopting multiple SDN controllers to deal with the potentially huge number of flow requests from vehicles [16,21]. These controllers are, in general, deployed on remote clouds, in the operator core network, or at vehicular access points, which can be 802.11p Road Side Units (RSUs), LTE eNBs (Long Term Evolution evolved Node B), or 5G gNBs, for example. However, even with the possibility of positioning the controllers at access points to reduce communication delay, recent works still rely on static deployment [22]. This approach lacks processing capabilities and depends on the distant cloud; thus, such solutions do not integrate the needed functionalities near the user for fast decision making. In summary, the IoV faces critical requirements that should be jointly addressed: the scalable and dynamic deployment of SDN controllers and reduced communication latency [5].
Towards a flexible, scalable, and dynamic deployment of SDN controllers, IoV scenarios could benefit from a highly complementary approach, namely Network Functions Virtualization (NFV) [23,24,25]. NFV aims at decoupling network functions (e.g., firewall, MME, NAT, and load balancer) from proprietary/dedicated hardware appliances so that they run as virtualized services in a cloud-based platform on general-purpose (COTS) equipment [26]. More specifically, NFV can benefit SDN, for example, by virtualizing SDN controllers as Virtualized Network Functions (VNFs), the software implementations of network functions, executed on a centralized server pool [27]. VNFs are provisioned and controlled through the Management and Orchestration (MANO) framework of the NFV standard [28]. As far as we know, the vehicular communications literature has no proposal exploring the synergy between SDN and NFV to address the dynamic provisioning of SDN controllers in an NFV-based framework for the Internet of Vehicles.
With regard to the latency bounds of the IoV, a novel paradigm has also recently emerged within the 5G-and-beyond conceptual architecture aiming to bring storage and computing capabilities to the edge of the network, namely Multi-access Edge Computing (MEC) [29]. MEC has been standardized by the European Telecommunications Standards Institute (ETSI) since 2014 and provides cloud computing capabilities at the edge of the network through virtualized servers or micro datacenters at the eNBs, that is, within the operator's Radio Access Network (RAN). Thus, the MEC framework allows subscribers to offload tasks and enables faster communication of both data- and control-plane traffic, improving quality of experience by reducing latency, as typically required in V2X scenarios [30,31,32,33,34]. MEC is mostly adopted for traditional IT cloud-based applications (e.g., video streaming, augmented reality), but it is still less commonly used for hosting network control-plane functionalities.
To the best of our knowledge, there is no work on the dynamic resource allocation of SDN controllers for the Internet of Vehicles leveraging an NFV-based MEC infrastructure. Hence, the Dynamic Software-Defined Vehicular Multi-access Edge Computing (DSDVMEC) is proposed as an autonomic extension to the ETSI MEC reference architecture along with ETSI NFV. DSDVMEC aims at being flexible and scalable by dynamically reducing the waste of MEC resources due to previous allocations of SDN controllers that either became idle or are not fully utilized by the current demand, without compromising the quality of service of ongoing flows coming from access points. In this way, DSDVMEC provides the IoV context with the ability to scale the infrastructure of controllers as VNFs according to the variability of vehicle flows on the roads, without compromising latency, thanks to the proximity of the SDN controllers located at the MEC. In scenarios where RSUs belong to different operators, they may face communication limitations when not using SDN. In addition, the absence of SDN controllers makes it difficult to obtain a global view of the network. In such a scenario, the highly variable demands from vehicles may degrade system management and scalability. To this end, an autonomic SDN allocator is proposed as a component of DSDVMEC, implementing a heuristic that takes into account: (a) scalability support, (b) resource usage optimization, and (c) the management complexity of the allocated controllers. In summary, the main contributions of this article are as follows:
We propose DSDVMEC, an autonomic extension to the ETSI MEC-NFV architecture to dynamically allocate SDN controllers for the Internet of Vehicles.
We formulate an exact model to minimize the waste of resources allocated to SDN controllers in an MEC-NFV environment.
We design a heuristic algorithm to handle highly dense and dynamic IoV scenarios, defining: (a) the required number of SDN controllers; (b) the assignment between RSUs and controllers; and (c) the amount of resources that should be allocated to the SDN controllers.
The remainder of this article is organized as follows. Section 2 discusses related work. Section 3 presents the DSDVMEC architecture. Section 4 formulates the exact model and the aSDNalloc heuristic responsible for deciding on the allocation of controllers through VNFs. Section 5 presents the numerical analysis, discusses the results, and summarizes possible directions for future research. Finally, Section 6 concludes this article. Table 1 summarizes the important acronyms used in this paper.
3. DSDVMEC Architecture
ETSI began the standardization of MEC, originally named Mobile Edge Computing, in 2014 with the aim of creating an open environment to bring processing, networking, and storage to the network edge, thus reducing the latency of applications issued by mobile end-users. In 2016, the term Mobile was changed to Multi-access to embrace heterogeneous network access (e.g., WiFi and fixed access). Due to its physical decentralization and the high cost of its still limited computing resources under logically centralized management (Figure 1), there is a need to enhance the system with autonomic functionalities targeting the reduction of idle and underutilized resources, which is crucial for the wide acceptance and deployment of MEC as a supportive infrastructure for the Internet of Vehicles. To this end, this article proposes DSDVMEC (see Figure 2), where multiple SDN controllers are dynamically deployed as VNFs in an MEC infrastructure with the goal of granting reduced delays for SDN control signaling, since backhaul latency is eliminated, as well as optimizing MEC management and control operations to efficiently serve end-users in dense and highly demanding IoV scenarios.
DSDVMEC dimensions the resources allocated to the VNFs that run the SDN controllers, and these allocations depend on the number of requests from vehicles, which is periodically monitored. To this end, we propose an autonomic module in charge of the self-management of the architecture through the orchestration and resource optimization of the VNFs. To reduce the integration complexity and the overload of existing functional blocks of the ETSI MEC reference architecture with NFV support [52], the autonomic module consists of three new functional blocks, namely Monitor, Aggregator, and Autonomic Manager, besides six reference points (Am1–Am6), described as follows:
(1) Monitor: This block periodically collects the utilization of resources (CPU, memory, bandwidth, and storage) at the Virtualization Infrastructure Manager (VIM) and the number of flows in the controllers through an SDN monitoring application.
(2) Aggregator: It is responsible for analyzing and summarizing the gathered data (e.g., the number of incoming flows is related to the number of vehicles attached to eNBs and RSUs). The purposes of these actions are two-fold: to correct errors from the collection phase and to organize the data that serve as input to the heuristic-based decision making.
(3) Autonomic Manager: It decides on the number of controllers required to meet the demand and also defines which infrastructures will be assigned to the controllers, as well as the computational resources to be allocated to them. Upon taking these decisions, this module checks whether existing controllers can be reused, which avoids unnecessary operations for creating and releasing controllers. Next, it verifies whether further controllers are necessary to meet the demand. At the end of the creation process, the assignment of infrastructures to controllers is carried out. Thus, the unstable time of the environment, i.e., the time between the demand arrival and the controller assignment, is minimized. As the final step, the controllers not assigned to any infrastructure are released.
The interaction between the autonomic module and the MEC-NFV reference architecture (Figure 2) occurs by means of the Monitor functional block, which gathers data from the VIM through the Am1 reference point and from the MEC applications through the Am2 reference point. In our proposal, MEC applications correspond to SDN controllers along with their applications provisioned as VNFs. Next, the data are sent to the Aggregator via the Am3 reference point and, at the end of the data treatment, delivered to the Autonomic Manager through the Am4 reference point. The decisions taken by the Autonomic Manager with regard to the creation and release of VNFs are transmitted through the Am5 reference point to the VIM.
On the other hand, the assignment of communication infrastructures (eNBs and RSUs) to SDN controllers is accomplished by an SDN application named Designator, which receives the decisions taken by the autonomic module through the Am6 reference point.
The decision and assignment/deployment process can be observed in Figure 3. It starts with the Monitor sending a Req. Reg. Flux message requesting data about the flow rules of the SDN applications via REST API. These applications forward the requests to the corresponding controllers using the northbound interface, which in turn request the infrastructure services using OpenFlow as the southbound protocol; the reply is sent back as a Resp. Reg. Flux message. Then, the message Req. Stat. MEC is sent to the VIM requesting the computational resource usage of the VNFs hosting the SDN controllers. The response message Resp. Stat. MEC is sent back to the Monitor. Upon receiving the data, the Monitor forwards them to the Aggregator for processing. Next, the Autonomic Manager runs the optimization heuristic to decide on the resource allocation to controllers based on the data received from the Aggregator. After the heuristic execution, detailed in the next section, the message Req. Creat. VNF is sent to the VIM with the goal of creating the new controllers along with their optimized computational resources. At the end of this step, the message Resp. Creat. VNF is sent to the Autonomic Manager. In the sequel, the message Req. Sol. Assig is sent to the Designator SDN application, which is responsible for effecting the assignment of communication infrastructures (eNBs and RSUs) to the SDN controllers. The autonomic module receives the message Resp. Sol. Assig when the allocation is concluded. Finally, the Autonomic Manager requests from the VIM the exclusion of the controllers not assigned to infrastructures via the message Req. Del. NFV. By performing the exclusions at the end of the whole process, the latency of flow rules due to reassignments is reduced, since the ongoing flows of the excluded controllers have already been switched to the new ones.
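The ordering above (create new controllers first, reassign infrastructures, and delete unused controllers only at the end) can be sketched as one pass of the autonomic loop. The function and message-handler signatures below are illustrative assumptions, not the actual component interfaces:

```python
# One pass of the Monitor -> Aggregator -> Autonomic Manager cycle sketched
# from Figure 3; all signatures here are illustrative assumptions.
def monitoring_cycle(flow_stats, vnf_stats, decide, create_vnf, assign, delete_vnf):
    """flow_stats: {rsu_id: n_requests} collected via the SDN apps (Am2);
    vnf_stats: per-VNF resource usage collected from the VIM (Am1)."""
    # Aggregator: discard malformed samples before decision making.
    demand = {rsu: n for rsu, n in flow_stats.items() if n >= 0}
    # Autonomic Manager: run the allocation heuristic on the aggregated data.
    plan = decide(demand, vnf_stats)
    # Create the new controllers first (Req. Creat. VNF) ...
    for ctrl in plan["new_controllers"]:
        create_vnf(ctrl)
    # ... then reassign infrastructures via the Designator (Req. Sol. Assig) ...
    for rsu, ctrl in plan["assignments"].items():
        assign(rsu, ctrl)
    # ... and release unused controllers only at the end (Req. Del. NFV), so
    # ongoing flows have already been switched to the new controllers.
    for ctrl in plan["unused_controllers"]:
        delete_vnf(ctrl)
    return plan
```

The point of the sketch is the ordering: deletions happen last, so no RSU is ever left pointing at a controller that has already been removed.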
4. Problem Formulation
In the proposed architecture depicted in Figure 1, vehicular connectivity to the infrastructure is achieved through access points such as (IEEE 802.11p/DSRC) RSUs, but it is extensible to small cells or RRHs (Remote Radio Heads). For the sake of modeling, we consider that all RSUs have the same coverage and that the MEC servers are located at the gNB. Each RSU is directly connected to the gNB site through a one-hop high-speed connection, forming an arrangement. The mobile core is not shown in Figure 1 for modeling purposes. Our system considers reliable RSU connections, but our architecture allows switching the connectivity to the gNB when vehicles have both interfaces. Both the RSUs and the gNB are OpenFlow-enabled, whereas the SDN controllers are instantiated at the MEC server.
We assume that the existing controllers have sufficient capacity to meet vehicle requests and that vehicles must first request content through an access point in order to acquire services. When there is no flow rule to handle the request, the RSU issues a packet-in message to an SDN controller at the MEC to acquire a flow rule. Then, the controller responds by sending a flow-mod message to the RSU to install a new entry in its flow table. The Monitor periodically tracks the number of requests from the RSUs and decides upon the assignment of RSUs to SDN controllers in order to balance the vehicular traffic/requests.
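The packet-in/flow-mod exchange described above can be illustrated with a minimal sketch. The class and message names below are simplifications for illustration, not the OpenFlow protocol itself or the authors' implementation:

```python
# Minimal sketch of the reactive flow setup described above; class and
# message names are simplifications, not the OpenFlow protocol itself.
class Controller:
    def __init__(self):
        self.packet_ins = 0  # counter the Monitor would periodically read
    def on_packet_in(self, flow):
        self.packet_ins += 1
        return ("flow-mod", flow)  # new entry for the RSU's flow table

class RSU:
    def __init__(self, controller):
        self.controller = controller
        self.flow_table = {}
    def handle(self, flow):
        # Table miss: ask the controller; otherwise forward with the cached rule.
        if flow not in self.flow_table:
            _, rule = self.controller.on_packet_in(flow)
            self.flow_table[flow] = rule
        return self.flow_table[flow]
```

Note that only the first packet of a flow reaches the controller; subsequent packets match the installed flow-table entry, which is why the packet-in count is a reasonable proxy for the per-RSU demand monitored by DSDVMEC.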
Furthermore, the RSUs are connected to their arrangement's gNB through Ethernet links to reduce 5G connectivity costs. Let M be the set of RSUs; each RSU j ∈ M is responsible for receiving requests from vehicles, and r_j denotes the total number of requests from vehicles to RSU j. A binary variable x_i represents whether or not controller i from the set of controllers N is picked. The capacity of controller i is denoted by c_i, and the number of requests r_j from vehicles to the RSUs assigned to a controller should not surpass its capacity c_i. Besides, a resource reservation factor 0 ≤ ω ≤ 1 is adopted to avoid overloading controllers during resizing periods.
Table 4 defines the notation used in this article. Figure 4 shows the flowchart of the proposed optimization approach. The objective function defined in Equation (1) aims to minimize the amount of wasted resources; Constraint (2) ensures that the sum of the requests assigned to a controller does not exceed its capacity; Constraint (3) ensures that each RSU is assigned to exactly one controller; Constraints (4) and (5) define the binary variables.
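A formalization consistent with this description can be written as follows. The symbols (x_i for controller selection, a_ij for the RSU-to-controller assignment, r_j for RSU demand, c_i for controller capacity, ω for the reservation factor) are our reconstruction of the model's notation, not necessarily the exact form in the original equations:

```latex
\begin{align}
\min \quad & \sum_{i \in N} \Big( \omega\, c_i\, x_i - \sum_{j \in M} r_j\, a_{ij} \Big)
  && \text{(1) wasted capacity} \\
\text{s.t.} \quad & \sum_{j \in M} r_j\, a_{ij} \le \omega\, c_i\, x_i, \quad \forall i \in N
  && \text{(2) controller capacity} \\
& \sum_{i \in N} a_{ij} = 1, \quad \forall j \in M
  && \text{(3) single assignment} \\
& x_i \in \{0, 1\}, \quad \forall i \in N
  && \text{(4)} \\
& a_{ij} \in \{0, 1\}, \quad \forall i \in N,\ j \in M
  && \text{(5)}
\end{align}
```

Under this reading, the objective sums, over the selected controllers, the usable capacity ω·c_i minus the demand actually assigned to them, so an optimal solution leaves no usable capacity idle.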
Heuristic
This section describes the heuristic implemented by the Autonomic Manager, named aSDNalloc (autonomic SDN allocator). aSDNalloc dynamically allocates computational resources to the VNFs that run the SDN controllers and their applications. This solution is of fundamental importance to DSDVMEC, since it reduces the amount of idle resources allocated to the controllers deployed in an MEC-NFV environment, which is not a resourceful data center. aSDNalloc was designed to run in the shortest possible time without sacrificing the quality of the obtained solution, since the execution time may directly impact the response time to infrastructure requests to the SDN controllers. The aSDNalloc decision is based on three questions that must be considered by the heuristic:
How many controllers are required to meet the vehicular demand without overloading or underutilizing their resources?
How many computational resources (CPU, memory, bandwidth, storage) should be allocated to the SDN controllers?
Which infrastructures (eNBs and RSUs) should be assigned to the SDN controllers?
aSDNalloc was conceived based on the bin packing problem, since there is a need to reduce the number of required bins. In the context of SDN allocation, this means that the objective is to minimize the number of VNFs allocated to meet the demand from the communication infrastructures. This way, the heuristic decides on the suitable number of controllers. aSDNalloc considers four input parameters, described below:
List of resource capacity templates, V: a list representing the capacities of each template describing the computational resources created in the MEC in order to allocate SDN controllers. This list provides the information the heuristic needs to decide on the most suitable set of resources so that controllers can adequately support each infrastructure's demand. As an example, consider an entry (x, z), meaning that template x has the capacity to meet a demand z. Thus, aSDNalloc can group a set of n infrastructures by assuming that the demand of such a set must be less than or equal to the capacity z.
List of requests by RSU, D: the set of all RSUs and eNBs and their corresponding demands to be supported by the SDN controllers. Each entry is described as (a, t), where a is the infrastructure identifier and t its respective number of requests.
ω: threshold for the maximum utilization of VNFs. It aims to avoid resource starvation due to the excessive utilization of computational resources, which could degrade the QoS. Thus, aSDNalloc allows defining the fraction of allocated resources that may be utilized to support a demand. For example, with ω = 0.9, 90% of the resources can be used, while the remaining 10% correspond to a reservation that grants QoS and amortizes the demand due to vehicle mobility among communication infrastructures.
β: a multiplier for the infrastructure load used to determine the minimum capacity of a newly created controller. This parameter aims at reducing the number of SDN controllers, hence diminishing the management complexity of the environment. As a hypothetical example, β = 10 should be interpreted in the following way: if there is no already allocated controller with enough available resources to meet the demand of a certain infrastructure, a new controller is created with a minimum capacity 10 times greater than the demand of that infrastructure.
The pseudo-code is shown in Algorithm 1. aSDNalloc first assigns to r the RSU with the greatest number of requests from the list D, and to c the controller with the largest wasted capacity from the list G (lines 3–4). Then, it checks whether c has enough spare capacity to support the number of requests from r. If this condition is satisfied, the RSU is assigned to the controller, which is inserted in the list G (lines 5–6). Otherwise, a new controller must be created. To identify how many resources should be allocated to the new controller, the smallest template v is removed from the templates list (lines 7–11). Then, aiming at creating a controller with a minimum of β times the capacity required to serve RSU r, v is compared with (β · t_r), where t_r is the demand of r (line 12). The β parameter is a demand multiplier that aims to reduce the number of allocated controllers. Finally, a new controller is created considering both the template v and the amount of reserved resources given by ω, and r is assigned to it (lines 13–14). These steps are repeated until there are no RSUs left to be assigned.
Algorithm 1 Proposed Algorithm - aSDNalloc
Input: List of requests by RSUs, D
List of resource capacity templates, V
Demand multiplier, β
Amount of resource reservation, ω
Output: List of controllers with their resource capacity and respective assigned RSUs, G
1: G ← ∅
2: while D ≠ ∅ do
3:   r ← remove the RSU with the greatest number of requests from D
4:   c ← the controller in G with the largest wasted capacity
5:   if spare(c) ≥ requests(r) then
6:     assign r to c and update G
7:   else
8:     V′ ← V
9:     d ← requests(r)
10:    while V′ ≠ ∅ do
11:      v ← remove the smallest template from V′
12:      if v · ω ≥ β · d then
13:        create a new controller c′ with capacity v
14:        assign r to c′; G ← G ∪ {c′}
15:        break
16:      end if
17:    end while
18:  end if
19: end while
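The heuristic can be sketched in Python as below. The data layout (plain dicts), the fallback to the largest template when none satisfies the β condition, and the function names are our assumptions for illustration, not the authors' implementation:

```python
# Hedged sketch of aSDNalloc as described in Algorithm 1; the data layout
# (plain dicts) and names are assumptions, not the authors' code.
def asdnalloc(demands, templates, beta, omega):
    """demands: {rsu_id: n_requests}; templates: iterable of capacities;
    beta: demand multiplier; omega: max-utilization fraction (e.g., 0.9)."""
    controllers = []  # each: {"cap": capacity, "load": requests, "rsus": [...]}
    # Serve RSUs in decreasing order of demand (bin-packing style).
    for rsu, req in sorted(demands.items(), key=lambda kv: -kv[1]):
        # Controller with the largest spare (usable) capacity, if any.
        best = max(controllers,
                   key=lambda c: omega * c["cap"] - c["load"], default=None)
        if best is not None and omega * best["cap"] - best["load"] >= req:
            best["load"] += req
            best["rsus"].append(rsu)
        else:
            # Smallest template whose usable share covers beta times the
            # demand; fall back to the largest template if none does.
            cap = None
            for t in sorted(templates):
                cap = t
                if omega * t >= beta * req:
                    break
            controllers.append({"cap": cap, "load": req, "rsus": [rsu]})
    return controllers
```

With β = 1, each new controller is just large enough for the triggering RSU; with larger β, fewer but bigger controllers are created, which is the trade-off evaluated in Section 5.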
In order to illustrate the idea behind the proposed heuristic, Figure 5 shows a simple example using aSDNalloc. The first step consists in removing the most demanding infrastructure from the list, which must then be assigned to a controller with an idle capacity greater than or equal to its demand, while respecting the reservation constraint ω. If no existing controller meets these requirements, suitable resources from the list of capacities are assigned to the VNF in charge of executing a new SDN controller, taking into account that its minimum capacity must be greater than or equal to the infrastructure demand multiplied by β. This step is repeated until there is no infrastructure left to be assigned to VNFs. It is possible to observe that, in the third iteration, the RSU with three requests is designated to VNF-2, which had already been assigned the RSU with five requests and had four units of idle resources. At the end of the iterations, VNF-1 remained with two idle resource units plus the reservation given by ω, while VNF-2 had one idle resource unit and another unit in reserve according to the value defined for ω. Finally, the wasted capacity of all VNFs currently executing controllers is checked in order to identify any underutilized controllers. To this end, a comparison among the utilized capacity, the wasted capacity, and the list of capacities is carried out: since all infrastructures have already been assigned, it remains to identify whether there is any overprovisioned VNF and, if so, to perform resource reallocation. The quality of the obtained solution is given by the sum of the controllers' idle capacities, which equals 0 (zero) if the solution is optimal. Thus, the closer to zero, the better the quality of the solution provided by aSDNalloc.
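The final quality and overprovisioning checks described above can be sketched as follows; the dict layout for controllers is an assumption for illustration:

```python
# Sketch of the solution-quality check described above (assumed dict layout):
# total idle capacity across the VNFs; 0 means the solution is optimal.
def wasted_capacity(controllers):
    return sum(c["cap"] - c["load"] for c in controllers)

def overprovisioned(controllers, templates):
    # VNFs whose current load would fit into a strictly smaller template
    # are candidates for resource reallocation.
    return [c for c in controllers
            if any(t < c["cap"] and t >= c["load"] for t in templates)]
```

The first function is the quality metric (closer to zero is better); the second flags the VNFs whose resources should be resized in the reallocation step.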
5. Experimental Analysis
We conducted several analyses considering two approaches: the analytical model, solved with IBM ILOG CPLEX, and the heuristic, written in Python. Two VANET scenarios were defined. The first employed 50, 100, and 150 RSUs and aimed at comparing the results of the analytical model with the heuristic approach. The second is a high-density scenario with 5000, 10,000, and 15,000 RSUs, which, due to the intractable time required by the analytical model, considered only the heuristic.
The first and second scenarios adopted 500 and 10,000 distinct vehicle-density instances, respectively. Each instance represents a new RSU monitoring process carried out by the Monitor module, with each vehicle being responsible for one request and the sum of the requests determining the RSU demand. Similarly to [53], vehicle mobility is considered in our experiments by means of scenarios with different vehicle densities.
The main goal of this experiment is to verify the applicability of the heuristic for real-time decision making in high-density scenarios. For both scenarios, the parameter β assumed the values 1, 10, 50, and 100. The RSU coverage radius was set to 250 m. Considering the capacity of a controller as ∼7500 requests per second [37], we adopted the template capacities 100, 500, 1000, 2000, 4000, 8000, 10,000, 12,000, and 15,000. The vehicle density was chosen randomly within the interval [0, 250] vehicles/km, as in [53]. For all experiments, the resource reservation is set to 10% (i.e., ω = 0.9), and the graphs are plotted with a confidence interval of 95%.
5.1. Wasted Resources
The first two analyses, depicted in Figure 6 and Table 5, aim to verify how efficient the heuristic is, by comparing it with the exact model and by assessing its efficiency in the high-density scenario, respectively. Figure 6 illustrates that, with β = 50 and 100, there is an increase in wasted resources, but a reduction in the number of SDN controllers. Adopting β = 1, the heuristic approach allocated more resources to controllers than the exact model. However, as the number of RSUs increases, the heuristic becomes more efficient, wasting less than 15% of resources in the worst case. In dense scenarios, the heuristic outcome improves further, with less than 1% of wasted resources in all analyzed cases. It is noteworthy that all results in Table 5 had a CI variation of less than 0.02%.
5.2. Load Balancing
To verify the load balancing of the proposed solution, we considered the metric from [54], which is computed from the differences between the most and the least loaded controllers. Results vary in the range [0, 1]: the closer to 0, the better the result, while the value 1 corresponds to the worst load-balancing case. Table 6 shows that, in the first scenario, the heuristic becomes more efficient in both resource allocation and load balancing as the vehicle density increases, regardless of the value adopted for the β parameter.
In the high-density scenario (Table 7), the result was 0.02 in the worst case and 0.00 in the best case, matching the behavior of the first scenario. This result shows that the proposed solutions are not only efficient in allocating resources, but also promote load balancing among the allocated SDN controllers.
5.3. Management Complexity
As more controllers are required, the more complex the management of the environment becomes. For example, considering the scenario with 15,000 RSUs, a huge number of flows will require synchronization among controllers, which may cause packet flooding and, consequently, the degradation of application and network performance. Therefore, the parameter β plays a crucial role in avoiding the unnecessary allocation of SDN controllers and the consequent increase in complexity and synchronization overhead.
It is worth mentioning that an increased number of controllers is not necessarily related to an increase in wasted resources, as evidenced by the result presented by the model for 150 RSUs, where there was a reduction in the number of controllers compared to the experiment with 100 RSUs. This behavior results from the number of templates providing different capacities to the controllers, as shown in Figure 7.
For the high-density scenario evaluation shown in Figure 8, the number of controllers changed significantly as the scale increased. However, the heuristic remained efficient and kept the percentage of wasted resources low by adopting templates with higher capacities; with higher values of β, the number of controllers is consequently reduced.
5.4. Scalability
Considering the dynamic aspect of the VANET environment, it is essential to understand the trade-off between the time required to deploy the SDN controllers into the MEC and the effectiveness of the achieved solution for both the exact model and the heuristic. As shown in Figure 9, as the scenario increases from 50 to 150 RSUs, the exact model becomes intractable due to the time required to obtain the optimum solution. On the other hand, as shown in Figure 10, even in the scenario with 15,000 RSUs, the heuristic executes within milliseconds for β = 1, and the execution time is further reduced as β increases to 100. The results show that, to decrease the heuristic execution time, it is necessary to adopt higher values of β, thereby reducing the number of template selection tasks for the controllers. However, in the specific case of β set to 50 and 100 with 15,000 RSUs, the values are similar. This fact can be explained by the number of controllers created, considering that more time is required for the heuristic to create new controllers than to assign RSUs to existing ones.
5.5. Limitations and Future Research Directions
In this work, the placement problem of SDN controllers in the IoV was not addressed. The choice of which controller serves which RSUs impacts the latency between the RSUs and the controllers and, consequently, the time between the packet-in generated by an RSU and the flow rule sent by the controller back to that RSU. Although the results obtained in the heuristic evaluation are promising, the study still lacks evaluations with scenarios using real vehicular traces; we intend to evaluate our proposal with such traces in future work. When adopting architectures that employ multiple controllers, one point to be considered is the communication necessary for synchronization between these controllers. One way to deal with this problem is the east-westbound communication present in controllers such as ONOS and OpenDaylight. It is also worth mentioning that the increase in the number of controllers directly impacts the volume of control traffic. All the abovementioned issues will be addressed in our future work.