Article

Efficient Collaborative Edge Computing for Vehicular Network Using Clustering Service

1 Institut de Recherche en Informatique, Mathématiques, Automatique et Signal, University of Haute Alsace, 68000 Colmar, France
2 College of Computing and Information Sciences, University of Technology and Applied Sciences, Sur 411, Oman
* Authors to whom correspondence should be addressed.
Network 2024, 4(3), 390-403; https://doi.org/10.3390/network4030018
Submission received: 25 June 2024 / Revised: 22 August 2024 / Accepted: 30 August 2024 / Published: 6 September 2024
(This article belongs to the Special Issue Convergence of Edge Computing and Next Generation Networking)

Abstract

Internet of Vehicles (IoV) applications are known to be critical and time-sensitive. The value proposition of edge computing lies in its lower latency, reduced bandwidth consumption, privacy, manageability, processing efficiency, and mobility support, all of which aim to improve vehicular and traffic services. IoV and edge computing have already been combined successfully to support smooth mobility and the use of local resources. However, vehicle travel, especially high-speed movement and intersections, can cause IoV devices to lose their connection and/or to be processed with high latency. This paper proposes a Cluster Collaboration Vehicular Edge Computing (CCVEC) framework that aims to guarantee and enhance the connectivity between vehicle sensors and the cloud by utilizing the edge computing paradigm in the middle. These objectives are achieved through cluster management strategies deployed between the cloud and edge computing servers. The framework is implemented on OpenStack cloud servers and evaluated by measuring throughput, latency, and memory in two different scenarios. The results obtained show promising indications in terms of latency (approximately 390 ms versus the ideal status) and throughput (30 kB/s), and the framework thus appears acceptable in terms of performance as well as memory usage.

1. Introduction

The success stories of cloud computing demonstrate a significant evolution in information technology, especially in terms of providing new business opportunities. Cloud computing has therefore received considerable attention in both research and industry [1,2]. End-users can obtain on-demand network access, machine resources (vCPU, memory), applications, or an entire infrastructure. As a result, our daily lives have become widely dependent on cloud computing for storing backups, writing/editing online documents, collaborating and sharing information in business, and playing games online [3,4]. However, certain applications require efficient connectivity, low latency, and fast response times, especially real-time IoT applications such as healthcare operations and vehicle traffic [5]. This is because the cloud, by its nature, resides in the far-end network. To this end, edge computing was introduced as a supplementary paradigm that places an intermediate anchor between users and the cloud [6].
Edge computing is an extension of cloud computing and shares most of its features [7,8]. On one hand, cloud computing hosts the heavy processing, in-depth analysis, and power-hungry workloads offloaded from IoT devices. On the other hand, edge computing handles time-critical data processing, such as decision-making and fast data retrieval. At the same time, edge servers periodically share and update their data with the cloud data center [9]. In other words, edge computing focuses on local, small-scale activities and real-time intelligent analysis. Thus, data are stored and processed regionally, with less data uploaded to the cloud. The general benefit is a large reduction in network load, together with bandwidth efficiency, faster response times, and low delay [10].
Edge computing is designed to be located as near to the data source or end-users as possible. As a result, the storage of used/retrieved data and part of the computing tasks are carried out in the edge computing node, which in turn reduces intermediate data transmission and communication costs. In effect, the cloud service is pushed to the network edge, and the resources required for computation and storage are moved into the proximity of the end device to achieve reduced latency and, to a large extent, energy conservation. One important sector that benefits the most from all the aforementioned edge computing features is the Internet of Vehicles (IoV). The IoV, by nature, requires high-speed, accurate, and scalable systems while dealing with contemporary traffic challenges.
In IoV technology, the vehicle is presented as a smart object equipped with sensing, processing, and storage capabilities, connected to other nodes such as edge servers, the cloud, or other vehicles; these connections constitute what is called vehicle-to-everything (V2X) communication [11]. Under the V2X communication model, the IoV may use various connection technologies, including wireless LAN (WLAN), cellular networks such as 5G, and Bluetooth. Several IoV applications have been implemented to support passengers and traffic managers, and to provide network access for drivers in urban traffic environments [12]. Annual reports indicate millions of injuries caused by traffic accidents, and that 90 percent of these accidents are caused by human error. Leveraging the ever-increasing development of IoT and mobile networks, the IoV is expected to decrease traffic accident rates and meet the needs of traffic efficiency and traveling convenience. However, this kind of service requires ultra-low latency communications (e.g., for warnings of intersection collision dangers and merging arbitration) and fast processing, which is a big challenge for IoV applications [13]. In addition, issues remain unresolved in the traditional network, including the need to meet various pressing demands in heterogeneous and complex vehicular scenarios.
This paper proposes a collaborative mobile edge computing framework that guarantees and enhances the connectivity between vehicles (IoV sensors) and cloud servers by utilizing the mobile edge computing paradigm in the middle. The edge server aims to continuously provide stable connections to vehicles moving between regions at high speed. The proposed framework ensures that the edge server keeps track of the vehicle’s movement and associates it with the correct/nearest edge server by sharing the vehicle’s vital information. The proposed framework is implemented using an OpenStack cloud server in front of three edge servers. The evaluation is conducted using three parameters, namely throughput, latency, and memory, in two different scenarios. The main objective of this research is to guarantee and enhance connectivity with an acceptable communication speed. Aligned with these objectives, the paper contributes (1) the design of collaborative edge servers based on regional cluster management; and (2) the implementation of the cluster of edge computing servers using the OpenStack cloud.
We begin by presenting the background of edge computing in Section 2. Related work is reviewed in Section 3. The proposed framework is explained in Section 4, and Section 5 presents the implementation. Section 6 describes the evaluation methodology, and the results are discussed in Section 7. The conclusion is provided in Section 8.

2. Background

2.1. Edge Computing

In contrast to cloud computing, edge computing performs computation at the edge of the network to bring all kinds of computing closer to the source of the data [2,14]. Edge computing has been introduced and defined by different researchers from different perspectives. For example, Professor M. Satyanarayanan at Carnegie Mellon University defined edge computing as follows: “Edge computing is a new computing model that deploys computing and storage resources (such as cloudlets, micro data centers, or fog nodes, etc.) at the edge of the network closer to mobile devices or sensors” [14]. Shi et al. defined the concept of edge computing as follows: “Edge computing is a new computing mode of network edge execution. The downlink data of edge computing represents cloud service, the uplink data represents the Internet of Everything, and the edge of edge computing refers to the arbitrary computing and network resources between the data source and the path of cloud computing center.” [15,16].
China’s edge computing industry describes edge computing as follows: “Near the edge of the network or the source of the data, an open platform that integrates core capabilities such as networking, computing, storage, applications, and provides edge intelligent services nearby to meet the industry agility key requirements in connection, real-time business, data optimization, application intelligence, security, and privacy” [17]. In other words, edge computing migrates part of the cloud network’s capabilities to the edge of the network, near the IoT source. Conversely, the IoT source devices migrate their heavy processing, fast decision-making, and power-consuming tasks to the corresponding edge computing network. Figure 1 compares the network load of the IoV using the cloud with that of the IoV using the edge server.
Edge computing is utilized to enhance vehicular services by distributing computation tasks among edge servers and vehicular end devices. In this case, the application/services will be closer to the vehicular end-device users or even perform computing directly in the vehicles. There are many advantages in the field, such as higher efficiency, lower latency, and proximity services. Several approaches push cloud computing capabilities to the network edge, including fog computing, MEC, and cloudlets. Although all solutions in edge computing have the same intention and motivation, various paradigms also have their particular features. The concepts are partially overlapping but complementary as well. Table 1 illustrates a comparison between three edge computing paradigms.

2.2. Vehicular Edge Computing: An Overview

Vehicular edge computing (VEC) represents a significant breakthrough emerging from the application of MEC to vehicular networks. VEC aims to migrate vital operations such as computing, communication, and caching resources into the proximity of vehicular end devices. Thus, VEC plays an important role in addressing the ever-increasing demands of edge devices for low delay, fast response times, and high bandwidth. In contrast to traditional MEC, the main contribution of VEC is to provide smooth mobility for vehicles while accounting for dynamic topological changes and complicated communication factors caused by the rapidly varying channel environment over time [18].
In general, vehicles handle certain operations through local computation, communication, and storage resources. Edge servers are often placed close to vehicles at roadside units (RSUs) for gathering, processing, and updating data. Due to their limited storage and computation capacity, vehicles offload heavy computation and latency-sensitive tasks to edge servers, which can effectively decrease the response time and efficiently alleviate the heavy burden on backhaul networks [19,20]. The VEC framework can instantly provide requested content from caching nodes without connecting to the core network, which improves end-to-end latency and supports efficient use of network bandwidth [21,22]. Due to constrained capacity, however, edge nodes cannot cache all contents. The vehicular edge computing architecture, shown in Figure 2, comprises the following components:
  • Vehicular terminals: This term refers to any vehicle-connected device rather than ordinary mobile nodes.
  • Edge servers: In VEC, RSUs often act as edge servers, which are geographically distributed along the sides of roads. In this research, the edge server is the device that hosts the design and implementation of the proposed idea.
  • Cloud servers: Cloud servers usually reside at the far end of the network and periodically obtain the uploaded information from edge servers.
Among mobile edge computing (MEC), fog computing (FC), and cloudlet computing (CC), fog computing is the most involved in the computing layer, leveraging devices such as M2M wireless routers. The main aim of fog computing nodes (FCNs) is to compute and store data from local end devices prior to forwarding them to the cloud. MEC, in turn, consists of intermediate nodes with processing and storage functionalities located at the base stations of cellular networks or at any edge server in the network. Cloudlet computing is implemented on dedicated devices capable of acting as small-scale data centers located near consumers. The latter technique allows end devices to offload computation to cloudlet devices and benefit from data-center-like resource provisioning [5].

3. Related Work

Cloud computing technologies have been studied and analyzed in several fields, and vehicular edge computing, as part of network traffic systems, is widely involved in this domain. Several research activities have contributed ideas, designs, and implementations.
The authors in [23] divided the road area into several “lane section IDs” and assigned each one to the corresponding edge server. This strategy was implemented over a dynamic map system, and all vehicles are connected to edge servers by linking multiple edge servers. The main achievement of this work is reducing the impact of unstable radio signals so that vehicle data can be aggregated without being affected. The evaluation focused on the effectiveness of the load balancing and the scalability of the dynamic map system across multiple edge servers.
The authors in [18] proposed a novel method to achieve a high level of collaboration among different edge computing anchors by introducing a collaborative VEC framework called CVEC. In particular, CVEC aims to enhance the scalability of vehicular services and applications in two directions, through horizontal and vertical collaboration. In addition, the authors discussed the architecture, its possible cases, and the technical enablers that support CVEC. The notable results of this work are the low execution delay, acceptable times for delivering and offloading to the cloud, and acceptable computation time at the cloud.
In [24], the authors proposed a mobile healthcare framework utilizing an edge–fog–cloud collaborative network. Health monitoring parameters are processed at the edge and fog layers, with the cloud at the far side performing further data analysis such as abnormal-status detection. Changes in the patient’s position during transportation are critical, and potentially fatal, in emergency cases. Because health-related applications suffer from low data delivery and connection interruptions, the authors proposed considering the mobility information of the end-user along with a pattern detection scheme inside the cloud, which directs users to nearby health centers. The theoretical analysis demonstrates that the proposed framework minimizes the energy consumption and delay of IoT devices, and the experimental analysis shows high precision, good recall, and time efficiency compared to existing models.
The work in [25] aimed to propose an efficient IoT architecture with mobility-aware scheduling and allocation protocols for healthcare. The authors supported the smooth mobility of patients through an adaptive received signal strength (RSS) hand-off technique. The proposed architecture dynamically distributes healthcare tasks among fog/cloud servers. The authors implemented the idea using a mobility-aware heuristic-based scheduling and allocation approach (MobMBAR). In particular, MobMBAR balances the distribution of task execution based on the patients’ mobility and the spatial residual of sensed data. The reported results reduce the total scheduling time by accounting for most task features, including the criticality of individual tasks and the higher response time during the reallocation phase. The performance was validated through simulations against existing solutions.
The authors in [11] reviewed and addressed the gap between IoV and IoFV by presenting an in-depth analysis of the key differences between them. The major focus of that paper is an examination of the technological components, communication protocols, network infrastructure, data management, applications, objectives, challenges, and future open issues and trends related to both IoV and IoFV. The paper also analyzes the roles of machine learning, artificial intelligence, and blockchain, contributing to a deeper understanding of the implications and potential technologies in the context of transportation systems.
The paper in [12] presented an overview of the autonomous vehicle concept by discussing the major technologies in the 5G IoV environment. The authors also presented security perspectives by describing several possible attacks in this environment, along with practices for meeting autonomous vehicle security needs. In particular, they reviewed major cyberattacks concerning information and communication security, and some solutions for addressing them. Table 2 presents a brief summary of the related works.

4. The CCVEC Framework

This section illustrates the design of the framework by presenting the general steps and components involved in achieving the desired objectives. The main aim of the Cluster Collaboration Vehicular Edge Computing (CCVEC) framework is to support efficient mobility for vehicles that move quickly from one region to another under a cluster of edge computing servers. Edge servers cooperate to smoothen a vehicle’s mobility by sharing/exchanging its information in advance. The CCVEC framework implements cluster management to ease the sharing of information among edge computing servers [20,21], which in turn ensures high connectivity and low latency for vehicle data streaming, especially under high-speed mobility and at intersections [26]. The framework steps are explained in detail in the following:
  • Initially, all edge servers (cluster nodes) register themselves with the cloud point (cluster head). The cluster head populates the geolocation topology information of each edge node. The cluster head also updates the cluster information whenever a server announces that it is joining/leaving the range;
  • Once a vehicle connects to an edge server, this (current) server shares the vehicle’s information with the next potential server. At the same time, the previous server keeps the same information active to keep track of the vehicle’s path;
  • Next, the edge server notifies the cloud of the vehicle’s new position for the calculation of the next position;
  • When the vehicle moves to the next server, Step 2 is repeated.
From the steps above, the framework ensures that the vehicles’ information will be pre-populated and pre-processed in the next station edge server prior to its arrival. This avoids extra connection setup and at the same time reduces unnecessary data connections with the cloud. Figure 3 illustrates the major components involved in this framework.
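To make the steps above concrete, the following is a minimal, self-contained sketch of the cluster-management logic in Python. It is illustrative only: the class and function names (ClusterHead, EdgeNode, report_position) are our own, and the actual CCVEC implementation relies on OpenStack/Senlin services rather than standalone code.

```python
# Illustrative sketch of the CCVEC steps above (registration, sharing,
# position reporting). Names are hypothetical, not the paper's code.
from dataclasses import dataclass, field


@dataclass
class EdgeNode:
    name: str
    location: tuple                                   # (x, y) of the edge server
    vehicles: dict = field(default_factory=dict)      # vehicle_id -> shared state


class ClusterHead:
    """Cloud-side cluster head that tracks the edge-node topology (Step 1)."""

    def __init__(self):
        self.nodes = {}

    def register(self, node: EdgeNode):
        self.nodes[node.name] = node

    def next_edge(self, position: tuple, current: EdgeNode) -> EdgeNode:
        """Pick the nearest other edge node to the reported position (Step 3)."""
        candidates = [n for n in self.nodes.values() if n is not current]
        return min(candidates,
                   key=lambda n: (n.location[0] - position[0]) ** 2 +
                                 (n.location[1] - position[1]) ** 2)


def report_position(head: ClusterHead, current: EdgeNode,
                    vehicle_id: str, position: tuple) -> EdgeNode:
    """Steps 2-4: update the vehicle state and pre-populate the next server."""
    state = current.vehicles.setdefault(vehicle_id, {})
    state["position"] = position
    nxt = head.next_edge(position, current)
    nxt.vehicles[vehicle_id] = dict(state)            # shared before arrival
    return nxt


# Minimal usage example with two edge nodes
head = ClusterHead()
e1, e2 = EdgeNode("edge-1", (0.0, 0.0)), EdgeNode("edge-2", (0.0, 1.0))
head.register(e1)
head.register(e2)
print(report_position(head, e1, "vehicle-42", (0.0, 0.6)).name)   # -> edge-2
```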

5. Implementation

The framework was subjected to a real-time implementation with three real servers as a proof of concept of the proposed idea. The CCVEC framework was designed and implemented using OpenStack [27] for the cloud and edge computing logic. OpenStack is an open-source cloud operating system designed to manage and control the entire computing infrastructure, including storage, compute resources, and networking. A further benefit is that OpenStack also provides services and platforms for end-users and enterprises. In this research, the Senlin clustering service is used and configured to create and manage the cluster nodes with orchestration. Initially, a basic cluster profile was created for all cluster nodes using $ openstack cluster profile create myserverProf. Next, using Senlin, the cluster was created from the predefined profile and name using $ senlin cluster-create -p myserverProf clusterName. Later, we set the cluster size to three nodes using $ senlin cluster-resize --capacity 3 clusterName. It is worth mentioning that all the cluster nodes and the cloud run with a basic profile to avoid complexity and performance trade-offs. In other words, all settings were kept at their defaults to obtain a proof of concept of our claim.
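For convenience, the three CLI calls quoted above can be scripted as follows. This is only a thin wrapper around the commands named in the text, assuming the same placeholder profile and cluster names; adapt them to the actual deployment.

```python
# Thin wrapper around the OpenStack/Senlin CLI commands quoted above.
# The profile and cluster names are the placeholders used in the text.
import subprocess


def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)


# 1. Create the basic profile shared by all cluster nodes
run(["openstack", "cluster", "profile", "create", "myserverProf"])

# 2. Create the cluster from that profile
run(["senlin", "cluster-create", "-p", "myserverProf", "clusterName"])

# 3. Resize the cluster to three edge nodes
run(["senlin", "cluster-resize", "--capacity", "3", "clusterName"])
```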
This work makes use of the receiver service in OpenStack, which captures the events of vehicles connecting to a cluster node. An API was then used to share vehicle information (e.g., vehicle_ID) between the current and the next node. Figure 4 shows the message passing between network components.
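As an illustration of this hand-off, the sketch below posts a vehicle record from the current edge node to the next one over HTTP. The endpoint path and payload fields are assumptions made for the example; the paper only states that an API shares vehicle information such as vehicle_ID between nodes.

```python
# Hypothetical example of sharing vehicle information with the next node.
# The /vehicles endpoint and the payload fields are illustrative assumptions.
import json
import urllib.request


def share_vehicle_info(next_node_url: str, vehicle_id: str, position: tuple) -> int:
    payload = json.dumps({
        "vehicle_id": vehicle_id,       # identifier shared between edge nodes
        "position": position,           # last reported position of the vehicle
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{next_node_url}/vehicles",    # assumed endpoint on the next edge node
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.status


# e.g. share_vehicle_info("http://edge-2.example:8080", "vehicle-42", (0.0, 0.6))
```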

6. Evaluation Methodology

This section explains the testing methodology of the proposed framework by describing the environment setup, the scenario specifications, and the testing parameters used to obtain the desired results.

6.1. Testing Environment

The testing platform of the proposed framework is described in this section by presenting the testing methodology, the traffic load, and the evaluation parameters. All the devices (cloud, edges, and vehicles) involved in this test are connected to the local area network (wired). It is worth mentioning that the evaluation does not concern the cloud’s native functionalities, such as creating/deleting VMs, reading/writing to storage, and controlling VMs; the test simply sends a plain string that the destination sinks. Instead, the evaluation examines the connection flow between cluster nodes in order to provide a proof of concept and validate the proposed idea.
The test-bed environment in this research uses the VMTP [28] measurement tool, a data path performance evaluator built specifically for OpenStack clouds.

6.2. Testing Scenarios

Scenario 1: The default configuration file uses “admin-openrc.sh” as the rc file, which generates six standard sets of performance data. The configuration file was modified to collect one case for scenario 1. Scenario 1 collects the results for the case of east–west flows between VMs in the same network with a private fixed IP (flow 1), which evaluates the connection between the edge and cloud nodes, as they can be assigned fixed IPs. The network traffic was measured with the ICMP and TCP protocols. The configuration file and protocol selection were specified with the command “python vmtp.py -r admin-openrc.sh -p admin --protocols IT”, as can be seen in Figure 5a. In this scenario, MongoDB was used to store the information related to the scenario configuration, using the database “client_db” with the collection name “pns_web_entry”. The database is reached through the fields vmtp_mongod_port, vmtp_db, and vmtp_collection. The time period for this test is set to 120 s (instead of the default 10 s) for the throughput measurement between the edge and cloud server using the line “python vmtp.py --host [email protected] --host [email protected] --time 120”.
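As a side note, the stored results can be read back from MongoDB with a few lines of Python. The host and port below are placeholders; only the database and collection names ("client_db", "pns_web_entry") come from the configuration described above.

```python
# Reading back the VMTP entries stored in MongoDB (host/port are placeholders;
# the database and collection names are those given in the scenario description).
from pymongo import MongoClient

client = MongoClient("mongodb://127.0.0.1:27017")   # port set via vmtp_mongod_port
collection = client["client_db"]["pns_web_entry"]   # vmtp_db / vmtp_collection

for entry in collection.find():
    print(entry)   # one document per stored VMTP run
```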
Scenario 2: In scenario 2, the configuration file was edited to generate one case out of the six standard sets of performance data. Scenario 2 was conducted to obtain results for the case of east–west flows between VMs in different networks with a fixed IP (flow 2). This path evaluates the connection between the moving vehicle and the edge node.
The ICMP and TCP protocols were used to measure the connection between the moving vehicle and the edge server using the command “python vmtp2.py -r admin-openrc.sh -p admin --protocols IT”, as can be seen in Figure 5b. The time period for each test is set to 120 s to obtain an accurate throughput measurement between vehicles and the edge server using the line “python vmtp.py --host [email protected] --host [email protected] --time 120”.
It is important to note that the vehicle device transmits arbitrary data towards the edge computing node, regardless of which application is used. Scenario 2 is invoked once the vehicle moves from one edge server to the next; this transition is performed manually, as vehicle mobility is outside the scope of this research.

6.3. Evaluation Parameters

The proposed mechanism is tested using three parameters: round trip time (RTT) in ms, which measures the latency and is calculated by Equation (1); throughput (TH) in kB/s, calculated by Equation (2); and memory usage (RAM), which examines the memory footprint of each instance of data retrieval and is calculated by Equation (3). The test was loaded with 500 concurrent connections as the target load.
AvgLatency = (RTT_s1 + RTT_s2)/2 + (RTT_c1 + RTT_c2)/2        (1)
TH = I/T + (RTT_c1 + RTT_c2)/2        (2)
Mem = 100 − ((m_f + b + c) × 100)/m_v        (3)
where RTT is the round trip time in s, and the subscripts s and c denote the server and the client, respectively. In the throughput equation, I is the inventory and T is the time. m_f is the free memory, b the buffer, c the memory cache, and m_v the maximum value.
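The following sketch evaluates Equations (1)–(3) as written above. The functions only mirror the reconstructed formulas; the sample values are arbitrary and serve purely as a usage illustration.

```python
# Helper functions mirroring Equations (1)-(3); sample inputs are arbitrary.

def avg_latency(rtt_s1, rtt_s2, rtt_c1, rtt_c2):
    """Equation (1): mean server-side plus mean client-side round trip time."""
    return (rtt_s1 + rtt_s2) / 2 + (rtt_c1 + rtt_c2) / 2


def throughput(inventory, time, rtt_c1, rtt_c2):
    """Equation (2): inventory over time plus the mean client round trip time."""
    return inventory / time + (rtt_c1 + rtt_c2) / 2


def memory_usage(mem_free, buffers, cache, max_value):
    """Equation (3): percentage of memory in use."""
    return 100 - (mem_free + buffers + cache) * 100 / max_value


print(avg_latency(0.08, 0.09, 0.10, 0.12))      # seconds
print(memory_usage(1200, 300, 500, 32000))      # percent of total memory
```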

7. Results

This section discusses the most relevant results of this research. The results were obtained from two different scenarios, corresponding to the two legs of the framework connections. Scenario 1 tests the connection between the vehicle device and the edge computing server. Scenario 2 tests the connection between the edge and cloud computing servers in terms of cluster management.
Scenario 1: Figure 6 compares the latency with the ideal status. In the ideal case, the vehicle keeps transferring data towards the edge server; the test transfers a simple string, which takes only 120 ms of latency and then increases by around 15%. We expected stability here, given the lack of operations involved in the ideal session. The difference registered by the proposed CCVEC framework is therefore clear: it spends 163 ms at 100 loads, and the latency rises with the load to 283 and 373 ms at 200 and 300 loads, respectively. This latency is attributed to the edge server being occupied with other connections to the cloud and to the neighboring cluster nodes. Another interesting finding is that the vehicle encounters extra latency when it moves to a new server, but it benefits from the pre-processed information that has already been shared by the previous server.
Figure 7 presents the throughput of the proposed mechanism. During the ideal session, the edge server receives sufficient throughput from the vehicle device, 57.1 kB/s at 100 loads, which is considered a high value; in this case, the edge server operates normally with the other cluster nodes. However, when the vehicle moves to the next edge server, the CCVEC framework is invoked and the throughput decreases to approximately 50% of the ideal status, because the edge server is occupied with the cloud and the next/previous server.
Figure 8 shows the impact of the CCVEC framework on the edge servers when performing read operations from the data center (cloud). The local database (edge) consumes less memory when it connects to the far-end cloud server under ideal operation, i.e., 5.9% of memory. However, once the CCVEC framework is invoked, it starts to read/write the vehicle’s information, such as vehicle_ID and the high-resolution map, and memory consumption increases accordingly to 6.9% when a load of 400 is applied.
Scenario 2: This scenario presents an interesting part of the results because it measures the vital activities between the edge and the cloud server.
Figure 9 describes the time taken to handle the cluster nodes when the vehicle transfers to a new server. It spends 147 ms at 100 loads, which is 91 ms higher than the ideal case. This variance is acceptable since it is less than 1 s, and we attribute it to the connection with the cloud, the next edge server, and some database retrieval. In addition, it is worth mentioning that, at a load of 500, the CCVEC spends 501 ms, a larger difference from the ideal case because of the intersection session between the two neighboring clusters. The intersection might occur with the previous server as well.
Figure 10 presents the throughput of the proposed CCVEC framework while communicating with the cloud and neighboring servers. In the ideal case, little traffic is sent from the edge server towards the cloud, 26.2 kB/s at 100 loads, since no framework operations are in progress; this amount of data is attributed to the operations that regularly occur in the system, such as new events in the cluster change/update policy or cluster nodes going up/down. A load of 500 shows a promising indication, presenting a 58 kB/s difference from the ideal status; this difference reflects the cost of all the activities created by the cluster, such as update, add, remove, etc. Moreover, the test is conducted in the same network with fixed IPs to avoid other network obstacles.
Memory consumption is measured when the vehicle starts to read/write from the near-edge server. The vehicle updates its track, loads the map, sends/receives its ID, and so on. Over the whole load range, the proposed framework reached its highest memory footprint (6.8%), which is still acceptable even when a load of 500 is applied, as indicated in Figure 11.

8. Conclusions

By seamlessly merging MEC and vehicular networks, VEC extends the functionalities of the cloud toward the edge of the network, allocating resources in close proximity to vehicles. Several IoV applications are connected to the edge of their network to fulfill increasing requirements in terms of computation and storage resources. Technically, the data collected from these vehicles have to be transmitted elsewhere, such as to the cloud, to be processed. In earlier designs, the cloud was considered too far from the vehicles, while traffic services/applications require very fast and accurate real-time data processing near the vehicles. Edge computing is a solution that supports resource-intensive data processing closer to the vehicle’s device. However, during fast mobility, vehicles encounter slow connections while traveling from one edge server realm to another. This research proposed a framework that guarantees and enhances the connection between the vehicle and the edge computing servers, with the edge server continuously providing a stable connection to traveling vehicles. The proposed CCVEC framework is implemented and evaluated using OpenStack cloud technology alongside three cluster servers. The throughput, latency, and memory of the entire framework were measured in two different scenarios. We conclude that the proposed framework performs acceptably, spending 163 ms at 100 loads; the latency increases with the load to 283 and 373 ms at 200 and 300 loads, respectively, values that remain within acceptable latency margins. Scenario 2 reflects the same perspective when 500 loads are applied, where the throughput is only 60 kB/s from the ideal status. The general finding of this research is that the framework facilitates and smoothens the transition of a vehicle from one realm to the next, and vehicle applications are supported with informed decisions in terms of edge server selection. Future work will extend the idea with new collaborations between vehicles that exchange vital information on the go, which will improve the latency profile and benefit security when sharing sensitive traffic data.

Author Contributions

A.A.-A. and P.L.; methodology, A.A.-A., P.L. and A.M.; validation, A.A.-A. and P.L.; formal analysis, A.A.-A., P.L. and A.M.; investigation, A.A.-A. and P.L.; data curation, A.A.-A. and P.L.; writing—original draft preparation, A.A.-A. and P.L.; writing—review and editing, A.A.-A., P.L. and A.M.; supervision, P.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CC  Cloudlet Computing
CCVEC  Cluster Collaboration Vehicular Edge Computing
CPU  Central Processing Unit
CVEC  Collaborative Vehicular Edge Computing
FC  Fog Computing
FCN  Fog Computing Node
ICMP  Internet Control Message Protocol
ID  Identification
IoT  Internet of Things
IoV  Internet of Vehicles
IoFV  Internet of Flying Vehicles
IP  Internet Protocol
MEC  Mobile Edge Computing
RAM  Random-Access Memory
RSS  Received Signal Strength
RSU  Roadside Unit
RTT  Round Trip Time
TCP  Transmission Control Protocol
TP  Throughput
V2X  Vehicle-to-Everything
vCPU  Virtual Central Processing Unit
VEC  Vehicular Edge Computing
VM  Virtual Machine
VMTP  Virtual Machine Throughput Performance

References

  1. De Donno, M.; Tange, K.; Dragoni, N. Foundations and evolution of modern computing paradigms: Cloud, iot, edge, and fog. IEEE Access 2019, 7, 150936–150948. [Google Scholar] [CrossRef]
  2. Cao, K.; Liu, Y.; Meng, G.; Sun, Q. An overview on edge computing research. IEEE Access 2020, 8, 85714–85728. [Google Scholar] [CrossRef]
  3. Wang, B.; Wang, C.; Huang, W.; Song, Y.; Qin, X. A survey and taxonomy on task offloading for edge-cloud computing. IEEE Access 2020, 8, 186080–186101. [Google Scholar] [CrossRef]
  4. Kolhar, M.; Abu-Alhaj, M.M.; Abd El-atty, S.M. Cloud data auditing techniques with a focus on privacy and security. IEEE Secur. Privacy 2017, 15, 42–51. [Google Scholar] [CrossRef]
  5. Ahvar, E.; Orgerie, A.-C.; Lebre, A. Estimating energy consumption of cloud, fog, and edge computing infrastructures. IEEE Trans. Sustain. Comput. 2022, 7, 277–288. [Google Scholar] [CrossRef]
  6. Ren, J.; Yu, G.; He, Y.; Li, G.Y. Collaborative cloud and edge computing for latency minimization. IEEE Trans. Veh. Technol. 2019, 68, 5031–5044. [Google Scholar] [CrossRef]
  7. Nayyer, M.Z.; Raza, I.; Hussain, S.A.; Jamal, M.H.; Gillani, Z.; Hur, S.; Ashraf, I. LBRO: Load balancing for resource optimization in edge computing. IEEE Access 2022, 10, 97439–97449. [Google Scholar] [CrossRef]
  8. Du, J.; Zhang, G.; Yuan, X.; Zang, X. P2SPA: Privacy preservation strategy with pseudo-addresses for edge computing networks. IEEE Access 2024, 12, 40962–40972. [Google Scholar] [CrossRef]
  9. Zhao, P.; Yang, Z.; Zhang, G. Personalized and differential privacy-aware video stream offloading in mobile edge computing. IEEE Trans. Cloud Comput. 2024, 12, 347–358. [Google Scholar] [CrossRef]
  10. Alanhdi, A.; Toka, L. A survey on integrating edge computing with ai and blockchain in maritime domain, aerial systems, iot, and industry 4.0. IEEE Access 2024, 12, 28684–28709. [Google Scholar] [CrossRef]
  11. Herich, D.; Vaščák, J. The Evolution of Intelligent Transportation Systems: Analyzing the Differences and Similarities between IoV and IoFV. Drones 2024, 8, 34. [Google Scholar] [CrossRef]
  12. Saber, O.; Mazri, T. Security of Autonomous Vehicles: 5g Iov (internet of Vehicles) Environment. In The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences; Copernicus GmbH: Göttingen, Germany, 2022; Volume XLVIII-4/W3-2022, pp. 157–163. [Google Scholar]
  13. Liu, L.; Chen, C.; Pei, Q.; Maharjan, S.; Zhang, Y. Vehicular edge computing and networking: A survey. Mob. Netw. Appl. 2021, 26, 1145–1168. [Google Scholar] [CrossRef]
  14. Satyanarayanan, M. The emergence of edge computing. Computer 2017, 50, 30–39. [Google Scholar] [CrossRef]
  15. Shi, W.S.; Zhang, X.Z.; Wang, Y.F. Edge computing: State-of-the-art and future directions. Comput. Res. Dev. 2019, 56, 69–89. [Google Scholar]
  16. Shi, W.; Sun, H.; Cao, J.; Zhang, Q.; Liu, W. Edge computing-an emerging computing model for the Internet of everything era. J. Comput. Res. Dev. 2017, 54, 907–924. [Google Scholar]
  17. Hong, X.; Wang, Y. Edge computing technology: Development and countermeasures. Strateg. Study Chin. Acad. Eng. 2018, 20, 20–26. [Google Scholar] [CrossRef]
  18. Wang, K.; Hao, Y.; Wei, Q.; Geyong, M. Enabling collaborative edge computing for software defined vehicular networks. IEEE Netw. 2018, 5, 112–117. [Google Scholar] [CrossRef]
  19. Zhao, Y.; Wei, W.; Li, Y.; Carlos, C.; Massimo, T.; Jie, Z. Edge computing and networking: A survey on infrastructures and applications. IEEE Access 2019, 7, 101213–101230. [Google Scholar] [CrossRef]
  20. Al-Allawee, A.; Mihoubi, M.; Lorenz, P.; Abakar, K.S. Efficient dispatcher mechanism for sip cluster based on memory utilization. In Proceedings of the ICC 2023—IEEE International Conference on Communications, Rome, Italy, 28 May–1 June 2023. [Google Scholar]
  21. Al-Allawee, A.; Lorenz, P.; Abouaissa, A.; Abualhaj, M. A performance evaluation of in-memory databases operations in session initiation protocol. Network 2023, 3, 1–14. [Google Scholar] [CrossRef]
  22. Dong, C.; Zhou, J.; An, Q.; Jiang, F.; Chen, S.; Pan, L.; Liu, X. Optimizing Performance in Federated Person Re-Identification through Benchmark Evaluation for Blockchain-Integrated Smart UAV Delivery Systems. Drones 2023, 7, 413. [Google Scholar] [CrossRef]
  23. Hosono, K.; Maki, A.; Watanabe, Y.; Takada, H.; Sato, K. Implementation and evaluation of load balancing mechanism with multiple edge server cooperation for dynamic map system. IEEE Trans. Intell. Transp. Syst. 2022, 23, 7270–7280. [Google Scholar] [CrossRef]
  24. Anwesha, M.; Shreya, G.; Aabhas, B.; Soumya, K.; Rajkumar, B. Internet of health things (ioht) for personalized health care using integrated edge-fog-cloud network. J. Ambient Intell. Hum. Comput. 2021, 12, 943–959. [Google Scholar]
  25. Abdelmoneem, R.M.; Abderrahim, B.; Eman, S. Mobility-aware task scheduling in cloud-fog iot-based healthcare architectures. Comput. Netw. 2020, 179, 107348. [Google Scholar] [CrossRef]
  26. Abualhaj, M.; Al-Zyoud, M.; Hiari, M.; Alrabanah, Y.; Anbar, M.; Amer, A.; Al-Allawee, A. A fine-tuning of decision tree classifier for ransomware detection based on memory data. Int. J. Data Netw. Sci. 2024, 8, 733–742. [Google Scholar] [CrossRef]
  27. Cody, B. OpenStack in Action; Manning: New York, NY, USA, 2016. [Google Scholar]
  28. VMTP Is a Data Path Performance Measurement Tool for OpenStack Clouds. Available online: https://vmtp.readthedocs.io/en/latest/ (accessed on 1 March 2024).
Figure 1. Edge vs. cloud server.
Figure 2. Architecture of vehicular edge computing.
Figure 3. CCVEC framework components.
Figure 4. Passing messages.
Figure 5. Testing scenario: (a) VM to VM with the same network; (b) VM to VM with a different network.
Figure 6. Round trip time in ms (latency) for scenario 1.
Figure 7. Throughput (kB/s) in scenario 1.
Figure 8. Memory usage (%) in scenario 1.
Figure 9. Round trip time in ms (latency) for scenario 2.
Figure 10. Throughput (kB/s) in scenario 2.
Figure 11. Memory usage (%) in scenario 2.
Table 1. Comparison of different edge computing paradigms.

Item | MEC | Fog Computing | Cloudlet
Node types | Servers in base stations | Routers, access points, gateways | Small data center
Deployment location | Network edge | Network edge | Remote network
Context awareness | High | Medium | Low
Proximity | One or multiple hops | One hop | One hop
Scope | Medium | Small | Small
Cooperation | No | Partially | No
Supporting mobility | Yes | Yes | Yes
Business interest | 5G requirements | Internet of Things | Mobile applications
Scalability | High | High | High
Software architecture | Mobile orchestrator | Fog abstraction | Cloudlet agent
Table 2. Related works summary.

Reference | Year | IoT Type | Algorithm
Ref. [23] | 2022 | Vehicle-dynamic traffic map | Dynamic map system
Ref. [18] | 2018 | Vehicular services | CVEC
Ref. [24] | 2021 | Healthcare | Edge-fog-cloud
Ref. [25] | 2020 | Healthcare | MobMBAR
Ref. [11] | 2024 | IoV and IoFV | Review
Ref. [12] | 2022 | 5G IoV | Review
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
