Efficient Multi-Player Computation Offloading for VR Edge-Cloud Computing Systems

Abstract: Virtual reality (VR) is considered one of the main use cases of the fifth-generation cellular system (5G). It has also been categorized as an ultra-low latency application, with VR applications requiring an end-to-end latency of 5 ms. However, the limited battery capacity and computing resources of mobile devices restrict the execution of VR applications on these devices. As a result, mobile edge-cloud computing is considered a new paradigm that mitigates the resource limitations of these devices through a low-latency computation offloading process. To this end, this paper introduces an efficient multi-player, multi-task computation offloading model with guaranteed performance in network latency and energy consumption for VR applications based on mobile edge-cloud computing. This model is formulated as an integer optimization problem whose objective is to minimize the sum cost of the entire system in terms of network latency and energy consumption. A low-complexity algorithm is then designed that provides a comprehensive procedure for deriving the optimal computation offloading decision in an efficient manner. Furthermore, we provide a prototype and real implementation of the proposed system using OpenAirInterface software. Finally, simulations have been conducted to validate our proposed model and show that the network latency and energy consumption can be reduced by up to 26.2%, 27.2% and 10.9%, 12.2% in comparison with edge and cloud execution, respectively.


Introduction
Virtual reality (VR) is considered one of the most widely announced applications of the fifth-generation cellular system (5G) [1][2][3]. In addition, many applications and market sectors are expected to be introduced in many areas of life [4]. Furthermore, augmented reality (AR) is a new technology that enables the augmentation of real objects in the surrounding environment with, for example, perceptual information extracted from multiple sensory modes, including haptic, visual and auditory sensors [5,6]. In addition to that, the recent developments and advances in sensory

• Considering a multi-level environment for multi-player computation offloading, especially when the number of mobile devices is large and the resources of the edge server are not sufficient, is an important issue.
• In the MEC system, most complex mobile applications have many tasks that need to be offloaded and executed. Therefore, addressing the multi-task issue is important.

• Computing resources on the edge and cloud servers, together with the computing tasks of the mobile devices, are the main factors that determine the efficiency of a multi-level, multi-player, multi-task edge-cloud computing system. It is therefore crucial to have an effective policy that jointly manages them.

Motivated by these considerations, in this paper we introduce a reliable VR system able to support ultra-low latency VR applications with the announced specifications. This system deploys a multi-level, multi-player, multi-task computation offloading environment that provides computing resources at the edge of the RAN, which can reduce the communication latency of the VR tasks. In addition, we formulate the computation offloading as an integer optimization problem whose objective is to minimize the sum cost of the entire system in terms of network latency and energy consumption. The main contributions of this paper include:

• An efficient computation offloading model is formulated as an integer optimization problem with the objective of minimizing the sum cost of the entire system in terms of network latency and energy consumption for multi-level, multi-player, multi-task edge-cloud computing systems. In addition, our environment considers a single cloud computing server connected with the edge computing server via an intelligent core network built on SDN technology, which provides more resources when the number of VR devices increases and the resources of the edge server become insufficient.

• An efficient algorithm has been designed that provides a comprehensive procedure for deriving the optimal computation offloading decision.

• Three main VR applications have been considered: multi-player VR games, holograms and 360-degree VR video applications.
• Finally, simulations have been conducted to validate our proposed model and show that the network latency and energy consumption can be reduced by up to 26.2%, 27.2% and 10.9%, 12.2% in comparison with edge and cloud execution, respectively. In addition, we provide a prototype and real implementation of the proposed system using OpenAirInterface software.
The rest of the paper is organized as follows. Section 2 introduces three main VR applications. Section 3 reviews related work on computation offloading policies. Section 4 presents our system model for multi-level multi-player with multi-task computation offloading and the designed algorithm. Simulation experiments and prototype implementation are conducted in Section 5. Finally, Section 6 concludes the paper.

Holograms over Proposed System
To increase application effectiveness, augmented reality and virtual reality are increasingly used in conjunction with other technologies, for example, Internet of Things applications, the Tactile Internet and holographic telepresence [23][24][25][26]. Holographic presence makes AR/VR more spectacular for the user and allows virtual holograms, which are volumetric color images, to be seen. Modern equipment can create practical holograms that are indistinguishable from real objects. This effect is achieved, in part, by accurately tracking the user's position in a given space and playing a stereoscopic image that depends on the user's location. AR/VR technologies allow static or animated objects to be projected into real environments, thereby expanding the physical world. Earlier designs of holograms for AR are based on so-called air displays, sometimes also called free-space displays. Projected graphic objects are displayed in the air on free projection surfaces, such as a barely visible fog wall or a fog screen created by an installed fan [27]. One of the most popular devices is the HoloLens from Microsoft [28]. Since both Microsoft HoloLens and AR glasses are capable of tracking head movements, they create the impression of a constant presence of holographic geospatial objects in the user's environment. Even if the user walks around a certain area, usually indoors, the holograms remain and adapt to the user's location and viewing perspective. This constant and adaptable holographic projection can lead to visualization approaches that bring additional benefits for cognitive processing. Presented as the first stand-alone holographic computer, HoloLens unites the physical and digital worlds, allowing users to interact with digital content and with holograms in mixed reality. The work in [29] is devoted to a technical assessment of the use of HoloLens for multimedia applications.

Multiple Players VR Games over Proposed System
For a better illustration of the operation of the cloudlet and the other higher-layer edge-cloud units, an example of multi-player VR games is considered. In multi-player VR games, players use their VR-supported devices to play an intended game; however, these games run on a remote application server. The proposed system is introduced in order to reduce latency and make efficient use of the energy and computing resources of users' VR devices. Figure 1 presents a multi-player VR game running over the proposed system. A VR user with limited computing or energy resources searches the surroundings for devices with available computing and energy resources capable of hosting the computing tasks of the VR game user. Such a device is referred to as a cloudlet and may be a powerful smartphone, notebook or tablet. Computing tasks are offloaded from the VR user to the cloudlet over a D2D communication interface. WiFi Direct represents an efficient interface and can be deployed with the method proposed in [6]. The offloading process is carried out based on the developed algorithm introduced in Section 4.4.
If the VR game user cannot find a nearby cloudlet, it turns to offloading its computing tasks to the next level of edge-cloud units, i.e., the micro-cloud edge servers connected to cellular base stations. Micro-cloud edge servers are small edge units with limited computing and energy resources. These servers are deployed to provide computing and energy resources at the edge of the RAN and thus achieve higher latency efficiency and reduce the traffic passed to the core network. Micro-cloud edge servers receive and handle computing tasks from VR users or from corresponding cloudlets that do not have sufficient computing or energy resources.
Based on the available resources and the considered offloading algorithm, micro-cloud edge servers handle the received computing tasks or offload them to the higher edge-cloud units, i.e., the mini-cloud units. Each group of micro-cloud units is connected physically with a higher-capability edge-cloud unit, referred to as a mini-cloud. All distributed mini-cloud edge units are connected directly to the core network cloud, which represents the interface to the application server.

360-Degree Video Streaming over Proposed System
In order to illustrate the benefits of the proposed system, another important VR application is considered. The 360-degree video streaming technology has become a requirement for many VR applications. Local execution of video processing yields low performance for high-resolution video applications, e.g., 4K videos and higher resolutions. To this end, MEC technology should be involved, and video computing tasks should be offloaded over an appropriate communication link to the edge-cloud server. Moreover, an efficient data offloading scheme should be introduced for efficient offloading of the computing tasks.
The introduction of heterogeneous distributed edge-cloud servers provides computing and energy resources to mobile VR devices, so video processing and decoding tasks can be offloaded. The considered MM-MEC system can be used to achieve high efficiency for 360-degree video applications on mobile VR devices. However, this kind of VR application requires a proper communication interface that achieves high spectral efficiency, suitable for video applications with the required QoE.
To achieve high transmission QoS for VR 360-degree video applications, we consider millimeter wave (mmWave) as the communication interface. The IEEE 802.11ad standard is a multi-gigabit wireless standard that uses the V band at a frequency of 60 GHz. The use of high wireless bandwidth is efficient for achieving a higher capacity for video-based applications. The main issue with mmWave compared with traditional interfaces, e.g., WiFi, is the limited communication range; thus, it has been recommended for outdoor applications rather than indoor ones. Due to recent advances in antenna design, techniques have been developed for adapting mmWave to indoor applications.

Related Works
In recent years, numerous approaches and optimization models have been proposed to address the challenges of mobile devices using MEC by applying the computation offloading concept. Most of these studies handle only two levels of computation offloading in MEC systems [20,21], while few address multi-level computation offloading [22]. In this section, a brief overview of the common approaches is given.
Colman-Meixner et al. [30] introduced a 5G City and discussed how advanced media services, such as ultra-high-definition video and augmented and virtual reality, will be facilitated using 5G technology. In addition, the opportunities provided by 5G technology and the changes in the work of telecommunications service providers are studied. Furthermore, three different use cases are presented, and their use in public networks, as well as the advantages of using this model for infrastructure owners and media service providers, is described. In [31], Elbamby et al. studied the problem of low-latency wireless virtual reality networks. To solve this problem, the authors proposed using information about user positions, proactive computing and caching to minimize computation latency.
Real prototypes for VR applications have been implemented, some of which use edge computing [32,33], while others do not [34,35]. Among the prototypes that use edge computing, Hou et al. [32] discussed how to enable a portable and mobile VR device with VR glasses to connect wirelessly to edge computing devices. In addition, the authors explored the main issues associated with this new approach to wireless VR with edge computing and various application scenarios. Furthermore, they analyzed the delay requirements for enabling wireless VR and studied several possible solutions. Computation offloading for VR gaming applications was considered in [33], where minimizing the network latency is the main goal.
Meanwhile, in [34], Hsiao et al. dealt with issues related to information security and addressed the shortcomings of existing security systems in augmented reality (AR) technologies, artificial intelligence, wireless, 5G, big data, massive computing and virtual stores. In [35], Le et al. addressed computation offloading over mmWave for mobile VR, using 360-degree video streaming as a case study. First, the authors noted that 360-degree video streaming requires more bandwidth and faster user response. Mobile virtual reality (VR) devices locally process video decoding, post-processing and rendering; however, their performance is not sufficient for streaming high-resolution video, such as 4K-8K. Therefore, the authors proposed an adaptive computation offloading scheme using millimeter-wave (mmWave) communication. This offloading scheme lets the mobile device share video decoding tasks with a powerful PC, improving the mobile VR device's ability to play high-definition video. mmWave 802.11ad wireless technology promises broadband wireless that improves the throughput of multimedia systems.
In 5G networks, AR/VR applications will be among the leading applications in the Ultra-Reliable and Low-Latency Communication category [20,36]. More specifically, in [20], Liu et al. proposed a computation offloading framework for Ultra-Reliable Low-Latency communications in which computation tasks are divided into sub-tasks that are offloaded and executed at nearby edge server nodes. In addition, the authors formulated optimization problems that jointly minimize the latency and the offloading failure probability. Furthermore, three heuristic search-based algorithms were designed to solve this problem and derive the computation offloading decision. However, a cloud computing environment, which could be leveraged when the edge server resources are not sufficient, is not considered in that work. Similarly, in [36], Viitanen et al. described the basic functionality and a demo installation for the remote control of 360-degree stereo virtual reality (VR) games. They proposed a low-latency approach in which the execution of the VR game is offloaded from the end-user device to the edge-cloud server. In addition, the controller feedback is transmitted over the network to the server, from which the rendered game views are transmitted to the user in real time as encoded HEVC video frames. Finally, this approach showed that the energy consumption and computational load of the end terminals are reduced by utilizing the latest advances in network connection speed.
It is observed from the above review of related work that computation offloading has been investigated for different objectives, and most studies address only two-level architectures in MEC systems. Moreover, most of these studies address a single user, or multiple users with only a single computation task each. This motivates the work of this paper, which jointly considers computation offloading in multi-level, multi-player, multi-task edge-cloud computing systems. Our work aims to minimize the sum cost of the entire system in terms of network latency and energy consumption.

System Model
In this section, we introduce the system model adopted in this paper. As shown in Figure 2, we consider a set of N VR game devices, where each device has a set of M independent computation tasks that need to be completed. These devices are connected via a wireless channel to a single base station, which is equipped with a mobile edge computing server and connected to a centralized cloud computing facility via the core network. We denote the set of VR devices and their computation tasks as N = {1, 2, . . . , N} and M = {1, 2, . . . , M}, where each computation task can be executed locally on the device itself or offloaded and processed remotely on the edge server or cloud server.
In the following subsections, the communication and computation models are presented in more detail, followed by the formulation of the optimization problem for our model. The notation used in this study is summarized in Table 1.

Communication Model
We first introduce the communication model of the edge-cloud system, in which a single base station is connected with a set N of VR game devices through a wireless channel; edge computing resources are associated with the base station, which is connected with cloud computing via the core network. In addition, each device runs a VR mobile game application with M independent computation tasks that need to be completed.
Let us denote by α_{i,j,k} the binary computation offloading decision for computation task j of VR device i, which defines the execution place of the task. More specifically, α_{i,j,0} = 1 indicates that computation task j of VR device i will be executed locally using the VR device's resources, while α_{i,j,1} = 1 and α_{i,j,2} = 1 indicate that the task will be offloaded and processed remotely at the base station and cloud server, respectively. Overall, each computation task j must be executed exactly once, whether locally (k = 0) or remotely (k ∈ {1, 2}), i.e., ∑_{k=0}^{2} α_{i,j,k} = 1. Following the Shannon law, the maximum uplink and downlink data rates of each VR device, over which its computation task data are transmitted, can be calculated as [4]:

r_i^U = B^U log_2(1 + p_i G_0 / (ω_0 B^U)),    (1)
r_i^D = B^D log_2(1 + p_bs G_0 / (ω_0 B^D)),    (2)

where B^U and B^D denote the uplink and downlink channel bandwidth, p_i and p_bs denote the transmission power of VR device i and of the base station, and ω_0 and G_0 denote the noise power density and the channel gain between the VR device and the base station due to path loss and shadowing attenuation.
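As a rough sketch of this rate computation (the function and parameter names are ours, and the numeric values are purely illustrative):

```python
import math

def shannon_rate(bandwidth_hz, tx_power_w, channel_gain, noise_density):
    """Maximum achievable data rate (bits/s) via the Shannon formula:
    r = B * log2(1 + p*G0 / (w0*B))."""
    snr = (tx_power_w * channel_gain) / (noise_density * bandwidth_hz)
    return bandwidth_hz * math.log2(1.0 + snr)

# Illustrative uplink: 20 MHz bandwidth, 0.1 W transmit power,
# channel gain 1e-6, noise power density 1e-13 W/Hz.
r_up = shannon_rate(20e6, 0.1, 1e-6, 1e-13)
```

The downlink rate is obtained the same way with the base station's transmission power and the downlink bandwidth.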

Computation Model
In this subsection, the computation offloading model is introduced. Firstly, as mentioned above, our system has a single base station connected with a set of N VR game devices, where each device has M independent computation tasks that need to be completed. For each computation task j, we use a tuple {a_{i,j}, b_{i,j}, c_{i,j}, τ_{i,j}} to represent the computation task requirement, where a_{i,j} and b_{i,j} represent the size of the input and output data that need to be transmitted and received, respectively, whereas c_{i,j} and τ_{i,j} represent the total number of CPU cycles and the completion deadline required for task j of VR device i. The values of a_{i,j}, b_{i,j} and c_{i,j} can be obtained through careful profiling of the task execution [37][38][39].
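The task-requirement tuple can be represented, for instance, as a small data class (the field names are illustrative, not from the paper):

```python
from dataclasses import dataclass

@dataclass
class TaskProfile:
    """Requirement tuple {a_ij, b_ij, c_ij, tau_ij} for one computation task."""
    input_bits: float    # a_{i,j}: data uploaded to the server
    output_bits: float   # b_{i,j}: data downloaded back
    cpu_cycles: float    # c_{i,j}: total CPU cycles required
    deadline_s: float    # tau_{i,j}: completion deadline in seconds

# A hypothetical 1 MB-input task whose output is 20% of the input,
# mirroring the ratio used later in the simulations.
task = TaskProfile(input_bits=8e6, output_bits=1.6e6,
                   cpu_cycles=1.2e10, deadline_s=0.005)
```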
Consequently, the computation overhead in terms of execution time and energy consumption for local, edge and cloud execution approaches will be discussed later in detail.

Local Execution
For the local execution approach, where computation task j is executed locally on the VR device itself, the total execution time and energy consumption can be calculated, respectively, as:

T_{i,j}^L = c_{i,j} / f_i^L,
E_{i,j}^L = ζ_i c_{i,j},

where f_i^L denotes the computational capability (CPU cycles per second) of the VR device, and ζ_i is a coefficient denoting the consumed energy per CPU cycle. We set ζ_i = 10^{-11} (f_i^L)^2, where the energy consumption is a superlinear function of the VR device frequency [40,41].
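A minimal sketch of the local-execution cost, under the assumption that the ζ_i = 10^{-11} (f_i^L)^2 coefficient is evaluated with the frequency expressed in GHz (consistent with the device-frequency range used later in the simulations):

```python
def local_execution_cost(cpu_cycles, f_local_ghz):
    """Time (s) and energy (J) for executing a task on the VR device itself."""
    exec_time = cpu_cycles / (f_local_ghz * 1e9)   # T = c / f
    zeta = 1e-11 * (f_local_ghz ** 2)              # energy per CPU cycle
    energy = zeta * cpu_cycles                     # E = zeta * c
    return exec_time, energy

# A 1.2e10-cycle task on a 1 GHz device: 12 s, 0.12 J.
t_loc, e_loc = local_execution_cost(1.2e10, 1.0)
```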

Remote Execution
For the remote execution approach, where computation task j of VR device i is offloaded and processed at the base station or the cloud server, the total execution time can be calculated, respectively, as:

T_{i,j}^E = T_{i,j}^{off} + T_{i,j}^{exe,E} + T_{i,j}^{down},
T_{i,j}^C = T_{i,j}^{off} + T_{i,j}^{exe,C} + T_{i,j}^{down} + 2ξ,

where ξ is a constant denoting the propagation delay for transferring the computation task between the base station and the cloud server.

Let T_{i,j}^{off}, T_{i,j}^{down} and T_{i,j}^{exe,k} denote the offloading, downloading and execution times for processing computation task j of VR device i at the base station or cloud server, respectively, which can be expressed as follows:

T_{i,j}^{off} = a_{i,j} / r_i^U,
T_{i,j}^{down} = b_{i,j} / r_i^D,
T_{i,j}^{exe,E} = c_{i,j} / f_i^E,    T_{i,j}^{exe,C} = c_{i,j} / f_i^C,

where f_i^E and f_i^C denote the computational capability of the base station and of the cloud server assigned to VR device i.

Consequently, the energy consumption for offloading, downloading and processing computation task j of VR device i remotely at the base station or cloud server can be expressed as follows:

E_{i,j}^E = p_i^T T_{i,j}^{off} + p_i^R T_{i,j}^{down} + β T_{i,j}^{exe,E},
E_{i,j}^C = p_i^T T_{i,j}^{off} + p_i^R T_{i,j}^{down} + β T_{i,j}^{exe,C},

where p_i^T and p_i^R denote the transmission and reception power of the VR device, and β is a constant denoting the energy consumed by the device while idle during task processing at the edge or cloud.
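The remote-execution time and device-side energy can be sketched as follows (names are ours; the propagation delay ξ is assumed to be incurred once in each direction on the base-station-to-cloud link, and is zero for edge execution):

```python
def remote_execution_cost(a_bits, b_bits, cpu_cycles, r_up, r_down,
                          f_remote_hz, p_tx, p_rx, beta, prop_delay=0.0):
    """Total time (s) and device-side energy (J) for remote execution."""
    t_off = a_bits / r_up              # upload input data a_{i,j}
    t_exec = cpu_cycles / f_remote_hz  # server-side execution
    t_down = b_bits / r_down           # download result b_{i,j}
    total_time = t_off + t_exec + t_down + 2 * prop_delay
    # Device energy: transmit while uploading, receive while downloading,
    # and idle power beta while the server computes.
    energy = p_tx * t_off + p_rx * t_down + beta * t_exec
    return total_time, energy

# Illustrative edge execution: 1 Mb input at 1 Mb/s, 0.2 Mb output,
# 1e9 cycles on a 1 GHz server share -> 2.2 s total, 0.12 J on the device.
t_rem, e_rem = remote_execution_cost(1e6, 2e5, 1e9, 1e6, 1e6,
                                     1e9, 0.1, 0.05, 0.01)
```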
In view of the communication and computation models, the total overhead of executing computation task j of VR device i in terms of time and energy can be expressed as:

Z_{i,j} = w_i^t T_{i,j} + w_i^e E_{i,j},

where w_i^e, w_i^t ∈ [0, 1] denote the weighting parameters of energy consumption and execution time for VR device i's decision making, respectively, while T_{i,j} and E_{i,j} are the total time and energy, which can be expressed as:

T_{i,j} = α_{i,j,0} T_{i,j}^L + α_{i,j,1} T_{i,j}^E + α_{i,j,2} T_{i,j}^C,
E_{i,j} = α_{i,j,0} E_{i,j}^L + α_{i,j,1} E_{i,j}^E + α_{i,j,2} E_{i,j}^C.
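The per-task weighted cost can then be compared across the three candidate execution places; a toy example with purely illustrative numbers:

```python
def task_costs(times, energies, w_time, w_energy):
    """Weighted cost Z = w_t*T + w_e*E for each candidate execution place.

    times/energies: dicts keyed by place ('local', 'edge', 'cloud').
    """
    return {k: w_time * times[k] + w_energy * energies[k] for k in times}

# Illustrative numbers: local is slow but radio-free; edge is fastest.
times = {"local": 2.0, "edge": 0.5, "cloud": 0.8}
energies = {"local": 1.2, "edge": 0.3, "cloud": 0.4}
z = task_costs(times, energies, w_time=0.5, w_energy=0.5)
best = min(z, key=z.get)  # 'edge' for these values
```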

Problem Formulation
In this section, we consider the issue of achieving efficient computation offloading for multi-player VR edge-cloud computing systems. Given the above communication and computation models, the computation offloading problem is formulated as the following constrained optimization problem:

min_α ∑_{i∈N} ∑_{j∈M} Z_{i,j}    (16)
s.t. C1: E_{i,j} ≤ E^max, ∀i ∈ N, j ∈ M,
     C2: T_{i,j} ≤ τ_{i,j}, ∀i ∈ N, j ∈ M,
     C3: ∑_{k=0}^{2} α_{i,j,k} = 1, ∀i ∈ N, j ∈ M,
     C4: α_{i,j,k} ∈ {0, 1}, ∀i ∈ N, j ∈ M, k ∈ {0, 1, 2}.

The objective of the optimization problem is to minimize the sum cost of the entire system in terms of time and energy through the deployment of task offloading. Constraints C1 and C2 impose upper bounds on the energy and time consumption, respectively. Constraint C3 guarantees that each computation task j is executed exactly once. Finally, constraint C4 guarantees that the computation offloading decision variable is binary.
Since the objective function and all the constraints are linear, the optimization problem (16) is an integer linear program whose optimal solution can be obtained using the branch and bound method [42].
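For small instances, the integer program can be checked by exhaustive enumeration rather than branch and bound; the sketch below assumes, purely for illustration, global energy and time budgets for constraints C1 and C2 (the names and the budget interpretation are ours):

```python
from itertools import product

def solve_offloading(costs, energies, times, e_max, t_max):
    """Pick one execution place k in {0: local, 1: edge, 2: cloud} per task,
    minimizing the summed weighted cost subject to energy and time budgets.

    costs/energies/times: lists of (local, edge, cloud) triples, one per task.
    Returns (best_cost, decisions) or (None, None) if infeasible.
    """
    n = len(costs)
    best_cost, best_choice = None, None
    for choice in product(range(3), repeat=n):
        e = sum(energies[i][k] for i, k in enumerate(choice))
        t = sum(times[i][k] for i, k in enumerate(choice))
        if e > e_max or t > t_max:
            continue  # violates C1 or C2
        c = sum(costs[i][k] for i, k in enumerate(choice))
        if best_cost is None or c < best_cost:
            best_cost, best_choice = c, list(choice)
    return best_cost, best_choice

# Two tasks: task 0 is cheapest at the edge, task 1 at the cloud.
best_cost, decision = solve_offloading(
    costs=[(3, 1, 2), (2, 2, 1)],
    energies=[(0, 1, 1), (0, 1, 1)],
    times=[(5, 1, 2), (5, 1, 2)],
    e_max=10, t_max=10)
# decision -> [1, 2]
```

Enumeration is exponential in the number of tasks, which is why the paper relies on branch and bound (or an ILP solver) for realistic problem sizes.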

Multi-Player Computation Offloading Algorithm
In this subsection, we present the design of our multi-player computation offloading algorithm which provides comprehensive processes for deriving the optimal computation offloading decision of the constrained optimization problem in Equation (16) in an efficient manner.
First, all VR devices initialize their offloading decision as α_{i,j,0} = 1, i.e., local execution. Then, each device uploads its computation task requirements {a_{i,j}, b_{i,j}, c_{i,j}, τ_{i,j}, p_i^T, p_i^R, ζ_i} and its local computation capability f_i^L to the edge server. Afterwards, the edge server calculates the uplink and downlink data rates for each VR device based on the current number of VR players. In addition, the edge server finds the optimal execution place for each computation task (i.e., local, edge or cloud) by solving the optimization problem in Equation (16). Finally, each VR device receives the execution places for its computation tasks from the edge server, thereby minimizing the sum cost of the entire system in terms of time and energy.
Algorithm 1 provides the detailed process of the multi-player computation offloading algorithm in which O(N M) is the time complexity where N and M denote the total number of VR devices and their tasks, respectively.

Multi-Level with Multi-Edge Computing Architecture
In this section, we describe the multi-level, multi-edge, multi-user, multi-task system architecture, which is composed of three levels, as shown in Figure 3. Starting from the bottom up, the first level consists of a set of N VR game devices, each with a set of M independent computation tasks that need to be completed. At the level above, we have a set of K mobile edge computing servers across which the VR game devices are distributed and connected via wireless channels. In addition, we have a backbone router that connects and controls the mobile edge computing servers via wired connections and is designed using SDN technology. Finally, at the last level, we have a single cloud server that can provide more resources and connects with the backbone router through the core network.

Algorithm 1 Multi-Player Computation Offloading Algorithm
1: Initialization: Each VR device i initializes the offloading decision for its computation tasks with α_{i,j,0} = 1, ∀i, j
2: for each VR device i at a given time slot t do
3:   for each computation task j do
4:     Upload the computation task requirements {a_{i,j}, b_{i,j}, c_{i,j}, τ_{i,j}, p_i^T, p_i^R, ζ_i} and the local computation capability f_i^L to the edge server.
5:     Calculate the uplink and downlink data rates r_i^U, r_i^D for each VR device based on Equations (1) and (2).
6:     Solve the optimization problem in Equation (16) and obtain the optimal computation offloading decision values α_{i,j,k} for each computation task of the VR devices, minimizing the sum cost of the entire system.
7:     Send the offloading decision values α_{i,j,k} to each VR device.

Regarding the simulation and results for this architecture, there are some issues that should be handled for computation offloading:

• Scalability Issue: In the multi-edge environment, the number of VR game devices becomes large, with more computation tasks that need to be offloaded and executed remotely. Simultaneously, the mobile edge computing servers should provide resource scalability.
Thus, an intelligent algorithm should be designed to scale the computation resources of an overloaded edge server using the computation resources of neighboring edge servers and the cloud server that are underloaded.

• Load Balancing Issue: As mentioned above, the VR game devices are distributed across the edge computing servers. Due to the random distribution of users, some edge servers will be overloaded while others will be underloaded. Consequently, this will affect the computation offloading process and may lead to poor service quality and long delays due to network congestion. It is therefore important to propose an efficient algorithm that balances the load between the edge servers and improves the quality of service for the VR game users.

• Mobility Issue: In the multi-edge environment, each VR game device can arrive and depart dynamically between the edge servers within a computation offloading period, which is interesting and technically challenging, since the offloading decision and the execution location will be affected. Thus, an intelligent approach should be developed to determine the best execution place for each computation task such that the overall consumption in terms of time and energy is minimized.

• Possible Output: Finally, if the scalability, load balancing and mobility issues are handled as mentioned above, the proposed model can operate and derive the computation offloading decision in an efficient manner. Further, the weighted sum cost of the entire system in terms of energy and time will be optimized.

Simulation Results and Discussion
In our simulation settings, a Python-based simulator is used on a computer equipped with an Intel Core(TM) i7-4770 CPU at 3.4 GHz and 8 GB of RAM, running the Windows 10 Professional 64-bit platform. We consider a multi-player, multi-task edge-cloud computing system with a single cloud server, a single small base station and N = 20 VR devices, where each device has M = 5 independent computation tasks that can be executed locally or offloaded and processed remotely at the available base station or cloud server. Each computation task has an input data size uniformly distributed within the range (5, 15) MB, while the output data size is assumed to be 20% of the input data size. In addition, the number of CPU cycles required per bit for each task is set to 1500 cycles/bit. The CPU computational capability of each VR device is uniformly distributed within the range {0.5, 0.6, . . . , 1.0} GHz, while the CPU computational capabilities of the edge server and the cloud are set to 20 and 50 GHz, respectively. The local computing energy consumption per cycle follows a uniform distribution in the range (0.20 × 10^-11) J/cycle. The other simulation settings employed in the simulations are summarized in Table 2.

Figure 4 shows the energy consumption of executing the computation tasks for the three different scenarios versus different values of the edge server's computation capability. It is observed from the figure that our proposed system achieves the lowest energy consumption. Specifically, the energy consumption of the edge server scenario decreases as the edge server computation capability increases, becoming lower than that of the cloud execution scenario. This is because the energy consumption decreases as the VR device is allocated more resources, whereas the cloud execution and local execution scenarios are not affected because they do not depend on the edge server resources.
In addition, our proposed system selects the best execution place (i.e., local, edge or cloud).

Figure 5 presents the processing time of executing the computation tasks for different edge server capabilities. It is seen in this figure that the cloud execution and local execution policies are not affected by the edge server capability, whereas the execution time of our proposed system and of the edge server policy gradually decreases as the edge server capability increases. This is because latency becomes shorter as the VR devices are allocated more resources.

The processing time and energy consumption of executing the computation tasks over different values of input data size (the input data size is uniformly distributed within the range (0, i) MB, where i is the value on the x-axis) are shown in Figures 6 and 7, respectively. It can be deduced from the figures that, cost-wise (time and energy), our proposed system achieves better performance and is able to maintain a lower overhead in comparison with the other policies. In addition, the edge execution policy outperforms local execution as the data size increases (i.e., data size > 8 MB). This is because our proposed model can select appropriate tasks to be executed remotely (i.e., at the edge or cloud server) while leaving others for local execution in an optimal way, which minimizes the sum cost of the entire system.

Prototype Implementation and Measurements
The network segment includes hardware and software such as NI USRP boards, which provide the ability to efficiently study and analyze/emulate LTE/LTE-A networks, 5G New Radio and other wireless technologies, and the GNU Radio, Amarisoft and srsLTE software packages, which provide an opportunity to study/test network protocols, signaling technologies and radio channel access. This experiment is conducted with the OpenAirInterface software package for the virtualization of mobile communication components. Additionally, a virtual environment is deployed in which the vEPC and Amarisoft are installed in the form of containers and virtual machines for convenient infrastructure management, which makes it possible to emulate the virtualized components of a wireless radio access network (HSS, vSGW, vPGW, vMME).
The developed prototype consists of three parts. The first part is the transport network, a logical transport network that links the network core and the radio access network (RAN). It is based on SDN technology and can flexibly and quickly manage all nodes; using an API, it can automatically change the configuration depending on the requirements. The second part is the core network, which consists of a virtualized segment; in this part, OpenAirInterface, srsLTE, Amarisoft or openEPC can be used, and each element of this zone can be represented as a docker container or a virtual machine. The last part is the radio access network, an area consisting of NI USRP-2954R software-defined radio (SDR) systems, on which an LTE access network, New Radio sub-6 GHz, LoRa, NB-IoT, etc. can be deployed; the OpenAirInterface, GNU Radio and srsENB solutions are used to organize the radio interfaces. The deployed 5G NSA network model includes a set of docker containers, in which the network elements are packed, together with the SDR hardware modules. A docker image packages a service with the dependencies and libraries directly required to run the application; for example, the docker container for an HSS element includes hss.h, security.h, etc. Each container is installed either in a separate VM, or multiple containers share one VM. It is important to note that the N26 interface is a key interface between the MME (EPC) and the AMF, acting as a signaling exchange point for UE radio control; therefore, these two elements must be placed at a sufficient distance to satisfy the synchronization requirements. In our case, software-defined radios (SDRs) were used as the network hardware devices.
The program control offers a wide range of tools that are used in our laboratory, for example, real-time or offline/post-processing, C++ and the USRP Hardware Driver. As shown in Figure 8, to realize the proposed system, we used the following devices in our prototype: • To create a rendering cluster for VR applications, we used several EDGE hosts connected by 10GbE interfaces. A medium-quality graphics card, the GeForce GT 1030, was used as the rendering core.
Using a cluster with different numbers of nodes, we measured the average energy consumption of a node per Mbps of information transfer. Then, after rendering, the image is broadcast to the end device. This architecture allows us to study the average content-delivery delay as a function of the number of EDGE hosts in the cluster.
During the experiments, the following results were obtained from the edge computing clusters:
• The average power consumption of one edge computing node per Mbps of information transfer.
• The average delay between the end device and the VR application server, depending on the number of hosts in the rendering cluster.
In general, the use of edge computing at base stations or near network nodes increases power consumption, and this consumption grows several-fold as more users access a given unit (Figure 9). However, the price of electricity falls annually while, at the same time, the capability of the hardware component base increases. To improve the QoS of the VR application, we have to address this energy consumption problem. Figure 10 shows the dependence of the response delay on the number of devices in the VR rendering cluster. With a larger cluster, the VR content is generated faster; this is due to computing parallelism.
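The flattening delay curve described above can be captured by a simple toy model, not fitted to our measurements: rendering work is split across the hosts, while a fixed serial part (encoding plus network transfer) does not shrink, so adding hosts yields diminishing returns (an Amdahl-style effect). The two timing constants below are assumed example values.

```python
# Toy model of content-delivery delay vs. rendering-cluster size.
# render_s: parallelizable rendering work for one frame batch (s),
# serial_s: fixed encode + network part that does not parallelize (s).
# Both constants are illustrative assumptions, not measured values.

def delivery_delay(hosts, render_s=0.80, serial_s=0.10):
    """Delay = non-parallelizable part + rendering split across hosts."""
    return serial_s + render_s / hosts

# Delay shrinks as hosts are added but can never drop below serial_s,
# which is why the measured curve in Figure 10 flattens out.
delays = [delivery_delay(n) for n in (1, 2, 4, 8)]
```

Under this sketch, doubling the cluster size halves only the rendering term, so the marginal benefit of each additional EDGE host decreases, which matches the qualitative shape of the measured delay curve.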

Conclusions
In this study, we proposed an efficient multi-player computation offloading approach for VR edge-cloud computing systems. Firstly, the computation offloading is formulated as an integer optimization problem whose objective is to minimize the weighted cost of the entire system in terms of time and energy. This model is latency- and energy-aware, in that it can select the execution place for the VR computing tasks in a way that achieves the best energy and latency efficiency; the proposed system is integrated to deliver the best VR user experience. In addition, a low-complexity multi-player computation offloading algorithm is designed to derive the optimal computation offloading decision. Finally, simulations have been conducted over a reliable environment for various scenarios to validate our proposed model and to show that the network latency and energy consumption can be reduced by up to 26.2% and 27.2% in comparison with edge execution, and by up to 10.9% and 12.2% in comparison with cloud execution, respectively.
In ongoing and future work, a new effective compression layer will be introduced, in which the offloading data will be compressed in the low-bandwidth state using an efficient algorithm; hence, the communication time and energy will be reduced and the performance of the entire system will be enhanced. In addition, a more general case will be considered in which there are multiple edge servers, and the mobility of the mobile devices will be handled, so that each mobile device may arrive and depart dynamically within a computation offloading period, which will be interesting and technically challenging.

Conflicts of Interest:
The authors declare no conflict of interest.