An Edge Caching Strategy Based on User Speed and Content Popularity for Mobile Video Streaming

Mobile users' demand for delay-sensitive video streaming places new requirements on mobile networks, such as architecture optimization. Edge caching is a new paradigm proposed to enhance the quality of service (QoS) for mobile users at the network edge. Because edge cache nodes have limited coverage, frequent handoffs between base stations aggravate network traffic overhead, leading to high latency and service interruption when mobile users browse videos. This paper first proposes a three-layer mobile edge network architecture and applies edge caching to video streams to build an efficient caching system. Given user mobility and the low-latency demands of mobile video streaming, we propose an edge caching strategy based on user speed and content popularity. Horizontally, the user's speed determines the spanned coverage areas and the buffer size of each edge cache; vertically, content popularity determines the priority of cached videos. Experimental results demonstrate that our caching strategy outperforms three classic caching schemes in terms of average delay and cache hit ratio in mobile video streaming scenarios.


Introduction
With the rapid development of mobile devices and network technology, we are experiencing an era of explosive mobile data traffic. According to Ericsson's report [1], global mobile video traffic topped 136 EB/month by the end of 2024, accounting for 74% of global mobile data. The demand for large-scale storage and transmission of video data has promoted the development of data centers, especially cloud data centers. In the traditional cloud computing model, data are sent to a remote cloud computing center; after processing and analysis, the results are sent back to the user terminal [2]. However, according to the Cisco white paper [3], by 2023, the number of devices connected to IP networks will be more than three times the global population, with 8.7 billion handheld or personal mobile devices and 4.4 billion machine-to-machine connections.
The rapid growth of networked devices makes the traditional cloud computing model unable to meet the low-latency and high-speed requirements of mobile video traffic. The mobile edge computing (MEC) model exploits storage and computing resources at the edge of the wireless access network to serve mobile users, which can make up for the deficiencies of cloud computing [4]. In recent years, edge caching has emerged as an effective way to cope with the rapid increase in bandwidth demand caused by data-intensive mobile video streaming [5]. Edge caching loads data from the data store into the cache on demand through the mobile edge server, effectively increasing the transmission rate and reducing packet loss and transmission delay. Edge caching strategies have been a hot research topic and are widely used in the industrial internet of things [6], vehicular ad hoc networks (VANETs) [7,8], and mobile edge networks [9][10][11].
In contrast to conventional live and on-demand video streaming that are consumed using TVs and PCs, mobile video streaming is generally watched by users on mobile devices with wireless connections, i.e., 3G/4G cellular or WiFi [12]. To meet user quality of experience (QoE) for mobile video streaming, edge network services should be stable, reliable, and low-cost [13]. With the rapid development of video streaming media technology, higher requirements are put forward for the smoothness, clarity, and content quality of video playback. Since video streaming is an online service, video latency is the most intuitive factor that affects the quality of user experience. Considering that the video caching service is also online, the cache hit ratio can better reflect whether users can access their favorite videos accurately under the appropriate network conditions and device memory size.
To ensure that mobile users can enjoy video services satisfactorily, we need to pay attention to their mobility characteristics and low-delay requirements. As the example in Figure 1 shows, in the edge caching scenario for mobile video streaming, video content is provided by the content provider or central cloud server in the core network and then cached in the macro base station (MBS) and the small base stations (SBSs) through the backhaul link. The SBS closest to the mobile user transmits the cached content to the user equipment via a high-speed link according to the user's request. Throughout this process, the MEC server is responsible for caching strategy scheduling, as well as content compression, decompression, encoding, and decoding. As current map navigation technology is relatively mature, our research on mobile edge caching is based on the premise that the user's moving path is known. Using edge caching strategies to improve mobile video streaming services currently faces the following problems: (1) When the user path is known, arranging for the base stations on the route to buffer appropriate video content so as to minimize the average delay is the key to improving QoE. (2) As users move, handoffs between base stations occur, which abruptly increase delay and interrupt video service. (3) With limited base station cache capacity, reasonably arranging cache content and updates to improve the cache hit ratio is an essential issue for mobile edge caching. Several relevant studies have addressed these questions. However, such studies are challenging because many different factors are involved, including user mobility patterns, video content characteristics, and mobile edge network characteristics. Previous research usually focused on a single aspect, such as the popularity of mobile video content [14,15], user mobility models [16], or mobile network strategies to support mobile video streaming [17].
The limitation of previous studies is that they did not consider the combined effects of user movement, video characteristics, and edge network deployment.
In this paper, we propose solving the above problems from the perspective of mobile users' speed and video content popularity. From a horizontal point of view, we select the caching base stations on the moving path according to the user's speed and decide how much video content to cache at each, which reduces the high delay and service interruption caused by handoffs between base stations. From a vertical perspective, each base station caches high-priority videos according to content popularity and uses fine-grained, hierarchical storage to improve the cache hit ratio. We combine these with a road traffic model to build a mobile video caching model based on popularity and speed awareness.
The main contributions of the paper are summarized as follows:
• First, we propose a hierarchical architecture for the mobile edge cache network, which is divided into three layers: cloud, edge, and user. The edge layer is composed of MBSs and SBSs, and users are distinguished by moving speed. The hierarchical structure can make full use of the cache space and improve the cache hit ratio.
• Second, we propose an edge caching strategy based on user speed and popularity for mobile video streaming (ECMSP). When the user path is known, we integrate and optimize the caching mechanism of mobile video streams from the perspectives of video popularity and user movement speed. This strategy can significantly reduce the high delay caused by user movement and the service interruption caused by handoffs between base stations.
• Finally, we implemented simulation experiments to compare our caching strategy with three other classic caching schemes. Experimental results show that our scheme achieves the best performance in terms of average latency and cache hit ratio in mobile video streaming scenes.
This paper is organized as follows: Section 2 introduces the related works. Section 3 presents the architecture and methodology of the proposed edge caching strategy based on user's speed and popularity for mobile video streaming. The experimental evaluation and performance results are presented in Section 4. Section 5 gives some conclusions and discusses potential future work.

Related Works
The content distribution technology caches popular video files in intermediate servers and agents, eliminating the need for repeated transmissions from remote servers. This can not only significantly save the transmission resource consumption of the core network but also improve the quality of user experience [18].
Content distribution networks (CDNs) have been well investigated on the Internet [19]. The work in [20] characterizes the mechanisms by which CDNs serve video content and the implications for video performance, especially emerging 4K video streaming. The authors of [21] compare HLS (HTTP Live Streaming) and RTMP (Real-Time Messaging Protocol) with and without a CDN; their results show that live video streaming performs better with a CDN than without one. However, we cannot simply apply traditional CDN-based content distribution techniques to mobile networks, as the legacy CDN-based content distribution mechanism is generally designed for traditional wired communication network architectures [22]. In mobile networks, the resources (e.g., storage, bandwidth, computing capacity) and the positions of the deployed servers are constrained. More importantly, the hit rate of cached content items can be rather low in mobile networks due to content dynamics, user mobility, and the limited number of users in a cell.
An efficient video content caching strategy is of great significance for improving the quality of user experience, and related research is emerging constantly.
Currently, the combination of mobile edge computing and video distribution caching has become a new approach to mobile video stream caching, which mainly handles low-latency and compute-intensive tasks by transferring cloud computing power and resources to the network edge. The researchers of [23] proposed a distributed caching architecture that brings content closer to the requester to reduce content delivery delays. They designed caching algorithms for multiple operators that cooperate by pooling their co-located caches, in an effort to aid each other and avoid the extensive delays caused by downloading content from distant servers. Guan et al. [24] proposed a preference learning-based edge caching strategy (PrefCache). It learns the user's preference for each video and, if a video meets the user's needs, places it in the cache. Yang et al. [11] proposed a group-partitioned video caching strategy algorithm (GPC) for vehicular networks. The algorithm first partitions the video requesters and then employs the Lagrange and Lambert functions to solve for the cache probability matrix as an optimization variable.
In addition, the key to caching mobile video streams is to pay attention to user mobility, and many related works formulate caching strategies according to the characteristics of user movement. Su et al. [25] proposed a cross-entropy-based caching scheme for vehicular networks. They analyzed the features of vehicular content requests based on the content access pattern, vehicle velocity, and road traffic density. The work in [26] presents a content prefetching strategy to deal with the dynamic topology in VANETs. The authors argue that a pre-caching scheme should be associated with multiple locations and multiple factors; using machine learning, they combine driver preferences with map navigation information to determine the content cache node. Yao et al. [27] chose the OBU (On-Board Unit) as the edge cache node. According to the probability of users' vehicles reaching different hotspots, they proposed a method based on prediction by partial matching (PPM) and chose the nodes that stay in the hotspots for a long time as cache nodes.
Although there have been some works on the cache hit ratio and cache delay in edge caching of mobile video streams, they often pay attention to only one aspect, mobility or popularity, and in particular neglect the impact of user movement speed. They also do not consider the impact of limited cache capacity.

Architecture and Methodology
In this section, we first give the layered architecture of the mobile edge caching network, and then explain the overall system of the mobile video stream edge caching. Next, we deploy the base station to the road traffic model and perform modeling. Finally, we propose our edge caching strategy for mobile video streams based on user's speed and popularity.

The Hierarchical Architecture of Mobile Edge Cache Network
Mobile edge caching provides a highly distributed caching environment close to mobile users, which can be used to deploy applications and services and to store and process content [28]. It effectively places cloud computing and cloud storage at the edge of the network so that content, services, and applications are accelerated, improving responsiveness from the edge. By caching data on the local MEC server in advance, users can download the requested content directly from the local cache, thereby reducing redundant transmission and achieving faster service response and a higher-quality user experience.
Since our caching strategy mainly focuses on the user's speed, we adapt the architecture of the mobile edge caching network, dividing it into three layers, as shown in Figure 2.
Cloud Layer: Mainly composed of the central cloud server; the video content is provided by the content provider in the core network.
Edge Layer: Composed of MEC servers and base stations. The MEC server is responsible for scheduling and coordination, and the base stations are responsible for receiving and buffering video content to provide to users. Base stations are deployed in groups of MBSs and SBSs.
User Layer: Users are distinguished by their moving speeds, and different speeds call for different caching strategies. Users hold devices that can play videos and use them to request content from the base stations above.
Based on the above three-layer mobile edge network, we apply mobile edge caching to mobile video streams to form a complete and efficient caching system. As shown in Figure 3, the system consists of three parts: video traffic, the mobile edge network, and the core network. Video content is distinguished by its bit rate, transmission delay, and popularity attributes, representing the mobile video streams currently on the market. The mobile edge network is responsible for managing user information, request queues, network control, and the caching and update strategies. The user model and the video model are combined: each user not only has attributes such as moving speed and location but also requests one of the pieces of video content. The system forms a user video request queue along the timeline and transmits the video from the cloud or edge cache to the user device at the specified location through a task request and response mechanism. The mobile edge server monitors the operation of the network and handles network congestion control and coordination with the road traffic model. In addition, the cache strategy includes two parts: a pre-caching strategy for when the cache space is not full and a cache update strategy for after the cache space is full. These three parts perform their duties and cooperate to ensure the stable operation of the caching system.
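To make the two-phase cache strategy concrete, the sketch below implements an assumed minimal interface (the class and method names are ours, not the paper's): pre-cache while space remains, then update by evicting less popular segments when a more popular one arrives.

```python
class EdgeCache:
    """Sketch of the two-phase strategy described above: a pre-caching phase
    while space remains, then a popularity-based update phase once full."""

    def __init__(self, capacity_mb):
        self.capacity = capacity_mb
        self.used = 0
        self.items = {}  # video_id -> (popularity, size_mb)

    def request(self, video_id, popularity, size_mb):
        """Return True on a cache hit; on a miss, try to admit the video."""
        if video_id in self.items:
            return True
        # Pre-caching phase: admit while free space remains.
        if self.used + size_mb <= self.capacity:
            self.items[video_id] = (popularity, size_mb)
            self.used += size_mb
            return False
        # Update phase: evict least popular items only if the newcomer
        # is strictly more popular than every victim it displaces.
        victims = sorted(self.items.items(), key=lambda kv: kv[1][0])
        freed, chosen = 0, []
        for vid, (pop, sz) in victims:
            if pop >= popularity:
                break
            chosen.append((vid, sz))
            freed += sz
            if self.used - freed + size_mb <= self.capacity:
                for v, s in chosen:
                    del self.items[v]
                    self.used -= s
                self.items[video_id] = (popularity, size_mb)
                self.used += size_mb
                break
        return False
```

For example, a 1000 MB cache holding a low-popularity 600 MB video evicts it when a more popular 600 MB video is requested.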

Network Model
Base stations and users are deployed according to the network model. The popularity of video content follows a Zipf distribution: the request probability of the i-th most popular content is $p_i = i^{-\gamma} / \sum_{j=1}^{F} j^{-\gamma}$, where $\gamma$ is the Zipf index [29]. The transmission distance of the video content can be calculated from the real-time position coordinates of MBSs, SBSs, and mobile users; the maximum transmission distances from MBSs to SBSs and from SBSs to UEs are denoted by $d_{MS}$ and $d_{SU}$, respectively. The spectral efficiency of the link from MBS $m$ to SBS $n$ (and, analogously, from SBS $n$ to UE $u$) is given by $e_{m,n} = \log_2\left(1 + P_{m,n}\, d_{m,n}^{-\alpha} / N_0\right)$, where $P_{m,n}$ and $P_{n,u}$ are the transmission powers, the path-loss exponents of the corresponding interference links are $\alpha_c$, $\alpha_v$, and $\alpha_{cv}$, and $N_0$ is the one-sided power spectral density of the additive white Gaussian noise.
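The Zipf popularity law and the spectral-efficiency expression above can be sketched as follows (the exact Zipf normalization and the omission of inter-cell interference are assumptions for illustration):

```python
import math

def zipf_popularity(num_contents, gamma):
    """Request probability of the i-th most popular content under a Zipf law
    with index gamma (standard normalization, assumed here)."""
    norm = sum(j ** (-gamma) for j in range(1, num_contents + 1))
    return [i ** (-gamma) / norm for i in range(1, num_contents + 1)]

def spectral_efficiency(p_tx, dist, alpha, n0):
    """Shannon spectral efficiency of one link with power-law path loss;
    interference from other links is omitted for brevity."""
    return math.log2(1 + p_tx * dist ** (-alpha) / n0)
```

A smaller Zipf index flattens the popularity curve, and spectral efficiency decreases monotonically with link distance, as expected.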

Cache Hit Ratio
The cache hit ratio is the most intuitive indicator of the effectiveness of the cache strategy. When a UE requests a video segment in $F$, the cache hit probability can be derived as follows. Based on the segmented transmission technology of video streaming media [30], video content is cached in block groups. Assume that, in the i-th popularity sequence, there are video contents of diverse sizes in the $G_i$ ($i \in [1, N]$) groups, while the average video size of group $g$ ($g \in [1, G_i]$) is $S_{ig}$. The ratio of the video clips in group $g$ to the aggregate files is denoted by $s_{ig}$. Furthermore, given a cache memory of $S_n$ and a requested segment of $F_i$, the cache hit probability of UEs can be written as $\sum_i \sum_g p_i\, c_{nip}\, s_{ig}$, subject to the cache capacity $S_n$, where the caching strategy for all UEs is expressed as $C = [c_{nip}]_{N \times U \times [G_1, G_2, \cdots, G_u]^T}$. Hence, the cache hit probability of a video requested from an SBS is the sum of these terms over its cached groups.
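A minimal sketch of the hit-probability sum above, with `cache_indicator` playing the role of the strategy entries $c_{nip}$ restricted to one SBS (the names and shapes are assumptions for illustration):

```python
def hit_probability(popularity, cache_indicator, size_ratio):
    """Cache hit probability of one SBS: sum over popularity ranks i and
    size groups g of p_i * c_ig * s_ig, where c_ig in {0, 1} marks whether
    group g of rank i is cached and s_ig is the group's size ratio."""
    return sum(
        popularity[i] * cache_indicator[i][g] * size_ratio[i][g]
        for i in range(len(popularity))
        for g in range(len(size_ratio[i]))
    )
```

Caching every group yields a hit probability of 1, and caching nothing yields 0, which bounds the metric as expected.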

Average Delay
The average delay is the most important indicator affecting the quality of user experience. After obtaining the video content $F_f$, the MBS $m$ sends content $f$ to the SBS $n$. The delay of acquiring content $F_f$ from $m$ to $n$ can be obtained as $t_{m,n}^{f} = s_f / bw_{m,n}$, where the transmission distance from $m$ to $n$ is denoted by $d_{m,n}$ and the average bandwidth along the route from $m$ to $n$ is denoted by $bw_{m,n}$. Whether a copy of video content $f$ must be sent is expressed by $req_{n,f}$, which is given by
$$req_{n,f} = \begin{cases} 1, & \text{if there is a user request} \\ 0, & \text{otherwise} \end{cases}$$
Here, $req_{n,f} = 1$ means that SBS $n$ has completely cached video content $f$. The transmission of video content from the SBS to the UE follows a similar process. Summing these terms over all requests and dividing by their number gives the average delay of video caching.
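The delay terms above can be sketched as size-over-bandwidth computations; the two-hop miss path (MBS to SBS, then SBS to UE) and the megabyte-to-megabit conversion are assumptions for illustration:

```python
def transfer_delay(size_mb, bandwidth_mbps):
    """Transmission delay of one content copy over a link, in seconds:
    size divided by link bandwidth (MB converted to Mb)."""
    return size_mb * 8 / bandwidth_mbps

def average_delay(requests):
    """Average over (size_mb, bw_backhaul, bw_access, hit) tuples: a cache
    hit pays only the SBS->UE hop, a miss also pays the MBS->SBS hop."""
    total = 0.0
    for size, bw_ms, bw_su, hit in requests:
        d = transfer_delay(size, bw_su)
        if not hit:
            d += transfer_delay(size, bw_ms)
        total += d
    return total / len(requests)
```

With a 50 Mbps access link, a 500 MB hit costs 80 s of pure transmission; a miss adds the 100 Mbps backhaul hop on top.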

Problem Derivation
Our research work aims to maximize the hit ratio of the mobile video streaming cache and minimize the average transmission delay for mobile users to obtain videos. In this section, we formulate the problem of maximizing the video cache hit ratio and minimizing the transmission delay, and present a mobile video caching strategy algorithm based on speed and popularity to obtain the optimal variable $c_{nip}$.
The joint cache-hit-ratio maximization and caching delay minimization problem P combines the two objectives above under the cache capacity constraints. To find the solution to problem P, we first give a supporting theorem; then, we apply the relaxation variable $\eta_n$ to find the optimal result via the Karush-Kuhn-Tucker (KKT) conditions. With $\eta_n$, we obtain the Lagrange function $L(A, \eta)$, which couples the objective with the constraints. A KKT-based solution to the above optimization problem must satisfy the following conditions: the first, $\partial L(A,\eta) / \partial c_{nip} = 0$, is the necessary condition for obtaining the extreme value; the second denotes the coefficient constraint; and the last requires that the file size not exceed the memory size of users. The optimized model above can then be evaluated from this stationarity condition.
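To illustrate how the KKT conditions behave, consider the linear relaxation of a single-SBS capacity-constrained cache placement; stationarity then reduces to a benefit-density threshold set by the multiplier, solved by the fractional-knapsack sketch below (a deliberate simplification, not the paper's full multi-constraint solver):

```python
def kkt_threshold_cache(benefit, size, capacity):
    """Solve max sum(b_i * x_i) s.t. sum(s_i * x_i) <= capacity, 0 <= x_i <= 1.
    KKT stationarity gives x_i = 1 when b_i/s_i exceeds the multiplier eta,
    x_i = 0 when below it, and a fractional value exactly at the threshold."""
    order = sorted(range(len(benefit)),
                   key=lambda i: benefit[i] / size[i], reverse=True)
    x, left = [0.0] * len(benefit), capacity
    for i in order:
        take = min(1.0, left / size[i])
        x[i] = take
        left -= take * size[i]
        if left <= 0:
            break
    return x
```

With benefits (10, 6, 4), sizes (5, 4, 3), and capacity 8, the densest item is cached fully, the next fractionally, and the last not at all.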

Algorithm Design
In this part, we first explain the caching mechanism of our proposed caching strategy, and then propose the ECMSP algorithm to maximize the profit function by setting and solving the caching probability matrix.
When a user moves from area A to area C, as shown in Figure 5, the duration of the video requires the user to span the areas covered by three base stations. We calculate the user's stay time in each area from the user's average speed, and then calculate the size of the video content that needs to be cached in each area from the transmission rate. In the areas at the junctions of base stations, additional video content needs to be cached to absorb speed estimation errors and the handoffs between base stations. When multiple pieces of video content are candidates for caching, the content with higher priority is selected according to video popularity. Next, we give the basic algorithm of the ECMSP strategy, which is mainly divided into two parts: grouping and optimization. Our algorithm is shown in Algorithm 1. We group MBSs and SBSs, which cooperate to cache popular videos. First, we set the buffer capacities R and S of the different edge base stations, the total video popularity order p, and the other attributes $F_f$ of each video content. We initialize the communication radii $d_M$ and $d_N$ of MBSs and SBSs, the Poisson distribution intensity λ, the users' position coordinates and speeds $U_u$, the background thermal noise power $\delta^2$, etc. After a series of iterations updating the caching strategy matrix, we obtain the optimal caching strategy matrix $\hat{c}_{nip}$ and the optimal total revenue function value $p_{total,max}$.
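The stay-time computation described above can be sketched as follows; the handoff margin parameter is our assumed stand-in for the "additional video content" cached at cell junctions:

```python
def cache_plan(areas, speed_mps, rate_mbps, margin_mb=50):
    """For each coverage area (name, length_m) on the known path:
    stay time = area length / average speed, and the video volume to
    pre-cache there = stay time x transmission rate (Mbps -> MB/s),
    plus an assumed safety margin for handoffs at cell boundaries."""
    plan = {}
    for name, length_m in areas:
        stay_s = length_m / speed_mps
        plan[name] = stay_s * rate_mbps / 8 + margin_mb
    return plan
```

For a user crossing two 300 m and 600 m cells at 15 m/s with a 50 Mbps link, the plan allocates 175 MB and 300 MB, respectively.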
Algorithm 1 Edge caching strategy based on mobile speed and popularity
1: Get the initialization matrix dimensions
2: beacon = 1
3: while beacon == 1 do
4:   while m < M do
5:     m++
6:     while n < N do
7:       n++
8:       while g < G_n(n)_max do
9:         g++
10:        Derive the cache probability ĉ_nip in the manner of (9)
11:        if c_nip == 0 then
12:          Restrict the optimal caching strategy matrix
13:          c_nip = 0
14:        end if
15:        Refresh matrix C and receive a new C_g
16:      end while
17:    end while
18:  end while
19:  if C_g == C then
20:    beacon = 0
21:  end if
22: end while
23: Calculate the maximum value of the total revenue function p_total,max in (8)
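Algorithm 1's control flow can be sketched as a fixed-point sweep; the per-entry update rule (Equation (9) in the text) is abstracted behind a caller-supplied function, since its closed form is not reproduced here:

```python
def ecmsp_iterate(update_entry, shape, max_rounds=100):
    """Control-flow skeleton of Algorithm 1: sweep every (m, n, g) entry,
    recompute its cache probability via update_entry(matrix, m, n, g),
    clamp negative values to zero, and stop once a full sweep leaves the
    matrix unchanged (the C_g == C convergence test)."""
    M, N, G = shape
    C = [[[0.0] * G for _ in range(N)] for _ in range(M)]
    for _ in range(max_rounds):
        changed = False
        for m in range(M):
            for n in range(N):
                for g in range(G):
                    new = max(0.0, update_entry(C, m, n, g))
                    if new != C[m][n][g]:
                        C[m][n][g] = new
                        changed = True
        if not changed:  # the strategy matrix converged
            return C
    return C
```

With a constant update rule, the sweep converges after one round, which makes the stopping condition easy to verify.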

Simulations and Results
In this section, we will evaluate the cache performance of our edge caching strategy in a mobile simulation network environment from different performance indicators.

Parameters' Settings
Based on the above caching strategy, we implemented our simulation in Python. The application scenario is mobile video pre-caching in an edge computing network environment. In this simulated urban traffic environment, mobile users move at different speeds because they travel in different vehicles. These users browse a variety of videos through handheld mobile devices. By tracking a user's speed, we cache video content in advance in the base stations on the user's path, thereby reducing the delay in obtaining the video and the playback stalls, and improving the quality of the user experience. Playback is more likely to freeze when the video is large; given the differing popularity and bit rates of video content, we set the size of a video content package in the simulation to 500 MB to 1000 MB. The mobile video behavior data set used in the experiment is a portion of real data collected by a video provider company [31]. We observe that the popularity of mobile video content follows a power-law distribution, as shown in Figure 6. Following the 4G-LTE communication standard [32], we set the wireless uplink/downlink transmission rate between SBSs and the mobile user's handheld device to 50 Mbps, and the wired transmission rate among SBSs, MBSs, and the central cloud server to 100 Mbps [33]. In a natural environment, base station signal coverage is affected by the density of buildings and population; in the simulation, we set the service coverage radii of SBSs and MBSs to 150 m and 500 m, respectively. Furthermore, the experiment set up 7 MBSs and 72 SBSs in a cellular distribution. The speeds of handheld mobile device users differ and follow a Poisson distribution. The detailed simulation parameter settings are shown in Table 1.
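The stated parameter settings can be collected as follows; any value not given in the text (such as the within-range distribution of video sizes) is an assumption:

```python
import random

# Simulation constants taken from the settings above.
PARAMS = {
    "wireless_rate_mbps": 50,   # SBS <-> UE (4G-LTE)
    "wired_rate_mbps": 100,     # SBS/MBS <-> central cloud
    "sbs_radius_m": 150,
    "mbs_radius_m": 500,
    "num_mbs": 7,
    "num_sbs": 72,
    "video_size_mb": (500, 1000),
}

def draw_video_size(rng=random):
    """Draw a video package size within the stated 500-1000 MB range;
    the uniform distribution within the range is an assumption."""
    lo, hi = PARAMS["video_size_mb"]
    return rng.uniform(lo, hi)
```

Seeding the generator makes simulation runs reproducible when comparing caching schemes.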

Comparison Schemes
To more intuitively evaluate the caching performance of our proposed strategy (ECMSP), we compare it with three other solutions:
• RPCC (Random Proactive Content Caching scheme): sorts the content according to its popularity and randomly caches highly popular content.
• MLCP (Machine Learning-based Content Prefetching scheme [26]): uses machine learning algorithms to detect network traffic and a multi-location, multi-factor prefetching scheme to pre-cache content in the VANET.
• CCMP (Cooperative Caching based on Mobility Predicting [27]): selects the base station with the longest stay on the user path as the cache node.
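For reference, the RPCC baseline as described can be sketched like this (the cutoff defining "highly popular" is an assumption, since the original scheme does not state one):

```python
import random

def rpcc_cache(popularity, capacity_items, top_fraction=0.3, rng=random):
    """RPCC baseline: rank contents by popularity, then cache a random
    subset of the most popular ones. top_fraction is an assumed cutoff
    for 'highly popular' content."""
    ranked = sorted(range(len(popularity)),
                    key=lambda i: popularity[i], reverse=True)
    pool = ranked[: max(capacity_items, int(len(ranked) * top_fraction))]
    return set(rng.sample(pool, min(capacity_items, len(pool))))
```

Because the selection within the popular pool is random, RPCC ignores where on the user's path the content is needed, which is what our strategy exploits.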

Performance Metrics
• Transmission delay: For video streaming services in a mobile edge computing environment, transmission delay is the most intuitive performance indicator affecting the quality of user experience, and it is an essential reflection of the performance of the caching strategy. In the experiment, the delay is expressed as the time interval from when the mobile user requests the video content to when the content is obtained, which mainly includes request transmission, task processing, and content transmission.
• Cache hit ratio: Since we are testing the performance of the edge caching scheme, we select the cache hit ratio of SBSs as the indicator of cache performance, which reflects the effectiveness of the video content pre-caching process.
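Both metrics are straightforward to compute from a per-request log; this sketch assumes each entry records the end-to-end delay and whether the SBS cache served the request:

```python
def metrics(log):
    """Compute the two indicators from a request log of (delay_s, sbs_hit)
    tuples: the mean end-to-end delay and the SBS cache hit ratio."""
    delays = [d for d, _ in log]
    hits = sum(1 for _, h in log if h)
    return sum(delays) / len(delays), hits / len(log)
```

For instance, a two-request log with one hit yields a hit ratio of 0.5 and the arithmetic mean of the two delays.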

Average Delay
As shown in Figure 7a, we tracked 300 mobile users; the figure illustrates the average delay over simulation time. The delays of the four schemes all rise first and then fall from the first second. At the beginning of the simulation, the caches in the MBSs and SBSs are empty; in all schemes, the base station must ask the central cloud server to serve mobile users' requests according to its strategy. Therefore, the average delays of the schemes differ little at first and start to increase as mobile user requests increase. As the simulation continues, the caching strategies of the different schemes all begin to work and reduce the delay to varying degrees. Among them, the average delay of our proposed ECMSP is the lowest. Because RPCC randomly pre-caches video content with high popularity and does not consider the spatial aggregation of video content, it has a sizable average delay and poor experimental results. MLCP considers the video traffic distribution of the cache network but ignores the traffic changes caused by user movement, so its pre-caching effect is relatively poor. The PPM-based pre-caching strategy of CCMP ignores the influence of mobile users' differing speeds, which result in different stay times in different areas. Furthermore, the peaks of the schemes appear at different times, and schemes whose delay peaks appear earlier have stronger dynamic adaptability, which shows that our ECMSP has the best dynamic adaptability and the strongest service capability. In Figure 7b, increasing the number of mobile users raises the average delay of all caching schemes, but our ECMSP still achieves the lowest latency; as mobile users increase, user requests for video traffic increase.
On the one hand, base station connection handoffs and service interruptions may increase. On the other hand, the increase in video traffic may exceed the cache capacity of the base station, so the cache is updated more frequently. The increase in mobile user requests nevertheless has little impact on our strategy, which we believe is mainly due to our comprehensive pre-caching decision mechanism and effective cache update strategy. Figure 8a shows the cache hit ratio results with 300 mobile users. The time at which the inflection point of each curve appears illustrates the dynamic adaptability of the different caching schemes. The inflection point of the ECMSP curve appears earliest, indicating that it is the first to reach its peak cache hit ratio. The cache hit ratio of RPCC is the lowest because its caching is random and does not focus on hot videos. CCMP's cache hit ratio is better than MLCP's, but our ECMSP achieves the highest cache hit ratio. This is because ECMSP first caches highly popular video content in the SBSs and then caches video content of an appropriate size in the base stations on the user's path according to the user's speed, thereby avoiding the drop in cache hit ratio caused by handoffs between base stations. As shown in Figure 8b, as the number of mobile users increases, the cache hit ratios of RPCC, MLCP, and CCMP decrease; as in the analysis of Figure 7b, this is caused by the increase in requests for video content. However, the cache hit ratio of our ECMSP gradually stabilizes as the number of users rises and remains at a relatively high level, since ECMSP comprehensively considers the popularity of video content and the speed of mobile users, updates the cache in a timely manner, and can adapt to changes in the dynamic environment.
Figure 9a,b illustrate the impact of changing the cache capacity of SBSs on the four schemes' average delay and cache hit ratio when the number of mobile users is 300. As the cache capacity of SBSs increases, the average latency of all schemes drops significantly, especially for RPCC, whose caching effect was previously poor because its strategy is less optimized than the others'. For MLCP, CCMP, and ECMSP, the average delay is also reduced to varying degrees, though less markedly; nevertheless, our ECMSP still has the lowest average delay. Increasing the cache capacity of SBSs has a more significant impact on the cache hit ratio because the larger the cache capacity, the more video content can be cached. As the SBS cache capacity increases from 2 to 6 GB, our ECMSP cache hit ratio increases by 21% and remains higher than the other three solutions. The experimental results can provide valuable experience for improving the performance of caching schemes.

Discussion
This paper has proposed an edge caching strategy for mobile video streaming, combined with map navigation services, which can cache the corresponding content in advance on the base station on the user's route based on the user's moving speed and the popularity of the video content when the user's moving path is known. The purpose of this research work is to reduce the impact of the handoffs between base stations and service interruption during user movement, reduce the time delay for users to obtain video, improve the cache hit ratio of edge base stations, and improve the quality of user experience through caching strategies.
In order to evaluate the effectiveness of this strategy, we conducted a comparative experiment against methods from the recent literature. Although it was a simulation experiment, the data set of video content in our experiment was selected from real data collected by a company in the industry. By comparison, our proposed caching strategy achieved better performance than the other solutions in terms of average delay and cache hit ratio. A limitation is that only road models could be used in the simulation of mobile scenes, so real road traffic factors could not be considered, such as signal blocking by buildings or vehicles and interference between roadside signal sources. In addition, testing the performance of the caching strategy in the field raises other real challenges. On the one hand, obtaining users' mobility information involves their security and privacy; a strategy is needed that can obtain the user's moving path in advance without infringing on the user's interests. On the other hand, some videos cannot be shared between users due to copyright issues, which involves the intellectual property rights of the video content, and a specific strategy is needed to address this. Therefore, further in-depth research on actual cases is needed to confirm the effectiveness of the caching strategy.
In addition to the improvements in the experimental part, our future work will focus on the following. First, most current video content management and distribution technologies are based on traditional CDN systems, and there is no recognized solution for mobile scenarios; applying some mature CDN technologies to mobile edge computing is one of our future research directions. Second, our caching strategy is based on the premise that the user's moving path is known, which requires real-time map navigation services. Users' current mobile location information is controlled by the operators of map navigation services, so a specific incentive mechanism is needed to obtain this information [34]. Furthermore, we consider optimizing the pre-caching design by refining the attributes of users, services, and content, such as expanding the pre-caching capacity in densely populated areas, or setting aside additional capacity for emergency tasks to ensure a timely response; as some video content has prominent regional characteristics, we need to add regional attributes to videos to increase their cache priority in certain regions. In addition, applying machine learning algorithms to the cache update mechanism is also a promising direction.

Conflicts of Interest:
The authors declare no conflict of interest.