Dual Threshold Adaptive Dynamic Migration Strategy of Virtual Resources Based on BBU Pool

Abstract: The rapid development of mobile communications and the continuous growth of service demand lead to an increase in the number of base stations (BSs). Through virtualization and cloud technology, virtual Baseband Units (BBUs) are deployed on virtual machines (VMs) to build a BBU pool that shares hardware resources, which not only saves BS construction costs but also facilitates management and control. However, server resource utilization in the pool that is too high or too low not only affects the performance of the virtual BBUs but also increases the maintenance cost of the physical equipment. In this paper, BBUs are virtualized to construct a virtual BBU pool based on the OpenStack cloud architecture and a dual threshold adaptive dynamic migration strategy is proposed for this scenario. Upper and lower thresholds are established for the resource utilization of the servers in the pool and the strategy determines whether dynamic migration is triggered according to the resource utilization of each compute node. If migration is triggered, the strategy selects the virtual resource to be moved out and the target node, realizing dynamic migration that balances the server load and saves energy. The proposed migration strategy is simulated on CloudSim and the experimental results show that the strategy effectively reduces the number of migrations and the migration time while also reducing energy consumption and SLA violations. The strategy was successfully deployed on the OpenStack platform, where it performs dynamic migration autonomously, instead of through manual operations, to save the overall energy consumption of the BBU pool.

In this paper, we virtualize BBU functions (such as baseband processing and high-level protocol processing) on VMs to build a centralized BBU pool and use the OpenStack cloud platform for unified control and deployment of the virtual resources in the pool. In this scenario, this paper proposes a dual threshold adaptive dynamic migration strategy, which sets upper and lower thresholds for the resource utilization of the physical nodes in the BBU pool and realizes adaptive dynamic migration. The main contributions of this paper are summarized as follows: (1) We virtualize the BBU and build a virtual BBU pool combined with the OpenStack cloud architecture. In the OpenStack-based BBU pool, all the processing resources provided by the physical servers can be managed and allocated by a unified real-time virtual operating system.
(2) We propose a dual threshold adaptive (DTA) dynamic migration strategy divided into three parts. The first part is an adaptive migration trigger strategy combined with the Kalman filter algorithm, which prevents an instantaneous peak or dip in server resource utilization from triggering unnecessary migration and effectively determines when to trigger migration. The second part is the VM selection strategy, based on the number of migrations and the migration time. The last part is the selection of the target server node, which combines prediction and resource utilization to make a comprehensive judgment.
(3) The proposed migration strategy is first evaluated by simulation on the CloudSim platform, where it improves energy consumption and SLA violation and significantly reduces the total number of migrations and the migration time. The strategy is then implemented on OpenStack and the results show that it avoids invalid migrations, replaces manual operation with adaptive dynamic migration and achieves load balancing.
The rest of the paper is organized as follows. Section 2 presents a system scenario of the BBU pool based on OpenStack and introduces related work on migration and the parameter definitions. In Section 3, we describe the migration strategy in detail. Section 4 shows the simulation results on CloudSim and the implementation results on OpenStack. Conclusions are drawn in Section 5.

Virtual BBU Pool based on OpenStack Architecture
This paper adopts the distributed architecture of BBU and RRH, as shown in Figure 1. BBUs are clustered as a BBU pool in a centralized location. RRHs support high capacity in hot spots, while the virtualized BBU pool provides large-scale collaborative processing and cooperative radio resource allocation [15]. The pool has at least one control and management entity that controls the performance of all virtual resources.
Through virtualization technology, a virtual layer is built on a general server cluster of the x86 architecture. Multiple VMs run on the virtual layer and a virtualized BBU pool is constructed from the running VMs deployed on many commodity servers. The BBU pool is integrated with OpenStack and remotely connected to the RRHs. The resources used by each VM are planned according to the resource allocation plan of the BBU pool and support the different functions of the baseband processing units. The OpenStack cloud platform solves the problem of the automatic creation of virtual resources.
Figure 2 shows the virtual BBU pool architecture based on OpenStack, which can manage and deploy the pool and implement dynamic migration. With the OpenStack cloud architecture, physical servers can be deployed as compute nodes; these nodes and the virtual BBU pool deployed on them can be managed and controlled by a controller node (also deployed on a physical server).

Model of Power
The research in Reference [16] shows that the power consumption of a physical machine can be described by a linear relationship between power consumption and CPU utilization. The study also points out that an idle physical machine consumes approximately 70% of the power it draws when fully utilized. The power consumption is defined as a function of CPU utilization in Equation (1):

P(u) = k · P_max + (1 − k) · P_max · u, (1)

where P_max is the maximum power of a server in the running state, k is the fraction of power consumed by an idle physical server and u is the CPU utilization. Because the CPU utilization changes over time due to workload variability, u(t) is a function of time. The total energy consumption can then be defined by Equation (2):

E = ∫ P(u(t)) dt. (2)

According to this model, the energy consumption is determined by the CPU utilization.
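As a concrete illustration, the model of Equations (1) and (2) can be sketched as follows. P_max = 250 W and the 300 s sampling interval are illustrative assumptions, not values from the paper; k = 0.7 follows the 70% idle-power observation above.

```python
# Sketch of the linear power model of Equations (1) and (2).
# P_max (250 W) and the sampling interval are illustrative assumptions.

def power(u, p_max=250.0, k=0.7):
    """Power draw (W) at CPU utilization u in [0, 1]: P(u) = k*P_max + (1-k)*P_max*u."""
    return k * p_max + (1.0 - k) * p_max * u

def energy(utilization_trace, dt=300.0, p_max=250.0, k=0.7):
    """Total energy (J) over a utilization trace sampled every dt seconds (Equation (2))."""
    return sum(power(u, p_max, k) * dt for u in utilization_trace)
```

At u = 0 the model returns k·P_max (the idle power) and at u = 1 it returns P_max, matching the linear relationship described in Reference [16].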

Cost of VM Dynamic Migration
Dynamic migration of VMs allows transferring VMs between physical nodes with short downtime. However, dynamic migration negatively affects the performance of applications running in a VM during a migration. Studies in Reference [17] found that the reduction in performance and the downtime depend on the behavior of the applications. The length of a dynamic migration depends on the total amount of memory used by the VM and the available network bandwidth. In order to avoid performance degradation, a VM migration cost model is adopted from Reference [17] to help choose migratable VMs. As the authors of Reference [18] note, a single VM migration can cause performance degradation estimated as an extra 10% of CPU utilization, which implies that each migration may cause SLA violations. Therefore, it is crucial to minimize the number of VM migrations and to select the VM using the least memory. Thus, in the migration strategy of this paper, the performance degradation of VM j is defined in Equations (3) and (4):

U_dj = 0.1 · ∫_{t_0}^{t_0 + T_mj} u_j(t) dt, (3)

T_mj = M_j / B_j, (4)
where U_dj is VM j's total performance degradation, t_0 is the time when the migration starts, T_mj is the time taken to complete the migration and u_j(t) is the CPU utilization at time t. M_j is the amount of memory used by VM j and B_j is the network bandwidth. The smaller the memory of the VM to be migrated, the shorter the migration time and the smaller the impact on the system.
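Equations (3) and (4) can be evaluated directly from a sampled utilization trace. The discretization below is a sketch; the 10% degradation factor is the one cited from Reference [18] above.

```python
def migration_time(mem_mb, bandwidth_mbps):
    """T_mj = M_j / B_j: VM memory (MB) over available bandwidth (MB/s), Equation (4)."""
    return mem_mb / bandwidth_mbps

def performance_degradation(cpu_trace, dt, t0, t_m):
    """U_dj = 0.1 * integral of u_j(t) over [t0, t0 + T_mj], Equation (3),
    approximated from a CPU utilization trace sampled every dt seconds."""
    start = int(t0 / dt)
    end = int((t0 + t_m) / dt)
    return 0.1 * sum(cpu_trace[start:end]) * dt
```

For example, a VM using 1024 MB of memory on a 128 MB/s link takes about 8 s to migrate, which is why the selection strategy in Section 3.2 prefers the VM with the smallest memory footprint.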

SLA Violation (SLAV) Metric
In a cloud environment, since many users compete for resources, each cloud service provider needs to ensure that application requirements are met, which is usually defined in the form of an SLA [19]. A violation of the SLA between the resource provider and the user occurs when the performance requested for a resource exceeds the available capacity; the metric is given by Equation (5):

SLAV = [ (1/J) Σ_{x=1}^{J} T_sx / T_ax ] × [ (1/N) Σ_{i=1}^{N} Cd_i / Cr_i ], (5)

where J is the number of hosts, T_sx is the total time during which the utilization of host x reaches 100% and T_ax is the lifetime (total active time) of host x. When host utilization reaches 100%, application performance is bounded by the host. N is the number of VMs, Cd_i is the performance degradation of VM i over all its migrations (estimated as 10% of its CPU utilization) and Cr_i is the total CPU requested by VM i. Table 1 shows the parameter definitions that will be used in Section 3.
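The two factors in Equation (5) can be computed as below. This is a minimal sketch, assuming the product form of the two ratios described in the text (time-at-full-utilization per host and migration-induced degradation per VM); the symbols follow the definitions above.

```python
def slav(full_load_times, active_times, degradations, requested):
    """SLA violation metric, Equation (5): SLAV = SLATAH * PDM.
    SLATAH: average of T_sx / T_ax over the J hosts.
    PDM:    average of Cd_i / Cr_i over the N VMs."""
    slatah = sum(ts / ta for ts, ta in zip(full_load_times, active_times)) / len(active_times)
    pdm = sum(cd / cr for cd, cr in zip(degradations, requested)) / len(requested)
    return slatah * pdm
```

For instance, a single host spending 10 of its 100 active hours at 100% utilization, with one VM whose migrations degraded 1 of its 10 requested CPU units, gives SLAV = 0.1 × 0.1 = 0.01.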

Migration Strategy Description
The dual threshold adaptive (DTA) dynamic migration strategy proposed in this paper is divided into three parts: (1) an adaptive trigger migration strategy based on Kalman filter prediction; (2) the selection of the VM to be migrated; and (3) the selection of the target physical node. These parts are discussed in the following sections.

Adaptive Trigger Migration Strategy based on Kalman Filter Algorithm Prediction
It is necessary to monitor the running state of each physical compute node and to set migration conditions on the resource state to determine whether dynamic migration should be triggered. In this paper, the upper threshold is set to alleviate the increased energy consumption and SLA violations that a physical server suffers under high load, ensuring that the resource utilization of the running nodes in the pool meets demand while saving energy. A lower threshold is also set: when a server's utilization falls below it, the VMs on that compute node are migrated elsewhere and the physical server is shut down or put to sleep to reduce the energy consumption of the physical cluster. Because the resource utilization of physical servers is not stable, a momentarily high or low CPU or memory utilization may cause unnecessary migration, resulting in wasted system energy. In this paper, the Kalman filter algorithm is used to predict resource utilization; it needs only a small amount of data to obtain a prediction starting point (more data improves the result) and it adjusts itself and sets its parameters automatically from continuous observation. Because server resource utilization is instantaneous, prediction can effectively prevent invalid migration of virtual resources. The predicted migration trigger strategy based on the Kalman filter algorithm is used to determine whether the server resource utilization is higher or lower than the threshold.
The Kalman filter takes the least mean square error as the optimality criterion to find a set of recursive estimation equations. The basic idea is to use the state space model of signal and noise to update the estimate of the state variables from the estimate at the previous time and the observation at the present time, thereby obtaining the estimate for the current time. The estimator is treated as a linear system and γ(k) represents the resource utilization of the server at the current time in the migration strategy.
First, the predicted value γ(k|k−1) is set to the optimal estimate of the previous state with Equation (6), where γ̂(k−1) is the last estimated resource utilization:

γ(k|k−1) = γ̂(k−1). (6)
Electronics 2020, 9, 314

Then, the estimate at the current time is calculated by Equation (7), where γ(k) is the observation at the current time:

γ̂(k) = γ(k|k−1) + Kg(k) [γ(k) − γ(k|k−1)]. (7)

Kg(k) is the Kalman gain, calculated by Equations (8) and (9), where R and Q are the variances of the system noise and the observation noise, respectively:

Kg(k) = P(k|k−1) / (P(k|k−1) + Q), (8)

P(k|k−1) = P(k−1) + R. (9)

P(k) represents the deviation of the filter and is updated by Equation (10) to continue the iterative filtering:

P(k) = (1 − Kg(k)) P(k|k−1). (10)
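The scalar recursion of Equations (6)-(10) can be sketched as follows. The noise variances r and q and the initial deviation are illustrative values, not taken from the paper; following the text, R is the system-noise variance and Q the observation-noise variance.

```python
class ScalarKalman:
    """One-dimensional Kalman filter for server utilization, Equations (6)-(10).
    r: system-noise variance (R), q: observation-noise variance (Q)."""

    def __init__(self, r=1e-4, q=1e-2, est=0.0, p=1.0):
        self.r, self.q, self.est, self.p = r, q, est, p

    def step(self, observed):
        pred = self.est                   # Eq (6): prediction = last estimate
        p_pred = self.p + self.r          # Eq (9): prior deviation grows by system noise
        kg = p_pred / (p_pred + self.q)   # Eq (8): Kalman gain
        self.est = pred + kg * (observed - pred)  # Eq (7): corrected estimate
        self.p = (1.0 - kg) * p_pred      # Eq (10): update deviation
        return self.est
```

Fed a stream of utilization samples, the estimate quickly tracks a stable level while smoothing out an isolated spike, which is exactly the behavior the trigger strategy relies on.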
The migration strategy proposed in this paper judges the relationship between resource utilization and the thresholds combined with prediction. The detailed process is shown in Figure 3 as a trigger flow. In the proposed trigger strategy, the load utilization of each compute node is first polled and monitored and the CPU and memory utilization are used as the migration trigger criteria. The relationship between the utilization and the thresholds is then judged. If one or more parameters are not within the threshold range, the prediction model is used to determine the relationship between the load utilization and the threshold at the next moment. If the predicted value exceeds the upper threshold, the compute node is considered overloaded and D_CPU/D_Mem, the difference between the overloaded resource utilization and the upper threshold, is calculated. Otherwise, the strategy enters the underload judgment process with the prediction model. If the predicted value is below the lower threshold, the node is identified as an underloaded node.
To implement the migration strategy, the VM to be migrated is selected from the overloaded/underloaded compute node according to the VM selection strategy of Section 3.2 and then the target server node is determined according to Section 3.3 to realize dynamic migration. The proposed strategy defines three overload trigger cases: (a) type 0: both CPU and memory utilization exceed the upper threshold, (b) type 1: only CPU utilization exceeds the upper threshold and (c) type 2: only memory utilization exceeds the upper threshold. If one or more utilization parameters are below the lower threshold, dynamic migration is triggered and the underloaded node is shut down to reduce energy consumption. If all the parameters are within the threshold range, the node is considered normal and the next round of the judgment process begins.
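The classification above can be sketched as a small decision function. The 80%/10% thresholds are the ones used in the OpenStack experiment of Section 4; the exact ordering of the checks is an interpretation of the trigger flow in Figure 3.

```python
UPPER, LOWER = 0.80, 0.10  # thresholds used in the OpenStack experiment

def classify(pred_cpu, pred_mem):
    """Classify a node from its predicted CPU/memory utilization.
    Returns ('overload', t) with overload type t in {0, 1, 2} as defined above,
    ('underload', None) or ('normal', None)."""
    cpu_over, mem_over = pred_cpu > UPPER, pred_mem > UPPER
    if cpu_over and mem_over:
        return 'overload', 0   # type 0: both exceed the upper threshold
    if cpu_over:
        return 'overload', 1   # type 1: only CPU exceeds
    if mem_over:
        return 'overload', 2   # type 2: only memory exceeds
    if pred_cpu < LOWER or pred_mem < LOWER:
        return 'underload', None
    return 'normal', None
```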

The Selection of VM to be Migrated (DTA-VM Selection)
Once an overloaded compute node is detected, the next step is to select the VMs to migrate from the node so as to avoid performance degradation. In this paper, a VM selection strategy (DTA-VM selection) is proposed. First, the VM with the least number of migrations is selected; then, within the selected VM subset, the strategy selects the VM with the minimum migration time. The migration time can be estimated as the amount of RAM used by the VM divided by the network bandwidth available to the server node. The proposed VM selection strategy is shown in Algorithm 1.
VM_k_CPU and VM_k_Mem represent the difference between the utilization of VM k's CPU or memory and D (D_CPU or D_Mem). If the difference is greater than or equal to 0, only one VM needs to be migrated to lower the resource utilization of the server below the upper threshold and this VM is stored in the VMlist_count subset. Within the subset of VMs in VMlist_count that satisfy the minimum migration count, the memory usage of each VM is sorted and the VM with the smallest memory is selected as the final VM to be migrated. If the list VMlist_count is empty, the strategy judges the resource utilization by calculating f from Algorithm 1 for each VM on the overloaded host and selects the VM that can relieve the server load to the greatest extent. When a compute node triggers migration because of too low a load, VM selection is not needed because all the VMs on the node are directly migrated and the node is then shut down to save energy.
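Since Algorithm 1 itself is not reproduced here, the following Python sketch is an interpretation of the description above: the VMlist_count subset and the smallest-memory choice follow the text, while the fallback decision value f is a hypothetical stand-in (the VM contributing the most to the overloaded resource).

```python
def select_vm(vms, d, resource):
    """DTA-VM selection sketch. Each vm is a dict with keys 'cpu' and 'mem'
    (its contribution to node utilization) and 'ram_mb' (memory footprint).
    d is D_CPU or D_Mem; resource is 'cpu' or 'mem'."""
    # VMs whose single migration would bring the node back under the threshold
    vmlist_count = [vm for vm in vms if vm[resource] - d >= 0]
    if vmlist_count:
        # among those, the smallest memory gives the shortest migration time
        return min(vmlist_count, key=lambda vm: vm['ram_mb'])
    # otherwise pick the VM relieving the load the most (stand-in for f)
    return max(vms, key=lambda vm: vm[resource])
```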

The Selection of Target Physical Node (DTA-Target Selection)
When the Nova_Scheduler module selects the target node in OpenStack, it selects the node that meets the VM's resource requirements and has the most remaining memory, regardless of other factors. Therefore, the target server node selection strategy proposed in this paper takes into account not only the current CPU and memory usage of the server node but also the resource usage of the target node after the VM has been migrated to it. The target compute node selection strategy is based on Kalman filter prediction and gives priority to the nodes in the set within the normal threshold range, to reduce the computational workload.
The target node selection strategy is shown in Algorithm 2. For each other server node in the cloud-based BBU pool, prediction is used to estimate the resource utilization of the running nodes that can accept a migration, so as to avoid unnecessary migration. Only when the predicted values are within the threshold range are the decision values F from Algorithm 2, computed from CPU and memory utilization, sorted and the sorted server list returned. The server node listed first is the most suitable target node. The calculation of the F value is similar to that of f in the previous section and there are likewise three cases. The selection of the target node in the underload case is performed according to type 0.
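As a rough illustration of this filtering-then-ranking step, consider the sketch below. The threshold check on the post-migration predicted utilization follows the text; the combined decision value F is not given in this excerpt, so a simple sum of CPU and memory utilization is used as a hypothetical stand-in.

```python
def select_target(nodes, vm, upper=0.80, lower=0.10):
    """DTA-Target selection sketch. Each node: {'name', 'pred_cpu', 'pred_mem'}
    (Kalman-predicted utilization); vm: {'cpu', 'mem'} (load it would add).
    Nodes must stay within [lower, upper] after receiving the VM; candidates
    are then ranked by a combined decision value (stand-in for F)."""
    candidates = []
    for n in nodes:
        cpu_after = n['pred_cpu'] + vm['cpu']
        mem_after = n['pred_mem'] + vm['mem']
        if lower <= cpu_after <= upper and lower <= mem_after <= upper:
            candidates.append((cpu_after + mem_after, n))
    candidates.sort(key=lambda t: t[0])  # least-loaded candidate first
    return [n['name'] for _, n in candidates]
```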

Experimental Environment
The DTA dynamic migration strategy proposed in this paper is first simulated and compared on CloudSim and then applied to the OpenStack cloud platform for concrete implementation. CloudSim [20] is cloud environment simulation software that can simulate virtualized and cloud-based entities, such as data centers, VMs and physical hosts, to model and simulate a cloud computing system. Based on CloudSim, we can implement different resource allocation strategies and evaluate their performance. OpenStack [21] is an open-source project developed jointly by NASA and Rackspace that provides software for building and managing public and private clouds, through which any company or individual can build a cloud computing environment. OpenStack is cloud computing IaaS-layer software, which provides an infrastructure solution for the needs of scalable private and public clouds of all sizes.
(1) The hardware used in the CloudSim simulation experiment is a laptop with a pre-installed Windows 10 (64-bit) operating system (CPU: Intel Core i5-7300HQ, 8 GB memory). Because CloudSim is based on Java, Eclipse is selected as its development platform with JDK 1.8. The simulation parameters are listed in Table 2. The simulation running time is set to 12 hours with a scheduling interval of 300 seconds, during which system resource usage updates, operation information collection and VM scheduling are performed. In order to verify the effectiveness of the proposed strategy, it is necessary to compare it with existing VM selection and target node selection strategies. In this experiment, firstly, the following three VM selection strategies are compared with the DTA-VM selection strategy: the Maximum Correlation (MC) strategy, the Minimum Utilization (MU) strategy and the Random Selection (RS) [9] strategy. Then, the proposed DTA-Target node strategy is compared with the IQR, LR and the most-memory-remaining strategy of OpenStack.
(2) In this experiment, the BBU pool based on the OpenStack cloud environment was built with four physical servers. One of them acts as the cloud controller and the other three act as compute nodes in this cloud environment. The experimental architecture is shown in Figure 4.
The controller node acts as the overall controller, responsible for controlling resources, while the compute nodes act as computing and storage resource nodes in the cloud environment. Through multi-node deployment, server hardware failures can be handled without much difficulty and the dynamic migration of VMs can be performed without interfering with the client. The Keystone, Nova, Glance, Neutron, Horizon and Cinder components of OpenStack are deployed on the controller node, where Nova implements management of the virtual BBU pool, such as creating VMs and performing dynamic migration operations. OpenStack uses a web graphical interface or the command line to control the entire cloud-based BBU pool.

In this experimental deployment architecture, each server is configured with two network interfaces. The private network segment uses interface1, allowing the OpenStack components to communicate with each other, while the service provided by the cloud environment uses interface2, realizing the connection to the Internet. In the cloud system, the node cluster implements passwordless SSH mutual access and dynamic migration is achieved through shared storage. In this scenario, VM instances are stored in shared storage and migration mainly transfers the instance's memory state, which greatly improves migration speed. In this experiment, the dynamic migration mode is set to pre-copy mode.

There is no real implementation of BBU-related protocols on the VMs. A BBU performs operations such as baseband processing or RRH return-signal processing, which place a load on the VMs. This effect can be simulated by scripts running on the VMs. The scripts implement CPU and memory pressure operations, simulating the dynamic load changes brought by the BBU performing signal processing protocols and other functions. In the implementation, the controller and compute nodes run the Ubuntu 16.04 64-bit operating system.
In the OpenStack experimental environment, the controller node is deployed on an Intel i5-6500, 12 GB memory, x86-architecture server with dual network cards. Twelve VMs (Ubuntu 16.04 64-bit OS) are set up and Table 3 shows the deployment of VMs and compute nodes before the migration strategy proposed in this paper is applied.
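For illustration, a load script of the kind described above might look like the following. This is a hypothetical sketch, not the authors' script; the durations and allocation sizes are placeholders.

```python
# Hypothetical CPU/memory pressure sketch, standing in for the load scripts
# described above; durations and sizes are illustrative placeholders.
import time

def cpu_burn(seconds):
    """Busy-loop for the given duration to raise CPU utilization."""
    end = time.time() + seconds
    count = 0
    while time.time() < end:
        count += 1  # pure CPU work
    return count

def mem_pressure(megabytes):
    """Allocate roughly `megabytes` MB to raise memory utilization;
    keep a reference to the returned buffer to hold the pressure."""
    return bytearray(megabytes * 1024 * 1024)
```

Alternating such bursts with idle periods on different VMs produces the irregular CPU and memory utilization curves that drive the trigger strategy.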


Simulation of Migration Strategy on Cloudsim
In order to compare the performance of the proposed migration strategy with that of the existing algorithms, we consider four indicators: the total energy consumption of the BBU pool when performing the work, SLA violation, the total migration number when the dynamic migration occurs and the dynamic migration time.
The first group: First Fit is the default target host placement method. Under the host detection strategies (MAD, IQR, LR, THR), DTA-VM selection is compared with the following three VM selection strategies: MC, MU and RS. The simulation results are shown in Figures 5-8; compared with the existing strategies, DTA-VM selection effectively reduces energy consumption, SLA violations, the number of migrations and the migration time. In terms of SLA violations and the number of migrations, DTA-VM selection achieves a reduction of more than 50% compared to MU and the migration time is roughly halved compared to the other three selection algorithms. Because DTA-VM selection achieves load balancing with as few migrations and as little migration time as possible, the consumption caused by migration and SLA violations is reduced.
The second group: based on the proposed DTA-VM selection and the MC, MU and RS VM selection strategies, the proposed DTA-Target selection is compared with First Fit and the OpenStack "maximum memory remaining" strategy. As shown in Figures 9-12, the proposed DTA-Target selection is not much different in terms of energy consumption. However, it reduces SLA violations by more than 50% and, in terms of the number of migrations and the migration time, DTA-Target selection is far lower than the two existing strategies. This is because DTA-Target selection comprehensively considers the resource state and predicts the status of the target node after the VM migration, reducing the consumption and SLAV caused by repeated migrations.

In summary, when the DTA-VM selection and DTA-Target selection proposed in this paper work together, more desirable effects are obtained on all four performance metrics. The simulation on CloudSim shows that the DTA dynamic migration strategy achieves a significant improvement in performance.

The experiment of Migration Strategy on OpenStack
In order to simulate the effect of dynamic load changes, in this experiment based on the OpenStack cloud platform, the VMs with ID numbers 1-6, 8 and 12 are subjected to irregular CPU or memory stress tests, respectively. The experimental observation time is 270 minutes and the migration strategy deployed on the controller node polls and monitors the resource utilization of the compute nodes every 60 seconds. According to the prediction of the Kalman filter algorithm and the manually set upper and lower thresholds, it is determined whether dynamic migration is needed. In this paper, two groups of experiments are carried out, namely upper threshold overload migration and lower threshold underload migration. The upper and lower threshold values are 80% and 10%, respectively. The dynamic variation of CPU and memory utilization over time is shown in Figures 13-15, which present the experimental results of the dynamic migration for the upper and lower thresholds, respectively. This experiment mainly verifies the feasibility of the dynamic migration strategy. There is a short and unavoidable interruption in the migration process of the VMs, which will be studied in detail in future work.

In the initial stage of the experiment, continuous CPU and memory stress test scripts are executed for all the VMs on the nodes. As can be seen from Figure 13, the resource (CPU and memory) utilization of the three compute nodes is in a steady state during the first 30 minutes. After 30 minutes, the load on compute1 and compute3 is kept unchanged and a more intense load pressure script is run on compute2, so the CPU and memory utilization of compute2 rise. Within about 20 minutes, both the CPU and memory utilization of compute2 exceed the upper threshold but migration is not triggered immediately. Similarly, at about 125 minutes, the CPU utilization of compute1 exceeds the upper threshold but migration is not triggered immediately according to the prediction. This is because the resource utilization exceeding the upper threshold is temporary at these two moments and unnecessary migration operations are avoided based on the Kalman prediction algorithm.

Experiment of the Migration Strategy on OpenStack
In order to simulate the effect of dynamically changing load, this experiment, based on the OpenStack cloud platform, subjects the VMs with ID numbers 1-6, 8 and 12 to irregular CPU or memory stress tests, respectively. The experimental observation time is 270 minutes, and the migration strategy deployed on the controller node polls and monitors the resource utilization of the compute nodes every 60 seconds. Based on the prediction of the Kalman filter algorithm and the manually set upper and lower thresholds, the strategy determines whether dynamic migration is needed. Two groups of experiments are carried out, namely, upper-threshold overload migration and lower-threshold underload migration. The upper and lower threshold values are 80% and 10%, respectively. The dynamic variation of CPU and memory utilization over time is shown in Figures 13-15, which present the experimental results of the upper- and lower-threshold dynamic migration of the migration strategy. This experiment mainly verifies the feasibility of the dynamic migration strategy. There is a short and unavoidable interruption in the migration process of the VMs, which will be studied in detail in future work.
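The polling loop described above can be sketched as a simple per-node classification step. This is a minimal illustration, not the paper's actual implementation: the function name, the predictor interface and the decision that both the current sample and the predicted value must cross the same threshold are assumptions made for the sketch.

```python
# Illustrative sketch of the dual-threshold trigger check run every
# polling round (60 s) for each compute node. Names and the exact
# decision rule are assumptions, not the paper's implementation.

UPPER, LOWER = 0.80, 0.10  # upper/lower utilization thresholds from the paper

def classify_node(cpu, mem, predicted_cpu, predicted_mem):
    """Return 'overload', 'underload' or 'normal' for one compute node.

    Migration is triggered only when both the current sample and the
    Kalman-predicted next value cross the same threshold, which filters
    out the short-lived spikes described in the experiment.
    """
    if (cpu > UPPER or mem > UPPER) and (predicted_cpu > UPPER or predicted_mem > UPPER):
        return "overload"
    if cpu < LOWER and predicted_cpu < LOWER:
        return "underload"
    return "normal"
```

A transient spike (current value over the threshold but predicted value below it) is classified as `normal`, so no migration is triggered.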

In the initial stage of the experiment, continuous CPU and memory stress test scripts are executed on all the VMs of the nodes. As can be seen from Figure 13, the resource (CPU and memory) utilization of the three compute nodes is in a steady state for the first 30 minutes. After 30 minutes, compute1 and compute3 are kept unchanged and a more intense load pressure script is executed on compute2, so the CPU and memory utilization of compute2 rise. Both the CPU and memory utilization of compute2 exceed the upper threshold after about 20 minutes, but migration is not triggered immediately. Similarly, at about 125 minutes, the CPU utilization of compute1 exceeds the upper threshold but, according to the prediction, no migration is performed.
This is because the threshold crossings at these two moments are only temporary, and the unnecessary migration operations are avoided thanks to the Kalman prediction algorithm. As the load utilization of compute2 gradually increases, its CPU utilization exceeds the upper threshold again at around 70 minutes. Combined with the prediction model, compute2 is determined to be an overloaded node and the dynamic migration of a VM is carried out in accordance with the requirements of the migration strategy. At about 105 minutes, the memory utilization of compute2 exceeds the upper threshold and a dynamic migration is again performed based on the Kalman prediction. After the migration, the memory utilization of compute2 drops significantly and its load utilization returns to within the normal threshold range.
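The spike-filtering behavior above rests on the Kalman prediction step. As a hedged illustration (the paper does not give its filter parameters, so the random-walk state model and the noise values `q` and `r` below are assumptions), a one-dimensional Kalman filter over a utilization time series can be sketched as:

```python
# Minimal 1-D Kalman filter for smoothing/predicting a utilization series.
# The random-walk state model and noise parameters q, r are illustrative
# assumptions, not the paper's actual configuration.

class Kalman1D:
    def __init__(self, q=1e-3, r=1e-2):
        self.q, self.r = q, r      # process noise / measurement noise
        self.x, self.p = 0.0, 1.0  # state estimate and its variance

    def update(self, z):
        """Fold in one utilization sample z and return the new estimate."""
        self.p += self.q                # predict: variance grows (random walk)
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x += k * (z - self.x)      # correct toward the measurement
        self.p *= (1 - k)
        return self.x
```

Feeding the filter one sample per 60-second polling round, a single spike moves the estimate only partially toward the measurement, so a transient crossing of the 80% threshold does not by itself push the predicted utilization over the threshold and no migration is triggered.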
When the load pressure on compute2 is reduced at about 130 minutes, as can be seen in Figure 14, the load of the compute nodes gradually becomes stable. From about 160 minutes, the load pressure on compute2 is gradually increased again and both its CPU and memory utilization are above the upper threshold at about 175 minutes. Based on the prediction, a dynamic migration of a VM is performed; after the dynamic migration strategy selection, the VM on compute2 migrates to the compute3 node. It can be seen that the resource utilization of compute2 drops below the upper threshold after the dynamic migration. Since the physical resources of the compute3 node are very abundant, its load utilization increases only slightly after the migration of the virtual machine.

Figure 15 shows the resource utilization of the compute nodes under lower-threshold migration on OpenStack. In this experiment, the intensity of the load pressure scripts executed by the VMs on compute2 and compute3 is unchanged, while the load pressure intensity on compute1 is gradually reduced. From about 240 minutes, the CPU utilization of compute1 stays below the lower threshold and underload migration is determined. All VMs on the node are migrated to other nodes that remain within the threshold range; in this process, they are migrated to compute2 and compute3, whose load utilization increases accordingly. The CPU utilization of compute1 then falls close to 0 and the node is shut down to reduce the energy consumption of the cloud-based BBU pool. The CPU utilization of compute3 also drops below the lower threshold at approximately 242 minutes, but according to the Kalman prediction no dynamic migration is required.
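The target-node selection used in both migration cases above can be sketched as follows. This is a simplified illustration under stated assumptions: the data structures, the additive post-migration load model, and the tie-breaking by lowest combined load are choices made for the sketch, not the paper's exact DTA-Target algorithm.

```python
# Hedged sketch of target-node selection: a VM may only move to a node
# whose estimated post-migration CPU and memory utilization stay within
# the upper threshold. All data structures are illustrative assumptions.

UPPER = 0.80  # upper utilization threshold from the paper

def pick_target(vm, nodes):
    """Return the name of the best target node for vm, or None.

    nodes maps node name -> {'cpu': ..., 'mem': ...} current utilization;
    vm gives the utilization it would add on the target. Among feasible
    candidates, the one with the lowest combined post-migration load wins.
    """
    candidates = []
    for name, load in nodes.items():
        cpu_after = load["cpu"] + vm["cpu"]
        mem_after = load["mem"] + vm["mem"]
        if cpu_after <= UPPER and mem_after <= UPPER:
            candidates.append((cpu_after + mem_after, name))
    return min(candidates)[1] if candidates else None
```

For underload migration, this check is run for every VM on the source node; only if each VM finds a feasible target are they all migrated and the source node powered down, as happens to compute1 in Figure 15.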
Due to the limited experimental conditions, there are only three compute nodes in the OpenStack environment; nevertheless, the above experiments show that the system and strategy designed in this paper are feasible. Combined with the simulation of the migration strategy in the third section, the effectiveness of the proposed migration strategy is demonstrated.

Conclusions
In the virtual BBU pool, it is necessary to reduce energy consumption without increasing SLA violations. This paper proposes a dual threshold adaptive (DTA) dynamic migration strategy, comprising a migration trigger mechanism based on Kalman filter prediction, a DTA-VM selection strategy and a DTA-Target selection strategy, which together determine the resource utilization of the servers in the BBU pool more accurately and trigger migrations accordingly. In the strategy, the VM to migrate from an overloaded host is selected based on the minimum number of migrations and the minimum migration time, avoiding the extra energy consumption caused by repeated VM migrations in the BBU pool. The DTA-Target selection strategy predicts whether the resource utilization after the migration meets the threshold requirements, comprehensively considers CPU and memory utilization, and selects a target node so as to reduce migration failures caused by insufficient physical resources on the target node. Through the DTA migration strategy, adaptive dynamic migration is realized, reducing energy consumption and SLA violations in the BBU pool. The experimental results show that the DTA migration strategy proposed in this paper achieves better performance in the cloud center. The migration strategy is further realized in the BBU pool based on the OpenStack platform: without manual intervention, it automatically responds to workload changes and saves energy. In the future, the authors will conduct more in-depth research on the actual deployment and integration of BBU functions with OpenStack.
Author Contributions: C.W. proposed the basic framework of the research scenario. In addition, C.W. was in charge of modeling the problem and proposed the migration strategy. Y.C. performed the simulations and wrote the paper. Z.Z. provided suggestions for platform building. W.W. gave some suggestions on the mathematical model and formula derivation. All authors have read and agreed to the published version of the manuscript.