A virtual machine with a conventional offloading scheme transmits and receives all context information to maintain program consistency between the local environment and the cloud server environment. Most of the overhead incurred during offloading is proportional to the size of the context information transmitted over the network. Consequently, the existing context synchronization structure transmits context information that is not required for job execution, which increases the transmission overhead on low-performance Internet-of-Things (IoT) devices. In addition, the optimal offloading point should be determined by checking the server's CPU usage and the network quality. In this study, we propose a context management method that extracts the contexts requiring synchronization through static profiling, together with a method for estimating the CPU load of a cloud-based offloading service using a hybrid deep neural network. The proposed adaptive offloading method reduces network communication overhead and determines the optimal offloading time for low-computing-power IoT devices under variable server performance. Through experiments, we verify that the proposed learning-based prediction method effectively estimates the CPU load model for IoT devices and that offloading can be applied adaptively according to the server load.
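The offloading decision the abstract describes can be sketched as a simple cost comparison: offload only when the estimated remote cost (context-transfer time plus execution under the predicted server CPU load) is lower than the estimated local cost. The following is a minimal illustrative sketch; all function names and cost models are assumptions for exposition, not the paper's actual implementation.

```python
def estimated_transfer_cost(context_bytes: int, bandwidth_bps: float) -> float:
    """Transfer time for the synchronized context, proportional to its size."""
    return context_bytes * 8 / bandwidth_bps

def estimated_remote_exec_cost(workload: float, server_cpu_load: float) -> float:
    """Remote execution slows down as the predicted server CPU load rises."""
    return workload / max(1e-9, 1.0 - server_cpu_load)

def should_offload(local_exec_cost: float,
                   context_bytes: int,
                   bandwidth_bps: float,
                   workload: float,
                   predicted_server_load: float) -> bool:
    """Offload only if the total remote cost beats running locally."""
    remote = (estimated_transfer_cost(context_bytes, bandwidth_bps)
              + estimated_remote_exec_cost(workload, predicted_server_load))
    return remote < local_exec_cost

# Fast network and lightly loaded server: offloading pays off.
print(should_offload(10.0, 50_000, 10e6, 2.0, 0.2))   # True

# Same job, but the server is nearly saturated: stay local.
print(should_offload(10.0, 50_000, 10e6, 2.0, 0.95))  # False
```

In the paper's setting, `predicted_server_load` would come from the hybrid deep neural network's estimate, and `context_bytes` would shrink because static profiling limits synchronization to only the contexts the offloaded job actually needs.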
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.