Resource Allocation for Edge Computing without Using Cloud Center in Smart Home Environment: A Pricing Approach

Recently, smart homes have become an important part of home infrastructure. However, most smart home applications are not interconnected and remain isolated. They use the cloud center as the control platform, which increases the risk of link congestion and threatens data security. Thus, smart homes based on edge computing without using cloud center are becoming an important research area. In this paper, we assume that all applications in a smart home environment are composed of edge nodes and users. In order to maximize the utility of users, we place all users and edge nodes in a market and formulate a pricing-based resource allocation model with utility maximization. We apply the Lagrangian method to analyze the model, so that an edge node (a provider in the market) allocates its resources to a user (a customer in the market) based on the prices of resources and a utility related to the user's preferences. To obtain the optimal resource allocation, we propose a pricing-based resource allocation algorithm using a low-pass filtering scheme and confirm through numerical examples that the proposed algorithm achieves the optimum within a reasonable number of convergence steps.


Introduction
With the emergence and rapid development of the Internet of Things (IoT), Internet technology is moving towards "intelligence of everything" on the basis of the "Internet of Everything" (IoE) [1]. People's quality of life is steadily improving, and they are looking for more convenient and comfortable services enabled by technological progress. Currently, smart home applications, represented by smart speakers, sweeping robots and smart air conditioners, are becoming an indispensable part of many users' lives [2]. In China, the smart home market is growing at a rate of 20% to 30% per year. According to the Prospective Industrial Research Institute, China's smart home market is expected to reach ¥436.9 million in 2021 [3]. A growing number of smart home applications are entering millions of homes, from smart washing machines to smart vacuum cleaners, from smart door locks to telemedicine; all of them reflect the charm and potential of technology. The new technologies of smart homes and the Internet of Things will meet people's pursuit of a "livable, comfortable, convenient and safe" living environment [4].
The Smart Grid (SG) is considered an imminent future power network due to its fault identification and self-healing capabilities. As one of the most significant terminal units in the smart grid, the smart home will inevitably become deeply integrated with the SG, a development trend mainly reflected in three aspects. Firstly, in terms of smart home energy production and utilization, the openness of the SG determines that clients have the dual

In this paper, we apply resource pricing and utility theory from economics to investigate the resource allocation and scheduling of edge computing without using cloud center for the smart home environment. To keep the modeling process simple, we take bandwidth resources as an example to investigate the resource allocation of edge computing for smart homes. Although the available bandwidth at present significantly exceeds local application demands, we think that bandwidth allocation in the smart home environment is worth discussing for the future, for the following reasons. On the one hand, at the supply side, the last decade has witnessed an explosion of data traffic over the network attributed to the development of mobile communication. With the advent and extension of fifth-generation (5G) mobile networks, an exponential rise in data demand and a growing demand for bandwidth are inevitable. On the other hand, at the demand side, with the improvement of people's quality of life and the soaring demand for more convenient and comfortable services, a variety of IoT and smart home applications that require low latency and high bandwidth, including 4K/8K Ultra High Definition (UHD) video and virtual/augmented reality (VR/AR), may appear in our lives. We also divide the tasks generated by smart home applications into two types: tasks with low requirement on latency and high requirement on processing capacity, and tasks with high requirement on latency and low requirement on processing capacity. For the first type of tasks, we introduce a utility function aiming at optimizing resource transmission volume, while for the second type, we introduce a utility function aiming at optimizing resource transmission delay.
Different utility functions are chosen to describe and maximize the characteristics of users when dealing with these two types of tasks. Based on the principle of utility maximization and the rules of market equilibrium, we propose the utility maximization model of resource allocation for the different tasks of edge computing without using cloud center. Subsequently, we analyze the optimal resource allocation for the two types of tasks and obtain the optimum for each. Then, we design the resource allocation algorithm using the Lagrangian method and low-pass filtering theory, which helps eliminate oscillations and increase convergence speed. Finally, some numerical examples are given to verify the performance of the algorithm for the two types of tasks.
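The low-pass filtering idea mentioned above can be illustrated with a minimal sketch (our own illustration, not the paper's exact algorithm): a first-order filter applied to an oscillating price iterate damps the oscillation around the equilibrium price. The function name `low_pass_filter` and the coefficient `beta` are our own choices.

```python
# A minimal sketch (our own illustration, not the paper's exact algorithm) of
# a low-pass filtering scheme for price iterates:
#   lambda_bar(t+1) = beta * lambda_bar(t) + (1 - beta) * lambda(t+1)

def low_pass_filter(prices, beta=0.9):
    """Return the low-pass-filtered version of a price sequence."""
    filtered = [prices[0]]
    for price in prices[1:]:
        filtered.append(beta * filtered[-1] + (1.0 - beta) * price)
    return filtered

# A raw price iterate oscillating with amplitude 0.5 around the equilibrium 1.0.
raw = [1.0 + 0.5 * (-1) ** t for t in range(200)]
smooth = low_pass_filter(raw)

# After a warm-up period the filtered price stays much closer to 1.0.
raw_dev = max(abs(p - 1.0) for p in raw[100:])
smooth_dev = max(abs(p - 1.0) for p in smooth[100:])
```

With `beta = 0.9` the residual oscillation amplitude of an alternating input is reduced by roughly a factor of (1 - beta) / (1 + beta), about 0.05 here, which is the kind of damping that stabilizes an otherwise oscillating price iteration.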

Related Work
With the economic boom and technical advancement, an increasing number of smart devices find their way into people's daily lives, and they have also received a great deal of research attention from communities such as academia, business and government. Lee et al. [24] designed a two-level Deep Reinforcement Learning (DRL) framework for optimal energy management of smart homes. In their simulation, two agents (an air conditioner and a washing machine) interact with each other to schedule home energy consumption efficiently. Al-Ali et al. [25] presented an energy management system for smart homes that better meets the needs of clients through big data analytics and business intelligence. Wang et al. [26] proposed a new task scheduling approach to fulfil the requirements of smart homes and healthcare regarding time-critical situations of the elderly or patients, using edge computing-themed processing schemes with a focus on real-time issues. Procopiou et al. [27] proposed a lightweight detection algorithm for IoT devices based on chaos prediction and chaos theory, the Chaos Algorithm (CA), which identifies Flooding and Distributed Denial of Service (DDoS) attacks to provide a secure network environment for IoT-based smart home systems. Yang et al. [28] proposed an IoT-based smart home security monitoring system that enhances the security performance of the system by improving the False Positive Rate (FPR) and reducing network latency compared with traditional security systems. At present, for the problem of resource allocation for edge computing without using cloud center, there are two main research approaches. The first is from the user's perspective: Guo et al. [22] illustrated that filtering out appropriate edge nodes to perform computational migration is an important method for achieving optimal resource allocation.
The authors assumed that all users are within range of edge computing hotspots and have access to multiple edge servers. They proposed an optimal task scheduling strategy based on a discrete Markov decision process (MDP) framework to achieve the optimal allocation of tasks and resources. Meanwhile, they also presented an index-based allocation strategy to reduce the computational complexity and communication overheads associated with implementing this policy. Finally, each user can find the most suitable edge server to minimize energy consumption and latency. The second is from the edge node's perspective: Oueis et al. [23] analyzed the impact of cluster size (i.e., the number of edge nodes performing computational tasks) on service latency and the energy consumption of edge nodes. Their research showed that increasing the number of edge nodes does not always reduce execution latency; if the transmission latency is longer than the computational latency at the edge nodes, the overall service latency may increase. Therefore, proper construction methods for edge node clusters and node selection methods play a crucial role in system performance. Oueis et al. [29] proposed three different clustering strategies regarding the optimization of service latency, the overall energy consumption of the clusters and the energy consumption of the nodes of the clusters, in order to validate the impact of different clustering strategies on edge node cluster characteristics (size, latency and energy consumption). Li et al. [30] proposed a self-similarity-based load balancing (SSLB) mechanism for large-scale fog computing which focuses on the applications processed on fog infrastructure. They also presented an adaptive threshold policy and a corresponding scheduling algorithm, which guarantee the efficiency of SSLB. Oueis et al. [31] designed a two-step method for implementing user task scheduling and resource allocation of nodes.
The first step is local resource allocation, where each edge node allocates its resources to nearby users according to specific scheduling rules; the second step is the creation of a cluster of edge nodes for the users that were not allocated compute resources in the first step. Sun et al. [32] proposed a two-level resource scheduling model, which includes resource scheduling among various fog clusters and resource scheduling among fog nodes in the same fog cluster. In addition, they argued that the edge layer and the terminal layer partially intersect, because the mobile terminal devices in these intersections are not only fog resource requesters but also resource providers.
Zhou et al. [33] proposed a cloud resource scheduling algorithm based on a Markov prediction model to solve the problems of task scheduling and load balancing when cloud service nodes fail, including judging node load degree, selecting migrated tasks and nodes, and deciding migration routing. The goal is to achieve rapid cloud service recovery and to improve the reliability of cloud services. Zheng et al. [34] proposed an energy consumption optimization problem that considers latency performance. The model includes the energy consumption and latency in the execution and transmission processes of local devices, fog nodes and cloud centers. Liu et al. [11] presented a joint model including consumption, latency and payment costs in fog computing heterogeneous networks to solve a multi-objective optimization problem between nodes and users using queueing theory and operations research. Wang et al. [35] designed an offloading system for real-time traffic management based on a fog-node-based Internet of Vehicles (IoV) system, which extends cloud computing by using parked and moving vehicles as fog nodes.
In research on resource allocation in networks, Li et al. [36] applied a hybrid cloud resource optimization model to the mobile client and used a two-stage optimization process to provide a scheme for resource allocation of mobile clients in the mobile cloud environment. The first stage emphasizes that mobile cloud users achieve utility optimization under cost and energy constraints, and in the second stage mobile cloud providers run multiple servers within the energy loss limit to perform the work of mobile cloud users. Li et al. [37] proposed a resource allocation model for elastic services based on utility optimization, with the objective of maximizing network utility from the perspective of users' utility optimization. In allocating bandwidth resources for migrating enterprise applications to the cloud, Li et al. [38] set the objective function as a transmission time optimization problem, which increases the utility and satisfaction of enterprise users by minimizing cloud migration time. Nguyen et al. [39] elaborated a price-based edge computing method of resource allocation, which models multiple services competing for the limited capacity of heterogeneous edge nodes at the network edge within a market-based resource allocation framework. The framework was proved to realize a Pareto-optimal equilibrium allocation that satisfies the fairness, proportionality and sharing-incentive expectations of allocation.

Resource Allocation Model of Edge Computing without Using Cloud Center for Smart Homes
In a smart home environment, we assume that all smart home applications are divided into edge nodes and users, mainly depending on whether their capacities meet task requirements. The tasks generated by the users in edge computing without using cloud center can only be solved at the edge nodes. Although the edge nodes are topologically closer to the users, this paradigm has significant limitations when it comes to processing different types of tasks, especially tasks with large computation and storage demands. In fact, a large variety of tasks may appear in edge computing. We assume that the tasks in the smart home environment can be divided into two types: one corresponds to the tasks with low requirement on latency and high requirement on processing capacity, and the other is associated with the tasks with high requirement on latency and low requirement on processing capacity. In this section, inspired by the concept of utility in economics, we formulate the resource allocation model of edge computing for the smart home environment without using cloud center and adopt different utility strategies to properly process the different types of tasks.

Model Description
In the traditional edge computing structure, the remote cloud center, edge nodes and users form a close relationship through the networks. However, considering the data security and privacy of smart homes, avoiding the hosting costs incurred by uploading tasks to the cloud center, and reducing the latency of smart home applications, we design a new edge computing structure for the smart home that does not require the involvement of the cloud center. As shown in Figure 2, a user represents a smart home application whose capacity does not meet its task requirements, and an edge node is a smart home application at the bottom of the network structure that has spare computing power and provides compute and storage resources for its surrounding users. The arrow direction represents the tasks generated by users being processed by nearby edge nodes. This spare computing power eliminates the need to transmit data to the cloud center and decentralizes processing power to ensure real-time processing with reduced latency, while diminishing bandwidth and network storage requirements. In fact, in a smart home environment many smart home applications act both as users sending task requests to surrounding edge nodes and as edge nodes receiving and resolving tasks from surrounding users. For example, a mobile phone can act as an edge node between a smart bracelet and a cloud center, processing service requests from the bracelet about exercise and health monitoring, and as a user generating tasks about games, videos, social networks, etc. However, due to the lack of a cloud center, the tasks generated by users can only be processed by the adjacent edge nodes in a smart home environment. To process the various tasks with appropriate strategies, the tasks generated by users or received by edge nodes are divided into two types.
As shown in Figure 2, the red tasks represent the tasks with high requirement on latency and low requirement on processing capacity (for example, game commands on a phone or shopping online via Taobao VR), and the green tasks represent the tasks with low requirement on latency and high requirement on processing capacity (for example, watching HD video on a smart TV). By adopting different processing strategies for the two kinds of tasks, the model can take advantage of the relationship between edge nodes and users to solve resource allocation in edge computing without the participation of the cloud center.


Model Formulation
Due to security and privacy requirements, it is better to divide the entire network into multiple non-connected regions. In this paper, we focus on a smart home environment, which should be regarded as an independent region to avoid network intrusion and attack. Thus, we assume that all smart home applications are in one home (namely an independent region), are not connected with a cloud center, and together form a resource allocation market. In the market, all nodes (smart home applications) are divided into customers (users) and resource providers (edge nodes) according to the previous section. In the following description, we use "resource provider/provider" instead of edge node and "resource customer/customer" instead of user. We take bandwidth resources as an example to intuitively analyze the resource allocation of edge computing in a smart home environment. Each customer sends a task processing request to one or more resource providers. Each resource provider receives requests from one or several customers and provides them with an appropriate amount of resources. In order to differentiate between resource-providing nodes and resource-requesting nodes in the resource allocation model and to distinguish resource allocation requirements for different types of tasks, we introduce the set $P$ of resource providers, the set $R$ of resource customers for the tasks with high requirement on latency and low requirement on processing capacity, and the set $S$ of resource customers for the tasks with low requirement on latency and high requirement on processing capacity. A node is a member of $P$, $R$ or $S$ if it provides or requests at least one content of the two kinds of tasks, respectively. Of course, one node can be a member of all three sets. Let $P(r)$ or $P(s)$ be the set of providers offering resources for customer $r \in R$ or $s \in S$. Thus, if a provider $p \in P$ offers resources for customer $r \in R$ or $s \in S$, then $p \in P(r)$ or $p \in P(s)$.
Also, let $S(p)$ or $R(p)$ be the set of customers that request resources from provider $p$ for the two types of tasks. Thus, if customer $s$ or $r$ requests resources from provider $p$, then $s \in S(p)$ or $r \in R(p)$. Note that $p \in P(s)$ or $p \in P(r)$ holds only when $s \in S(p)$ or $r \in R(p)$, respectively.
Assume that the resource allocation between providers and customers in a smart home environment can be viewed as offering a certain amount of requested bandwidth. Each provider receives requests from customers and serves them by providing a certain amount of bandwidth. When processing a task with low requirement on latency and high requirement on processing capacity, denote the resources granted by provider $p$ to customer $s$ as $x_{ps}$; thus the aggregate resources granted by providers to customer $s$ are $y_s = \sum_{p \in P(s)} x_{ps}$, the resources provided by provider $p$ to all its customers are $z_p^L = \sum_{s \in S(p)} x_{ps}$, and the utility generated by customer $s$ is $U_s(y_s)$. Similarly, when processing a task with high requirement on latency and low requirement on processing capacity, denote the resources granted by provider $p$ to customer $r$ as $x_{pr}$; thus the aggregate resources granted by providers to customer $r$ are $y_r = \sum_{p \in P(r)} x_{pr}$, the resources provided by provider $p$ to all its customers are $z_p^H = \sum_{r \in R(p)} x_{pr}$, and the utility generated by customer $r$ is $U_r(y_r)$. Meanwhile, the resource allocation between providers and customers is subject to their capacities. Denote by $C_r^d$, $C_s^d$ the bandwidth capacities of the downstream links of customers $r$ and $s$, respectively. Denote by $C_p^{uH}$, $C_p^{uL}$ the reserved bandwidth capacities of the upstream link of provider $p$ for the tasks with high requirement on latency and the tasks with low requirement on latency, respectively, as discussed in the previous part. Therefore, for each provider $p$, we obtain $z_p^L \le C_p^{uL}$, which means that the total granted resources for customers requesting the tasks with low requirement on latency and high requirement on processing capacity do not exceed the reserved upstream link capacity $C_p^{uL}$.
For each provider $p$, we also obtain $z_p^H \le C_p^{uH}$, which means that the total granted resources for customers requesting the tasks with high requirement on latency and low requirement on processing capacity do not exceed the reserved upstream link capacity $C_p^{uH}$. For each customer $s$, $y_s \le C_s^d$, and for each customer $r$, $y_r \le C_r^d$, which means that the total obtained bandwidth resources do not exceed the downstream link capacity of each customer. This paper considers the aggregated utility of all resource customers, which avoids deviations caused by the different resource-satisfaction utilities of individual customers. Then the optimization problem of the edge computing resource allocation model without using cloud center can be modelled as follows:

maximize $\sum_{s \in S} U_s(y_s) + \sum_{r \in R} U_r(y_r)$

subject to
$\sum_{p \in P(s)} x_{ps} = y_s, \quad \forall s \in S$
$\sum_{p \in P(r)} x_{pr} = y_r, \quad \forall r \in R$
$\sum_{p \in P(s)} x_{ps} \le C_s^d, \quad \forall s \in S$
$\sum_{p \in P(r)} x_{pr} \le C_r^d, \quad \forall r \in R$
$\sum_{s \in S(p)} x_{ps} \le C_p^{uL}, \quad \forall p \in P$
$\sum_{r \in R(p)} x_{pr} \le C_p^{uH}, \quad \forall p \in P$
$x_{ps} \ge 0, \; x_{pr} \ge 0$

The utility functions we use in this paper were described in the previous literature. Equation (2) is used to achieve a distinguished variant of utility-based fairness, which has drawn a great deal of attention and interest in recent years [40][41][42][43]. Equation (3) is a variant of fairness that reduces to the minimization of time delay and has also been receiving increasing attention [44,45].
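To make the constraint structure of the resource allocation model concrete, the following toy instance (hypothetical numbers and names, chosen only for illustration) builds the sets $P(s)$, $S(p)$ and $R(p)$ and checks a candidate allocation against the upstream and downstream capacity constraints:

```python
# Toy instance (hypothetical numbers) checking the constraints of the
# resource allocation model: providers p1, p2, one type-1 customer s1 and
# one type-2 customer r1.

P_of = {"s1": ["p1", "p2"], "r1": ["p1"]}   # P(s), P(r): providers serving each customer
S_of = {"p1": ["s1"], "p2": ["s1"]}         # S(p): type-1 customers of each provider
R_of = {"p1": ["r1"], "p2": []}             # R(p): type-2 customers of each provider

x = {("p1", "s1"): 3.0, ("p2", "s1"): 2.0, ("p1", "r1"): 1.5}  # granted bandwidth

C_up_L = {"p1": 4.0, "p2": 2.5}   # reserved upstream capacity for type-1 tasks
C_up_H = {"p1": 2.0, "p2": 1.0}   # reserved upstream capacity for type-2 tasks
C_down = {"s1": 6.0, "r1": 2.0}   # downstream capacity of each customer

y_s1 = sum(x[(p, "s1")] for p in P_of["s1"])  # aggregate resources of customer s1
y_r1 = sum(x[(p, "r1")] for p in P_of["r1"])  # aggregate resources of customer r1
z_L = {p: sum(x[(p, s)] for s in S_of[p]) for p in S_of}
z_H = {p: sum(x[(p, r)] for r in R_of[p]) for p in R_of}

feasible = (y_s1 <= C_down["s1"] and y_r1 <= C_down["r1"]
            and all(z_L[p] <= C_up_L[p] for p in z_L)
            and all(z_H[p] <= C_up_H[p] for p in z_H))
```

Here the allocation is feasible: customer s1 receives 5.0 units against a downstream capacity of 6.0, and every provider stays within its reserved upstream capacities.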
In Equation (2), $\omega_s$ refers to the willingness to pay (WTP) of customer $s$ for processing the tasks with low requirement on latency and high requirement on processing capacity [46,47]. In Equation (3), $M_r$ refers to the amount of task data for customer $r$, $T_r$ refers to the time limit for customer $r$, and $B_r$ refers to the minimum bandwidth requirement for customer $r$ during resource allocation.
When $y_r \le B_r$, $\left(T_r - \frac{M_r}{y_r}\right) \times \frac{\operatorname{sgn}(y_r - B_r) + 1}{2} = 0$, and when $y_r > B_r$, $\left(T_r - \frac{M_r}{y_r}\right) \times \frac{\operatorname{sgn}(y_r - B_r) + 1}{2} > 0$ [48]. The notations used in this paper are summarized in Table 1.
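The two utility functions can be sketched as follows. Equation (3) is implemented directly from the sgn-based expression above; for Equation (2), whose exact form is not reproduced in this excerpt, we assume the standard logarithmic utility $\omega_s \ln y_s$ common in the utility-fairness literature cited here; this is an assumption, not necessarily the paper's exact choice.

```python
import math

def U_s(y, omega):
    """Utility for type-1 tasks. We assume the standard logarithmic form
    omega * ln(y) for Equation (2); the exact form is not shown in this excerpt."""
    return omega * math.log(y)

def U_r(y, M, T, B):
    """Utility for type-2 tasks, following the sgn-based expression of
    Equation (3): zero at or below the minimum bandwidth B, positive above it."""
    sgn = (y > B) - (y < B)           # sign of (y - B)
    return (T - M / y) * (sgn + 1) / 2

# With M = 10 units of task data, time limit T = 5 and minimum bandwidth
# B = M / T = 2, the utility vanishes up to B and becomes positive beyond it.
```

For example, `U_r(1.0, 10.0, 5.0, 2.0)` is 0 (below the minimum bandwidth), while `U_r(4.0, 10.0, 5.0, 2.0)` equals 2.5, the residual time margin $T_r - M_r / y_r$.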

Table 1. Notations and meanings.

$S$: the set of resource customers (users) for the tasks with low requirement on latency and high requirement on processing capacity; each element is a customer $s$
$R$: the set of resource customers (users) for the tasks with high requirement on latency and low requirement on processing capacity; each element is a customer $r$
$P$: the set of resource providers (edge nodes); each element is a provider $p$
$P(s)$: the set of providers offering resources for customer $s \in S$
$P(r)$: the set of providers offering resources for customer $r \in R$
$S(p)$: the set of customers in $S$ that request resources from provider $p$
$R(p)$: the set of customers in $R$ that request resources from provider $p$
$x_{ps}$: the resources granted by provider $p$ to customer $s$
$x_{pr}$: the resources granted by provider $p$ to customer $r$
$y_s$: the aggregate resources granted by providers to customer $s$, $y_s = \sum_{p \in P(s)} x_{ps}$
$y_r$: the aggregate resources granted by providers to customer $r$, $y_r = \sum_{p \in P(r)} x_{pr}$
$z_p^L$: the resources provided by provider $p$ to all its customers in $S(p)$, $z_p^L = \sum_{s \in S(p)} x_{ps}$
$z_p^H$: the resources provided by provider $p$ to all its customers in $R(p)$, $z_p^H = \sum_{r \in R(p)} x_{pr}$
$U_s(y_s)$: the utility generated by customer $s$
$U_r(y_r)$: the utility generated by customer $r$

Model Analysis
In the previous section, we established the mathematical model of edge computing without using cloud center for resource allocation in a smart home environment. In this section, we analyze the resource allocation model, regard it as the original problem, and give the Lagrangian function of this nonlinear optimization problem:

$L(x, y; \lambda, \mu, \nu) = \sum_{s \in S} U_s(y_s) + \sum_{r \in R} U_r(y_r) + \sum_{s \in S} \lambda_s \Big( \sum_{p \in P(s)} x_{ps} - y_s \Big) + \sum_{r \in R} \lambda_r \Big( \sum_{p \in P(r)} x_{pr} - y_r \Big) + \sum_{p \in P} \mu_p^L \Big( C_p^{uL} - \sum_{s \in S(p)} x_{ps} - \delta^2 \Big) + \sum_{p \in P} \mu_p^H \Big( C_p^{uH} - \sum_{r \in R(p)} x_{pr} - \delta^2 \Big) + \sum_{s \in S} \nu_s \Big( C_s^d - \sum_{p \in P(s)} x_{ps} - \delta^2 \Big) + \sum_{r \in R} \nu_r \Big( C_r^d - \sum_{p \in P(r)} x_{pr} - \delta^2 \Big)$

where $\lambda$ is the price vector with elements $\lambda_s, \lambda_r \ge 0$, which can be considered the prices per unit bandwidth paid by customers $s$ and $r$, respectively; $\mu$ is the price vector with elements $\mu_p^L, \mu_p^H \ge 0$, which can be considered the prices per unit bandwidth charged by the upload link of provider $p$ for the two types of tasks; $\nu$ is the price vector with elements $\nu_s, \nu_r \ge 0$, which can be considered the prices paid by the download links of customers $s$ and $r$, respectively; and $\delta^2$ is regarded as the slack variable.
The first two terms in the above equation take $y_s$ and $y_r$ as independent variables, and the third and fourth terms take $x_{ps}$ and $x_{pr}$ as independent variables. The objective function of the dual problem can be written as

$D(\lambda, \mu, \nu) = \max_{x, y \ge 0} L(x, y; \lambda, \mu, \nu)$

From this we can derive the optimal resource allocation. For the tasks with low requirement on latency and high requirement on processing capacity it can be expressed as

$y_s^*(\lambda_s) = \arg\max_{y_s \ge 0} \left[ U_s(y_s) - \lambda_s y_s \right]$, where $\sum_{p \in P(s)} x_{ps}(\lambda_s) = y_s^*(\lambda_s)$.

The optimal resource allocation for the tasks with high requirement on latency and low requirement on processing capacity can be expressed as

$y_r^*(\lambda_r) = \arg\max_{y_r \ge 0} \left[ U_r(y_r) - \lambda_r y_r \right]$, where $\sum_{p \in P(r)} x_{pr}(\lambda_r) = y_r^*(\lambda_r)$.

From the perspective of the resource customer, customers $s$ and $r$ try to maximize their own utilities, which are determined by the allocated bandwidth resources $y_s$, $y_r$ they obtain. The customers must pay for the bandwidth resources they use; $\lambda_s y_s$, $\lambda_r y_r$ are the costs that customers $s$ and $r$ are willing to pay, respectively. $\nu_s$, $\nu_r$ are the prices charged by the download links of customers $s$ and $r$, so $x_{ps} \nu_s$, $x_{pr} \nu_r$ are the costs charged by the download links of customers $s$ and $r$. From the perspective of the resource provider, each provider $p$ tries to maximize its revenue. $\mu_p^L$, $\mu_p^H$ are the prices per unit of bandwidth charged by the upload link of provider $p$ for the two types of tasks, so $x_{ps} \mu_p^L$, $x_{pr} \mu_p^H$ are the costs charged by the upload link of provider $p$ for the two types of tasks. Thus, the dual problem of the resource allocation model is

$\min_{\lambda, \mu, \nu \ge 0} D(\lambda, \mu, \nu)$

The objective of the dual problem is to minimize the total cost of transmission for all nodes while ensuring a certain level of satisfaction for customers.

Theorem 1.
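To see how prices coordinate the allocation, consider the following illustrative dual (price-based) iteration on a toy instance of our own, not the paper's full algorithm: a single provider serves two type-1 customers with assumed logarithmic utilities $w_i \ln y_i$, so each customer's utility-maximizing demand at price $\mu$ is $w_i / \mu$, and the provider adjusts its price in the direction of excess demand, a simple tatonnement corresponding to dual gradient descent.

```python
# Illustrative dual (price-based) iteration: one provider with upstream
# capacity C serving two customers with assumed log utilities w_i * ln(y),
# so demand at price mu is y_i(mu) = w_i / mu. The price moves in the
# direction of excess demand (dual gradient step).

C = 10.0            # upstream capacity of the provider
w = [1.0, 3.0]      # willingness-to-pay of the two customers
mu = 1.0            # initial price per unit bandwidth
gamma = 0.01        # step size

for _ in range(5000):
    demand = sum(wi / mu for wi in w)          # utility-maximizing demands
    mu = max(1e-6, mu + gamma * (demand - C))  # raise price if over-demanded

y = [wi / mu for wi in w]
# At equilibrium the price settles at mu* = (w[0] + w[1]) / C = 0.4, so the
# allocation is y = [2.5, 7.5] and the capacity constraint is tight.
```

The equilibrium price makes aggregate demand exactly match the provider's capacity, consistent with the market-equilibrium interpretation of the dual problem above.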
The utility function is concave with respect to its variables for both types of tasks, so the optimal resource allocation for each customer and the optimal objective value of the original problem can be obtained, and the optimal objective value of the original problem equals the optimal objective value of the dual problem, i.e., $U^* = D^*$; however, the optimal resource allocation provided by each provider, i.e., $x_{ps}^*$, $x_{pr}^*$, is not necessarily unique.
Proof of Theorem 1. By convex optimization theory [49], since the objective function $\sum_{s \in S} U_s(y_s) + \sum_{r \in R} U_r(y_r)$ for both types of tasks is concave with respect to its variables and the set of constraints in the resource allocation model is convex, the resource allocation model is a convex optimization problem. Then the optimal resource allocation can be obtained by applying the convex optimization approach. The optimal total resource allocation for each customer, i.e., $y_s^*$, $y_r^*$, is unique, but the optimal resources provided by each provider, i.e., $x_{ps}^*$, $x_{pr}^*$, may not be unique. The result is obtained.

Theorem 2.
When the optimum of the resource allocation model is achieved, if two providers simultaneously provide resources to the same customer, then the prices charged by these two providers are equal. This result also holds for resource customers with the other type of tasks.
Proof of Theorem 2. 1. If customers receive tasks with low requirement on latency and high requirement on processing capacity, then, when the optimum of the resource allocation model is achieved, the Karush-Kuhn-Tucker (KKT) conditions yield a necessary condition for the existence of an optimal solution to the resource allocation problem. For two providers that offer resources to the same customer, e.g., p_1, p_2 ∈ P(s), the optimal prices charged by these two providers are equal, i.e., μ^L*_{p_1} = μ^L*_{p_2}. 2. If customers receive tasks with high requirement on latency and low requirement on processing capacity, then, when the optimum of the resource allocation model is achieved, the KKT conditions again yield a necessary condition for the existence of an optimal solution, and for two providers that provide resources to the same customer, e.g., p_1, p_2 ∈ P(r), the optimal prices charged by these two providers are equal, i.e., μ^H*_{p_1} = μ^H*_{p_2}. The result is obtained.

Optimal Resource Allocation for Smart Homes
To maximize the aggregated utility of the resource allocation model in the smart home environment, we need to maximize, respectively, the aggregated utility of the model for processing the tasks with high requirement on latency and low requirement on processing capacity and the aggregated utility of the model for processing the tasks with low requirement on latency and high requirement on processing capacity. The upload link of a resource provider is often regarded as a scarce resource, and the corresponding constraints in the model are always active. In this section, we assume that only the providers' upload constraints are considered when analyzing the optimal resource allocation. However, it is also critical that the amount of bandwidth resource received does not exceed the limit of the download link. In the Lagrangian function, δ_2 refers to the remaining bandwidth resource of a participant in the resource allocation. According to the KKT conditions, when δ_2 = 0 the constraint on the participant's bandwidth resource is active, and when δ_2 > 0 it is non-active. In the following analysis we assume δ_2 = 0, which means the remaining resources of the provider and the spare capacity of the download link are exhausted. Otherwise, the non-active constraints can be omitted and only the active constraints are considered [50].
The utility functions are substituted into the resource allocation model for analysis. The model for processing the tasks with low requirement on latency and high requirement on processing capacity is

∑_{p∈P(s)} x_ps = y_s, ∀s ∈ S
∑_{p∈P(s)} x_ps ≤ C^d_s, ∀s ∈ S
∑_{s∈S(p)} x_ps ≤ C^uL_p, ∀p ∈ P
x_ps ≥ 0, ∀s ∈ S, ∀p ∈ P

The model for processing the tasks with high requirement on latency and low requirement on processing capacity is

∑_{p∈P(r)} x_pr = y_r, ∀r ∈ R
∑_{p∈P(r)} x_pr ≤ C^d_r, ∀r ∈ R
∑_{r∈R(p)} x_pr ≤ C^uH_p, ∀p ∈ P
x_pr ≥ 0, ∀r ∈ R, ∀p ∈ P

The Optimal Resource Allocation for Customer s
We first ignore the download bandwidth constraint (i.e., the first inequality constraint) in resource allocation model (16) and obtain the corresponding Lagrangian function, which can be rewritten in a separable form (19). According to Equation (7), there is an optimal allocation (20). Substituting (20) into (19), we obtain (21). Let ∂L_1(x; λ, μ)/∂λ_s = 0; then we obtain the optimal price paid by customer s:

−ω_s/λ_s + 1 + ∑_{p∈P(s)} x_ps = 0,  λ*_s = ω_s / (1 + ∑_{p∈P(s)} x_ps).  (22)

Substituting the obtained result (22) into function (19) again and letting ∂L_1(x; μ)/∂x_ps = 0, we obtain the optimal price charged by provider p:

ω_s / (1 + ∑_{p∈P(s)} x_ps) − μ^L_p = 0,  μ^L*_p = ω_s / (1 + ∑_{p∈P(s)} x_ps).  (24)

Therefore, when the download bandwidth constraint of each customer is not considered, we obtain μ^L*_{p_1} = μ^L*_{p_2} = λ*_s for p_1, p_2 ∈ P(s), s ∈ S. In this case, the price paid by resource customer s per unit of bandwidth equals the price charged by provider p per unit of bandwidth.
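As a quick sanity check on the stationarity condition above, the following sketch verifies numerically that, for the log-utility form ω_s log(y_s + 1) used later in the simulation section, setting the price to λ_s = ω_s/(1 + y_s) makes y_s the maximizer of the customer's surplus ω_s log(1 + y) − λ_s y. The values of ω and y* below are illustrative choices, not taken from the paper.

```python
import math

omega = 25.0      # willingness-to-pay weight (as in U_1 = 25 log(y + 1))
y_star = 4.0      # a candidate optimal allocation
lam = omega / (1.0 + y_star)   # price from the stationarity condition (22)

def surplus(y):
    # customer surplus: utility minus payment at price lam
    return omega * math.log(1.0 + y) - lam * y

# grid-search the surplus over y in [0, 20] and locate its maximizer
grid = [i * 0.001 for i in range(0, 20001)]
best = max(grid, key=surplus)
print(round(best, 2))  # should sit at y_star = 4.0
```

Since the surplus is strictly concave in y, the grid maximizer lands (up to the grid resolution) exactly at the allocation that generated the price, which is the content of the first-order condition.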
We assume that the entire network is divided into τ non-connected regions, where each region is composed of a subset of all nodes. According to Theorem 2, the optimal prices charged by the resource providers in each region are equal, and they all equal the price paid by any customer s: μ^L*_p = μ_τ, λ*_s = λ_τ, μ_τ = λ_τ, where s ∈ S_τ. Here S_τ refers to the collection of all resource customers in the region. In particular, when τ = 1, then P_τ = P, μ_τ = μ^L_p, S_τ = S. Therefore, the transformed Lagrangian function can be expressed with μ^L*_p = μ_τ, ∀p ∈ P_τ. Letting dL_1(μ_τ)/dμ_τ = 0 yields (26), and substituting result (26) into (24) yields (27), where |S_τ| denotes the number of customers who need resources in sub-region τ.
Recall that the optimal resource allocation of customer s is also subject to its download bandwidth capacity C^d_s; thus the optimal resource allocation y*_s takes the following value

The Optimal Resource Allocation for Customer r
Following the analysis of resource allocation model (16), we obtain the Lagrangian function for resource allocation model (17) as follows.
The Lagrangian function can be rewritten in a separable form. According to Equation (8), there is an optimal allocation. Let ∂L_2(x; λ, μ)/∂λ_r = 0; then we obtain the optimal price paid by customer r.
Let ∂L_2(x; μ)/∂x_pr = 0; then we obtain the optimal price charged by edge node p. On the basis of the above discussion, it can be concluded that when the optimal resource allocation is achieved, the following condition holds: μ^H*_{p_1} = μ^H*_{p_2} = λ*_r, p_1, p_2 ∈ P(r), r ∈ R. We assume that the entire network is divided into γ non-connected regions, where each region corresponds to a subset of nodes. According to Theorem 2, the optimal prices charged by the resource providers in each region are equal, and they all equal the price paid by any customer r: μ^H*_p = μ_γ, λ*_r = λ_γ, μ_γ = λ_γ, where r ∈ R_γ, p ∈ P_γ. Here R_γ refers to the collection of all resource customers r in the region. In particular, when γ = 1, then R_γ = R, μ_γ = μ^H_p, P_γ = P. Therefore, the transformed Lagrangian function can be expressed with μ^H*_p = μ_γ, r ∈ R_γ. Letting dL_2(x; μ)/dμ_γ = 0 yields (38). Comparing (33) and (36), and substituting μ_γ into Equation (33), gives the optimal allocation. Recall that the optimal resource allocation of customer r is also subject to its download bandwidth capacity C^d_r; thus the optimal resource allocation y*_r takes the following value

Algorithm Introduction
To obtain the optimal resource allocation for edge computing without using a cloud center in a smart home environment, this section introduces a distributed algorithm that depends only on local information, which suits the smart home setting. Since the objective of the resource allocation model is concave but not strictly concave, the optimal resource allocation is usually not unique, so the proposed algorithm may oscillate. Thus, we introduce and apply the low-pass filtering method to eliminate the possible oscillation and improve the convergence speed [51].
We regard x̄_ps(t), x̄_pr(t) as the low-pass filtered estimates of x_ps(t), x_pr(t) and summarize the detailed algorithm steps as follows. 1.

3.
Each resource provider p (edge node) updates its prices μ^L_p(t), μ^H_p(t) as follows.
where β_s, β_r > 0 are small step sizes.

Basic Steps
According to the above-mentioned low-pass filtering approach, the augmented variables x̄_ps(t), x̄_pr(t) only eliminate the possible oscillation of the algorithm and improve the convergence speed; they do not change the final optimum of the algorithm. At the optimal resource allocation, we obtain x̄*_ps(t) = x*_ps(t) and x̄*_pr(t) = x*_pr(t). The iteration step size has a great influence on the convergence speed: it should be small enough to ensure convergence, but not so small that convergence becomes too slow, so a suitable iteration step size must be chosen. The algorithm can then be iterated to the optimal resource allocation within a reasonable number of iterations.
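As a small illustration of the filtering step described above, the sketch below applies the update x̄(t+1) = (1 − θ)x̄(t) + θ x(t) to a synthetic allocation sequence that oscillates in a 2-cycle; the filtered sequence keeps the same long-run level but with a much smaller amplitude. The input sequence and θ are made-up values, purely to show the damping effect.

```python
theta = 0.2
# a synthetic allocation that oscillates between 3 and 5 (mean 4)
raw = [3.0 if t % 2 == 0 else 5.0 for t in range(200)]

x_bar = raw[0]
filtered = []
for x in raw:
    # low-pass filtering update: blend the new sample into the estimate
    x_bar = (1.0 - theta) * x_bar + theta * x
    filtered.append(x_bar)

raw_amp = max(raw[-20:]) - min(raw[-20:])             # amplitude 2.0
filt_amp = max(filtered[-20:]) - min(filtered[-20:])  # much smaller
print(raw_amp, round(filt_amp, 3))
```

The filtered amplitude settles at 2θ/(2 − θ) of the raw swing, while the average stays at the same level, which is exactly why the filter removes oscillation without shifting the optimum.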
The basic steps of the algorithm iteration procedure are described as follows.
STEP 1: Initialize the variables and parameters. Initialize the iteration step sizes κ_ps, κ_pr, α_s, α_r, β_s, β_r and the bandwidth resource allocations x_ps(t), x_pr(t) from provider p to customers s and r at time t.
STEP 2: Calculate the prices paid by customers s and r at time t. Customers s and r calculate the offered bandwidth resources y_s(t), y_r(t) they receive based on the amounts of bandwidth resources provided by the providers at time t, and then calculate the prices they should pay to provider p, respectively.
STEP 3: Calculate the prices charged by the upload and download links of each node at time t. Provider p updates the amounts of resources it provides for customers s and r at time t, z^L_p(t) and z^H_p(t). At the same time, provider p calculates the price μ_p(t+1) charged by its upload link, and customers s and r update the prices ν_s(t+1), ν_r(t+1) charged by their download links.
STEP 4: Update the bandwidth allocation of provider p at time t. Provider p updates its resource allocations x_ps(t+1), x_pr(t+1) for customers s and r.
STEP 5: Check the stop condition. The optimization problem considered in this paper is a convex optimization problem, which means that the optimal solution exists and is also the global optimum [52]. Therefore, when the algorithm reaches an equilibrium, x_ps(t+1) = x_ps(t) and x_pr(t+1) = x_pr(t), the iteration can be stopped and the optimal bandwidth resource allocation is obtained.
In each iteration, the two types of resource customers s and r calculate the prices they should pay to providers and the prices charged by downloading links according to the amount of resources provided by providers. Each provider p calculates the price charged by its upload link and updates the resource allocation for its customers. Therefore, we can repeat the iterative process until the optimum is finally reached.

Simulation and Numerical Examples in a Smart Home Environment
In this section, we build a small-scale smart home environment as an edge computing resource allocation scenario without using a cloud center and study the performance of the resource allocation schemes. As shown in Figure 3, assume that there are 9 nodes (smart home applications) in the smart home environment, where the 3 nodes on the left serve as resource providers and provide bandwidth resources for the other 6 nodes on the right, which serve as customers. The resource providers and customers form a resource allocation market and freely exchange resources in this marketplace.

Processing the Tasks for Customer s
Assume that the utility functions of the 6 customers are: U_1(y_1) = 25 log(y_1 + 1), U_2(y_2) = 20 log(y_2 + 1), U_3(y_3) = 15 log(y_3 + 1), U_4(y_4) = 10 log(y_4 + 1), U_5(y_5) = 5 log(y_5 + 1), U_6(y_6) = 4 log(y_6 + 1). Assume that the upstream capacity of the providers for these customers is C^uL_p = (C^uL_1, C^uL_2, C^uL_3) = (15, 12, 10) Mb/s, the downstream capacity of the customers is C^d_s = (…, 14, 10, 8, 6, 4) Mb/s, and the step sizes are κ_ps = 0.5, α_s = 0.2, β_s = 0.2, θ_ps = 0.2. The initial rates of the resource allocation are set to x_ps = 1 Mb/s. The prices paid by the 6 resource customers are shown in Figure 4a; we find that the prices are finally driven to an equilibrium, that is, λ*_1 = λ*_2 = λ*_3 = λ*_4 = λ*_5 = λ*_6. The prices charged by the 3 providers for upload links are μ^L_1, μ^L_2, μ^L_3, and at the optimum μ^L*_1 = μ^L*_2 = μ^L*_3. After a certain number of iterations, the prices charged by the providers for upload links and the prices paid by the customers are equal, namely λ*_s = μ^L*_p, p ∈ P(s). Figure 4b shows the utility of the resource customers U_1, U_2, U_3, U_4, U_5, U_6, as well as the aggregated utility TU = U_1 + U_2 + U_3 + U_4 + U_5 + U_6. Obviously, the algorithm can drive the customers' utilities to the optimal values within a limited number of iterations.
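The equilibrium reported in Figure 4 can be cross-checked with a simple bisection on the common price. The sketch below assumes the download-link constraints are non-binding (the capacity values are partly garbled in the source), so only the aggregate upload capacity 15 + 12 + 10 = 37 Mb/s matters; the common price μ then solves ∑_s (ω_s/μ − 1) = 37.

```python
omega = [25.0, 20.0, 15.0, 10.0, 5.0, 4.0]   # weights from the six utilities
C_total = 15.0 + 12.0 + 10.0                 # aggregate upload capacity, Mb/s

def demand(mu):
    # each customer's demand at price mu: omega/mu - 1, floored at 0
    return sum(max(0.0, w / mu - 1.0) for w in omega)

# bisect the market-clearing price: demand is decreasing in mu
lo, hi = 1e-6, 100.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if demand(mid) > C_total:
        lo = mid
    else:
        hi = mid
mu = 0.5 * (lo + hi)

y = [max(0.0, w / mu - 1.0) for w in omega]
print(round(mu, 4), [round(v, 2) for v in y])
```

With these weights all demands stay positive, so the clearing price is μ = 79/43 ≈ 1.837, and the six rates sum to exactly 37 Mb/s; this is the unique aggregate allocation that Theorem 1 guarantees.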
The optimal resource allocation for each customer is also shown in Figure 5, where x_ps represents the amount of bandwidth resource provided by provider p to customer s and y_s is the amount of bandwidth resource received by each customer s. For example, in "Resources for customer 1" of Figure 5, x_11, x_21, x_31 respectively represent the amounts of resource provided by providers p = 1, p = 2, p = 3 to customer s = 1, and y_1 represents the total amount of bandwidth resource obtained by customer s = 1. We can find that the optimal resource allocation is achieved within a certain number of iterations.
Linear Interactive and General Optimizer (LINGO) is nonlinear programming software that can solve nonlinear programming problems as well as some systems of linear and nonlinear equations, and it has become one of the standard choices for solving optimization models. As shown in Table 2, we list the simulation results of the proposed algorithm alongside the optimal solution of the optimization problem obtained with LINGO. It is not difficult to find that the total amount of optimal bandwidth resource received by each resource customer is unique, but the optimal bandwidth resource obtained from each resource provider is not unique. This can be understood from the fact that a customer can obtain bandwidth resource from multiple providers and a provider can provide resource for multiple customers, which leads to the non-uniqueness of the optimal solution. Indeed, according to convex optimization theory [52], the objective function is concave with respect to the variables and the constraints are linear, which means that the resource allocation model is a convex optimization problem. However, the objective function is not strictly concave with respect to the variables x_ps; thus the optimal bandwidth allocation from each provider to each customer is not unique.
According to Equation (28), we can observe that the optimal bandwidth resource allocation obtained by customer s not only relies on the willingness to pay (WTP) of the customer, the number of customers in the sub-region and the upload bandwidth capacity C^uL_p of the provider, but is also subject to the download bandwidth capacity C^d_s of the customer. In other words, when the amount of bandwidth resources provided by the providers increases, a customer may not be able to pick up the entire amount of resources due to its own limited download bandwidth capacity. Therefore, we can determine whether a customer is able to pick up the entire amount of resources provided by the providers by assuming that this amount increases, i.e., by releasing the providers' upload bandwidth capacity. As shown in Figure 6, we assume that the upload bandwidth capacities of the providers in the diverse settings are C^uL_1, C^uL_2, C^uL_3. In Figure 7, we investigate the effect of different iteration step sizes on the convergence speed. The six sub-figures correspond to the resource allocation for the six customers. The purple, red and blue lines in each sub-figure represent the resource allocation with different step sizes.
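The capacity-release experiment can be sketched as follows. Since the exact download capacities are garbled in the source, the C_down values below are hypothetical placeholders; the point is only that once the released upload capacity exceeds the total download capacity, every customer is pinned at its own download cap and cannot pick up the extra supply.

```python
omega = [25.0, 20.0, 15.0, 10.0, 5.0, 4.0]
C_down = [16.0, 14.0, 10.0, 8.0, 6.0, 4.0]   # assumed values, not from the paper

def allocate(C_total):
    # customers can never absorb more than their total download capacity
    target = min(C_total, sum(C_down))
    if target == sum(C_down):
        return list(C_down)              # every customer is capped
    lo, hi = 1e-9, 1000.0
    for _ in range(200):                 # bisect the common price mu
        mu = 0.5 * (lo + hi)
        d = sum(min(c, max(0.0, w / mu - 1.0)) for w, c in zip(omega, C_down))
        if d > target:
            lo = mu
        else:
            hi = mu
    mu = 0.5 * (lo + hi)
    return [min(c, max(0.0, w / mu - 1.0)) for w, c in zip(omega, C_down)]

y_small = allocate(37.0)   # upload capacities as in the simulation: 15+12+10
y_large = allocate(80.0)   # providers release much more upload capacity
print([round(v, 2) for v in y_small])
print([round(v, 2) for v in y_large])
```

Under the small supply, the price-based shares are interior and sum to 37 Mb/s; under the large supply, each customer simply receives its download cap, illustrating the saturation behavior discussed around Figure 6.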
We can assess the effect of different iteration step sizes on the convergence speed by comparing how fast the lines of different colors converge in each sub-figure. For example, in the first sub-figure, "Resources for customer 1", the purple, red and blue lines represent the resource allocation in the cases κ_ps = α_s = β_s = 0.1, κ_ps = α_s = β_s = 0.3 and κ_ps = α_s = β_s = 0.5, respectively. Obviously, within the stable range, the convergence speed increases with the step size. Thus, it is essential to choose appropriate step sizes to achieve the optimal resource allocation within a reasonable number of iterations.
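The step-size trade-off can be seen even in a scalar fixed-point iteration x ← x + κ(target − x): a larger stable step size reaches the tolerance in fewer iterations. The target, tolerance and step sizes below are illustrative.

```python
def steps_to_converge(k, target=10.0, tol=1e-3):
    # iterate x <- x + k*(target - x) from 0 and count steps to tolerance
    x, n = 0.0, 0
    while abs(x - target) > tol and n < 100000:
        x += k * (target - x)
        n += 1
    return n

n_small = steps_to_converge(0.1)
n_large = steps_to_converge(0.5)
print(n_small, n_large)
```

The error shrinks by a factor (1 − κ) per step, so the step size 0.5 needs far fewer iterations than 0.1; for κ ≥ 2 this toy iteration would diverge, mirroring the stability caveat on step sizes.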

Processing the Tasks for Customer r
Assume that the utility functions of the customers for the tasks with high requirement on latency and low requirement on processing capacity are: U_1(y_1) = 100 − 50/y_1, U_2(y_2) = 100 − 40/y_2, U_3(y_3) = 100 − 30/y_3, U_4(y_4) = 50 − 50/y_4, U_5(y_5) = 50 − 40/y_5, U_6(y_6) = 50 − 30/y_6. Assume that the upstream capacity of the providers is C^uH_p = (C^uH_1, C^uH_2, C^uH_3) = (10, 8, 5) Mb/s, the downstream capacity of customer r is C^d_r = (20, 18, 15, 12, 10, 5) Mb/s, and the step sizes are κ_pr = 0.5, α_r = 0.2, β_r = 0.2, θ_pr = 0.2. The initial rates of the resource allocation are all set to x_pr = 0.5 Mb/s. The prices charged by the three providers for upload links are shown in Figure 8; we find that at the equilibrium they are all equal. The prices paid by the six customers are also shown in this figure, and at the equilibrium these prices are all equal, i.e., λ*_1 = λ*_2 = λ*_3 = λ*_4 = λ*_5 = λ*_6. Furthermore, after a reasonable number of iterations, the prices charged by the providers and the prices paid by the resource customers are all equal, namely λ*_r = μ^H*_p, p ∈ P(r).
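For this task type, the equilibrium can also be checked in closed form. The utility expressions are garbled in the source; the sketch below assumes the form U_r(y_r) = a_r − b_r/y_r, which matches the stated monotonicity (utility rises with the time limit and falls with the number of tasks). Equalizing marginal utilities b_r/y_r² = μ over the total upload capacity 10 + 8 + 5 = 23 Mb/s gives y_r = √(b_r/μ).

```python
import math

# assumed task weights b_r from the reconstructed utilities a_r - b_r/y_r
b = [50.0, 40.0, 30.0, 50.0, 40.0, 30.0]
C_total = 10.0 + 8.0 + 5.0            # aggregate upload capacity, Mb/s

# closed form: sum_r sqrt(b_r)/sqrt(mu) = C_total
sqrt_mu = sum(math.sqrt(v) for v in b) / C_total
mu = sqrt_mu ** 2
y = [math.sqrt(v) / sqrt_mu for v in b]

# marginal utilities should all equal the common price mu
marginals = [v / (yy ** 2) for v, yy in zip(b, y)]
print(round(mu, 3), [round(v, 2) for v in y])
```

Note that the additive constants a_r shift each customer's utility level but not the allocation, and the resulting rates all stay below the stated download capacities, so those constraints are indeed non-binding in this setting.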

Figure 9 shows the utility of each resource customer U_1, U_2, U_3, U_4, U_5, U_6, as well as the aggregated utility TU = U_1 + U_2 + U_3 + U_4 + U_5 + U_6. Obviously, the algorithm can drive the customers' utilities to the optimal values within a limited number of iterations. The utility of each resource customer r is related to the number of tasks and the time limits: the longer the time limit, the higher the utility of the resource customer, and the larger the number of tasks for a customer, the lower its utility.
The optimal resource allocation is also shown in Figure 10, where x_pr represents the amount of bandwidth resource provided by provider p to customer r and y_r is the amount of bandwidth resource received by each customer r. In "Resources for customer 1" of Figure 10, x_11, x_21, x_31 respectively represent the amounts of resource provided by the three providers to customer 1, and y_1 represents the total amount of resource obtained by customer 1.

Figure 9. Performance of the resource allocation algorithm for customer r: Utility.

In Table 3, we list the optimal resource allocation obtained from the algorithm and the optimal values obtained with the nonlinear programming software LINGO. It is not difficult to observe that the total amount of optimal bandwidth resources received by each resource customer is unique, which was justified in Theorem 1. When the constraints on the customers' download links are not active, as in this simple case, the optimal bandwidth allocation for each customer can also be derived from the algorithm, and it is equivalent to the optimal values from LINGO. According to Equation (40), we can observe that the optimal bandwidth resource allocation obtained by customer r not only depends on the amount of tasks M_r of customer r and the upload bandwidth capacity C^uH_p of provider p, but is also subject to the download bandwidth capacity C^d_r of customer r.
Therefore, we can discuss whether a customer is able to pick up the entire amount of resources provided by the providers by assuming that this amount increases, i.e., by gradually increasing the providers' upload bandwidth capacity. As shown in Figure 11, we assume that the upload bandwidth capacities of the providers in the diverse settings are C^uH_1, C^uH_2, C^uH_3. In each sub-figure of Figure 11, the red, blue and purple lines correspond to the resource allocation under different upload bandwidth capacities. Obviously, when the upload bandwidth capacity of each provider increases, the amount of resources received by each customer becomes constrained by its download bandwidth capacity. For instance, the red line in the sub-figure "Resources for customer 1" is restricted to within 20 Mb/s due to the customer's download bandwidth capacity.
In Figure 12, we demonstrate the total bandwidth resource allocation of each customer when different iteration step sizes are selected. For example, in the first sub-figure, "Resources for customer 1", the blue curve with parameters κ_pr = α_r = β_r = 0.5 converges significantly faster than the purple curve with parameters κ_pr = α_r = β_r = 0.1. Based on the previous analysis and discussion of the convergence speed, we conclude that the convergence speed depends mainly on parameters such as the step sizes rather than on other factors. We can therefore draw two conclusions: on the one hand, the iteration step sizes should be small enough to ensure convergence, but not so small that convergence becomes unnecessarily slow; on the other hand, the step sizes should not be so large that the algorithm fails to converge efficiently within a neighborhood of the optimum.

Conclusions
In recent years, the smart home market has developed rapidly, and smart home products in various forms have begun to reach millions of households, greatly improving the convenience of people's lives. However, smart home applications are generally intelligent only in isolation and rely on a cloud platform for remote control and collaborative work. Once attacked, the cloud platform poses a serious threat to the data privacy and security of smart homes. At the same time, as a decentralized computing framework, edge computing can provide networking, storage and computation services to IoT devices and reduce the bandwidth pressure on links. In a smart home environment, edge computing with a centralized cloud or data center may face issues of network latency and data security, so edge computing without using a cloud center has broad development potential as a new deployment choice for smart homes. This paper discusses the bandwidth resource allocation of edge computing without using a cloud center for processing tasks of two types, tasks with low requirement on latency and high requirement on processing capacity, and tasks with high requirement on latency and low requirement on processing capacity, in a smart home environment. According to the characteristics of the different task types, the utility maximization model of edge computing resource allocation is established and further interpreted from an economic point of view. We analyze the relationships between the prices charged by the providers (edge nodes) for upload links and the prices paid by the customers (users), and propose a gradient-based algorithm that achieves the optimal resource allocation for the different tasks. Finally, some numerical examples are given to illustrate the effectiveness and convergence of the proposed algorithm.