MFVL HCCA: A Modified Fast-Vegas-LIA Hybrid Congestion Control Algorithm for MPTCP Traffic Flows in Multihomed Smart Gas IoT Networks

Multihomed smart gas meters are Internet of Things (IoT) devices that transmit information wirelessly to a cloud or remote database via multiple network paths. The information is utilized by the smart gas grid for accurate load forecasting and several other important tasks. With the rapid growth in such smart IoT networks and data rates, reliable transport layer protocols with efficient congestion control algorithms are required. The small Transmission Control Protocol/Internet Protocol (TCP/IP) stacks designed for IoT devices still lack efficient congestion control schemes. Congestion control algorithms based on the multipath transmission control protocol (MPTCP) are an active research topic, and many coupled and uncoupled algorithms have been proposed. The default congestion control algorithm for MPTCP is coupled congestion control using the linked-increases algorithm (LIA). In battery-powered smart meters, packet retransmissions consume extra power, and low goodput results in poor system performance. In this study, we propose a modified Fast-Vegas-LIA hybrid congestion control algorithm (MFVL HCCA) for MPTCP by considering the requirements of a smart gas grid. Our novel algorithm operates in uncoupled congestion control mode as long as there is no shared bottleneck and switches to coupled congestion control mode otherwise. We present the details of our proposed model and compare the simulation results with the default coupled congestion control for MPTCP. Our proposed algorithm in uncoupled mode shows a decrease in packet loss of up to 50% and an increase in average goodput of up to 30%.


Introduction
The goal of the Internet of Things (IoT) is to connect different devices and sensors to the Internet. According to a prediction by the Statista research department, the number of active IoT-connected devices such as sensors, nodes, and gateways will reach 30.9 billion units worldwide by 2025 [1]. The smart city is a defining trend of the era, and the IoT plays a major role in the design and deployment of smart city infrastructure. The design of a smart city is divided into multiple domains, subsystems, and blocks, and implementing an efficient smart city design using the IoT is challenging. Some of the important domains of a smart city are electricity and natural gas management, water management, irrigation, waste material, parking space, and intelligent street lighting [2,3]. The datasets used in the design and simulation of such domains are publicly available on the Internet [4,5].

Smart Gas Networks
All over the world, natural gas is being used in homes and industries for heating and fueling purposes. Many research and development organizations are conducting their research in the area of IoT-based smart grids for natural gas [6,7]. Gas utilities gather

Selection of Appropriate Protocol in an IoT Network
The Internet runs on hundreds of protocols; many are supported by the IoT, and many are still under development. When designing an IoT system, the system requirements should be defined precisely, and then the right protocol should be chosen to address them. Currently, due to increases in memory size and processing power, small embedded devices and modules are capable of running large programs and algorithms. The development of new wireless communication standards for the IoT, such as IEEE 802.11ah (Wi-Fi HaLow), has also enabled devices to communicate at much higher data rates over long distances [15,16]. In any communication network with many devices, network congestion is the main issue causing poor data rates and packet loss [17]. In the IoT, the Transmission Control Protocol (TCP) has traditionally been avoided as a transport-layer protocol due to the extra overhead associated with it. However, recent trends and developments in IoT devices and networks favor TCP for congestion control and end-to-end reliable delivery of data [18].

Multipath Transmission Control Protocol (MPTCP) in Multihomed Devices
Modern IoT devices are equipped with multiple network interfaces and are capable of simultaneously connecting to multiple network links with different Internet Protocol (IP) addresses. These links can be used for concurrent transfer of data; if any link fails, the others can still be used for successful data delivery. The multipath transmission control protocol (MPTCP) is embedded in all such modern multihomed devices. Multihoming is defined as the ability of a host or device to simultaneously connect to multiple heterogeneous or homogeneous networks [19]. Multihomed devices with MPTCP support divide the application's data into multiple streams and then utilize multiple network paths simultaneously for data transmission/reception. Load balancing, congestion control, and dynamic switching are handled by the protocol in order to improve throughput and quality of service [20].

Smart Gas Grid Infrastructure
In Figure 1, we present the structure of an IoT-based smart natural gas grid that uses multihomed smart gas meters with dual low-power Wi-Fi HaLow interfaces capable of simultaneously connecting to two gateways of different Internet service providers (ISPs) for parallel transfer of data to a server. The information received from these smart meters, together with other data sources such as weather information, is then used by the smart grid for short-term load forecasting (STLF). Deep learning methods are used for accurate load forecasts. Gas distribution management makes decisions according to the forecasts for intelligent distribution of gas to different areas. In such IoT networks, where end-to-end reliable transmission of data from smart meters to a server is required, TCP is always preferred over the User Datagram Protocol (UDP), and MPTCP is used in multihomed devices.


Problem Analysis
With the growing number of devices in IoT networks, network congestion also increases. For smooth data transfer, a reliable transport layer protocol with an efficient congestion control algorithm is required. The Internet of Things uses small TCP/IP stacks with very limited capabilities, and many vulnerabilities and flaws have been found in such stacks. Technological developments are producing powerful small devices and large IoT networks with high data rates, making network congestion a critical issue. MPTCP is a protocol used for multipath data transfer. Many congestion control algorithms have been proposed for MPTCP, the performance evaluation of these algorithms is still under debate, and considerable research is ongoing to design new ones. The default coupled congestion control using the linked-increases algorithm (LIA) of MPTCP ensures fairness in the case of a shared bottleneck but otherwise suffers from low throughput due to its coupled architecture. In the case of a smart gas grid, the goal of a congestion control algorithm is to increase goodput for timely transmission of important data and to decrease packet retransmissions to save power. To fulfill this requirement, we have proposed a novel congestion control algorithm for MPTCP and compared our results with the default coupled congestion control algorithm of MPTCP.

Contribution
Our main contribution in this research is the design of a hybrid congestion control algorithm for MPTCP. The default MPTCP scheduler is used, which distributes packets among subflows by observing round trip time delays and congestion windows. As soon as space becomes available in the queue of a subflow, packets are injected into the pipe. The proposed modified Fast-Vegas-LIA hybrid congestion control algorithm (MFVL HCCA) is based on modified Fast TCP, modified TCP Vegas, and LIA congestion control algorithms. It uses a shared bottleneck detection method and works in uncoupled or coupled mode accordingly. We simulated our design in Network Simulator 2.34 (NS-2.34) and compared the results with the default coupled congestion control LIA of MPTCP.
The remainder of this paper is organized as follows: In Section 2, we give an overview of background and related research work; in Section 3, we describe the details of the proposed model; the results and discussions are presented in Section 4; and our conclusions are stated in Section 5.

Background and Related Work
Many protocols for the IoT are available and research is still going on to discover new protocols for this area. Researchers use different protocol stacks after carefully observing the requirements of an IoT system to fulfill its needs [21]. Some of the Application Layer protocols for IoT devices that require TCP at the transport layer for reliable communication are Extensible Messaging and Presence Protocol (XMPP), Message Queuing Telemetry Transport (MQTT) Protocol, and Advanced Message Queuing Protocol (AMQP). The MQTT Protocol is widely used for data transmission between devices and servers. However, any other suitable protocol can also be used [22].
For memory-constrained devices, some TCP/IP protocol stacks with limited functionality are available. These stacks are open source, and many embedded system developers and programmers from around the world are improving their code and functionality [23]. Millions of IoT devices use these stacks. According to a recent report by Forescout Research Labs, 33 vulnerabilities have been found in the uIP, FNET, picoTCP, and Nut/Net stacks. An attacker can exploit these flaws to take full control of a device, execute code remotely, and steal data [24]. Therefore, these tiny open-source protocol stacks are no longer trustworthy.
With the increase in processing speed, battery capacity, and memory size of IoT devices, more advanced algorithms and standard protocols are now required to fulfill the requirements of IoT systems. By 2025, billions of IoT devices will be connected to the Internet, sending tens of zettabytes of data [1]. Some of the most commonly used physical layer protocols for the IoT are 802.15.4, Bluetooth Low Energy, and ZigBee Smart Energy. A new low-power, long-range Wi-Fi standard, 802.11ah, named "Wi-Fi HaLow", has been developed for IoT devices [15]. It has a throughput range from hundreds of kilobits per second (kbps) to tens of megabits per second (Mbps) [25]. Table 1 shows the characteristics of different physical layer standards for the IoT [26]. IoT devices with multiple interfaces are multihomed devices. These devices can connect to different networks for concurrent transfer of data to a host/server [27]. As the number of devices and the data rate increase, network congestion also increases. Therefore, a transport layer protocol with an efficient congestion control algorithm is also required for IoT networks; the small TCP/IP stacks used in IoT devices lack such algorithms. In multihomed devices, MPTCP is used at the transport layer. MPTCP transmits data over multiple paths by using subflows, with the aim of increasing throughput and robustness [28]. One common application of multipath communication in mobile phones is to use Wi-Fi and 3G paths simultaneously to transfer data in parallel, so that if one network path fails, another can be used for data transfer [29]. However, an IoT device can also have multiple interfaces of the same standard, for example, Wi-Fi, to connect to multiple Wi-Fi networks of different ISPs for concurrent transfer of data.
The MPTCP also performs load balancing among different paths. In MPTCP, separate congestion windows are maintained on each path; however, in order to prevent harm to fairness, especially in the case of a shared bottleneck link, most algorithms avoid executing congestion control independently on each path. Many congestion control algorithms have been proposed to couple together all the subflows of a single multipath flow in order to achieve fairness and efficiency [30]. A congestion control algorithm can be delay based, loss based, or hybrid; most congestion control algorithms for MPTCP are loss based. To ensure fairness and better performance over the Internet, three design goals for MPTCP congestion control have been defined: a multipath flow should perform at least as well as a single path TCP flow; if more than one subflow shares a single bottleneck link, the multipath subflows should not harm other TCP flows; and a multipath flow should utilize the less congested path more than the congested one [31].
Several protocol designs from various authors are available for multipath data transfer. A protocol pTCP was proposed to transfer data concurrently through multiple paths [32]. In [33], the authors considered the wireless link as bottleneck to ensure protocol fairness and proposed a method to utilize the total bandwidth available on multiple paths of a multihomed mobile host. In [34], the authors proposed the design of a multipath TCP and discussed, in detail, the algorithm for detecting shared congestion at the bottleneck link by using fast retransmit events of different paths. In [35], the authors proposed a concurrent multipath transfer method (CMT) by using Stream Control Transmission Protocol (SCTP). The CMT-SCTP is the improved version of SCTP for implementing multipath transfer in multihomed hosts. All these schemes use uncoupled congestion control by running separate congestion control on each subflow. In addition, some of the protocols show a high degree of unfairness, in the case of shared bottleneck.
In order to solve the issue of unfairness when multiple subflows of an MPTCP connection share the same bottleneck link, coupled congestion control algorithms have been proposed. In these algorithms, the congestion window of each subflow is updated with the total congestion window of all subflows in view, with the aim of ensuring bottleneck fairness and overall fairness in the network. In the case of a shared bottleneck link, these algorithms work efficiently, but a common bottleneck link shared by multiple flows is very rare; in its absence, these algorithms only underutilize the available bandwidth. Several loss-based coupled congestion control schemes such as LIA [31], balanced linked adaptation (BALIA) [36], and the opportunistic linked-increases algorithm (OLIA) [37] have been proposed. All these algorithms only control the increase of the congestion window in the congestion avoidance phase; the remaining phases are like TCP Reno. In loss-based algorithms, the congestion window is adjusted on packet loss detection, so packet retransmissions are high, and packet losses give only a rough estimate of network congestion. Another loss-based algorithm, dynamic LIA (D-LIA) [38], decreases the congestion window less aggressively on loss, but this behavior is inefficient and only adds more packet losses.
A delay-based congestion control algorithm, weighted Vegas (WVegas) [30], based on TCP Vegas, has been proposed. The WVegas algorithm performs fine-grained load balancing by using packet queuing delay as the congestion signal. It shows low packet losses and better intra-protocol fairness; its authors prioritized fairness and load balancing over throughput. To resolve the underutilization of bandwidth by WVegas in large bandwidth-delay product networks, another delay-based algorithm, MPFast [39], has been proposed. MPFast uses Fast TCP as the congestion control algorithm for multipath transfer, but the aggressive behavior of Fast TCP causes more packet losses in congested networks. In [28], the authors proposed machine learning methods for MPTCP path management to select high-quality paths.
In [20], a new MPTCP scheme with an application distributor for low-memory multihomed IoT devices was proposed, aiming to solve buffer blocking issues in multihoming. In [40], the authors proposed an energy-efficient congestion control scheme, emReno, based on multipath TCP, which shifts traffic from one path to a lower-energy-cost path. Table 2 compares existing MPTCP congestion control algorithms with the proposed MFVL HCCA.

Proposed Modified Fast-Vegas-LIA Hybrid Congestion Control Algorithm (MFVL HCCA) Design
In this section, we present our proposed algorithm design. The MFVL algorithm is a hybrid congestion control algorithm that uses the following algorithms as submodules:
• The modified TCP Vegas congestion control algorithm (MVegas);
• The modified Fast TCP congestion control algorithm;
• The shared bottleneck detection method with the coupled congestion control algorithm LIA.
After discussing the details of these submodules, we explain the functionality of our main algorithm.

Modified TCP Vegas Congestion Control Algorithm (MVegas)
Brakmo and Peterson proposed a delay-based algorithm called TCP Vegas for reliable transfer of data. According to the authors, it can achieve much higher throughput than the loss-based TCP Reno, with fewer packet retransmissions required [41]. However, a drawback of TCP Vegas is its inability to receive a fair bandwidth share when competing with TCP Reno and some other TCP variants; TCP Vegas always consumes less bandwidth than its competitors [42]. In [43], the authors proposed modifications to TCP Vegas to overcome this problem.
Hence, some modifications are required in the original TCP Vegas to overcome its loss of bandwidth share. The TCP Vegas adjusts its congestion window size by calculating the difference between the expected and actual throughput. A greater difference results from an increased round trip time (RTT) delay and indicates that the network is congested. The TCP Vegas starts with slow start and then switches to congestion avoidance mode.

Slow Start
Vegas uses a threshold γ during slow start; the default value of γ is 1. While the difference (expected throughput − actual throughput) is less than γ, the congestion window (cwnd) is doubled every other RTT. Hence, the cwnd in slow start grows exponentially, but at a slower rate than in TCP Reno. When the difference becomes larger than γ, or the value of cwnd reaches the threshold value for the congestion window in slow start (ssthresh), the congestion avoidance phase starts. Upon leaving the slow start phase, the cwnd is decreased by 1/8 of its current value in order to prevent network congestion.
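The slow-start behavior described above can be sketched per RTT as follows; the function name and the per-RTT framing are ours for illustration, not the NS-2 per-ACK implementation:

```python
def vegas_slow_start_rtt(cwnd, ssthresh, diff, gamma=1.0, even_rtt=True):
    """One RTT of TCP Vegas slow start (illustrative sketch).

    diff: expected minus actual throughput, in packets. Vegas grows
    the window only every other RTT, so growth is exponential but at
    half Reno's rate. Returns (new_cwnd, entered_congestion_avoidance).
    """
    if diff > gamma or cwnd >= ssthresh:
        # Exit slow start and back the window off by 1/8 of its
        # current value to avoid overshooting the path capacity.
        return cwnd - cwnd // 8, True
    if even_rtt:
        cwnd *= 2  # doubling RTTs alternate with measurement RTTs
    return cwnd, False
```

With cwnd = 4 and a small diff, one growth RTT doubles the window to 8; once diff exceeds γ, the window is cut by 1/8 and congestion avoidance begins.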

Congestion Avoidance
During the congestion avoidance phase, two threshold constants, α and β, are used. The TCP Vegas congestion control algorithm tries to keep the number of packets in the network queues between a minimum and a maximum value in order to prevent network congestion and avoid packet loss; the minimum is represented by α and the maximum by β. In NS-2.34, the default value of α is 1 and that of β is 3. The cwnd is updated according to Equation (1), and Equation (2) gives the difference between the expected and actual throughput, where RTT is the observed round trip time and baseRTT is the minimum observed RTT.
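Equations (1) and (2) did not survive extraction; for reference, the standard TCP Vegas forms they follow are, in the notation above:

```latex
% Eq. (2): estimated backlog in the network queues (in packets)
\mathrm{diff} = \left(\mathrm{Expected} - \mathrm{Actual}\right)\cdot \mathrm{baseRTT},
\qquad
\mathrm{Expected} = \frac{cwnd}{\mathrm{baseRTT}}, \quad
\mathrm{Actual} = \frac{cwnd}{RTT}

% Eq. (1): window update, one step per RTT
cwnd \leftarrow
\begin{cases}
cwnd + 1, & \mathrm{diff} < \alpha\\
cwnd - 1, & \mathrm{diff} > \beta\\
cwnd, & \alpha \le \mathrm{diff} \le \beta
\end{cases}
```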
In our proposed design, we have made modifications only to the congestion avoidance phase. Let diff_t and diff_(t−1) represent the differences at the current time t and the previous time (t − 1), and let φ represent the ratio of these differences. If diff_t is less than diff_(t−1), network congestion is gradually declining and φ holds a value greater than 1; if diff_t is greater than diff_(t−1), congestion is gradually increasing and φ has a value less than 1. In order to make Vegas slightly more aggressive, so that it consumes the shared bandwidth efficiently, we introduce two increasing factors whose current values at time t are represented by α_t and β_t. On reset (start), they are set to the constant values 1 and 2, respectively. Their values are increased or decreased dynamically at run time by sensing the current and previous difference between throughputs; the updated values are represented by α_(t+1) and β_(t+1). Our proposed model tries to keep the number of packets in the network queues between (α + α_t) and (β + β_t) by adjusting the cwnd accordingly. If diff_t is less than (α + α_t), there is room for more packets to be injected into the network, and the cwnd is increased according to Equation (4); the value of α_(t+1) is then calculated using Equation (5). A value of φ > 1 indicates a decrease in network congestion over time, so the proposed model increases α_t and β_t dynamically by adding φ to their current values. The updated α_(t+1) can have a maximum value of (k · α), where k is a constant whose value lies between 1 and 2. With k = 1, α_(t+1) can reach a maximum of 1 (since the default value of α is 1), and with k = 2, it can reach a maximum of 2.
In the proposed design, k was set to 2 after careful evaluation through different experiments. We observed that increasing it further increases the maximum value of α_(t+1), resulting in a more aggressive increase in the congestion window and thus more packet drops. Decreasing k below 1 decreases the maximum value of α_(t+1), which results in underutilization of the available bandwidth due to less aggressive growth of the congestion window.
The value of β_(t+1) is calculated using Equation (7), where β_(t+1) can have a maximum value equal to β. Any value greater than this maximum would result in higher packet loss due to a more aggressive increase in the congestion window.
If diff_t > (β + β_t), the network is currently congested; hence, the congestion window is decreased according to Equation (9), and β_(t+1) is calculated according to Equation (10). β_(t+1) can only be decreased to a minimum value of (α + 1); reducing it further would make it equal to or less than α, which would result in underutilization of the available bandwidth and network queues. To avoid severe degradation in the utilization of the available bandwidth, we decided not to decrease α_t dynamically in the case of network congestion, since it determines the minimum number of packets to be kept in the network queues.
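Putting the congestion-avoidance rules together, a minimal sketch follows. Since Equations (4)-(10) were lost in extraction, the ±1 window steps and the exact φ-based adjustments of α_t and β_t below are illustrative assumptions that only respect the stated bounds (maximum k·α and β, minimum α + 1):

```python
def mvegas_ca_update(cwnd, diff_t, diff_prev, alpha_t, beta_t,
                     alpha=1.0, beta=3.0, k=2.0):
    """One congestion-avoidance step of the modified Vegas (sketch).

    diff_t/diff_prev: current and previous throughput differences;
    alpha_t/beta_t: the dynamic increasing factors. Returns the
    updated (cwnd, alpha_t, beta_t).
    """
    # phi > 1 when congestion is easing (diff_t < diff_prev)
    phi = diff_prev / diff_t if diff_t > 0 else 1.0
    if diff_t < alpha + alpha_t:
        cwnd += 1                          # room in the queues: grow
        if phi > 1:                        # congestion easing: raise targets
            alpha_t = min(alpha_t + phi, k * alpha)   # capped at k*alpha
            beta_t = min(beta_t + phi, beta)          # capped at beta
    elif diff_t > beta + beta_t:
        cwnd -= 1                          # queues filling: back off
        beta_t = max(beta_t - phi, alpha + 1)         # floor at alpha + 1
    return cwnd, alpha_t, beta_t
```

For example, with easing congestion (diff falling from 2.0 to 1.0) the window grows and both factors are raised to their caps; with diff rising above (β + β_t) the window shrinks and β_t is pulled toward its floor.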

Loss Recovery
When packet loss is detected due to a timeout, ssthresh is set to half of the cwnd, the cwnd is reset to 2, and the slow start phase begins again. When three duplicate ACKs are received, fast retransmission and fast recovery are executed. After a fast retransmit, the cwnd is set to 3/4 of its current value, and the congestion avoidance phase is executed again.
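The loss-recovery rules reduce to a small decision, sketched below (the function name is ours):

```python
def mvegas_loss_recovery(cwnd, timeout):
    """Loss handling in the modified Vegas, as described above.

    timeout=True  -> RTO expiry: ssthresh = cwnd/2, cwnd reset to 2,
                     and the sender re-enters slow start.
    timeout=False -> three duplicate ACKs: fast retransmit, cwnd cut
                     to 3/4, and congestion avoidance resumes.
    Returns (new_cwnd, new_ssthresh_or_None, restart_slow_start).
    """
    if timeout:
        return 2, cwnd // 2, True
    return (3 * cwnd) // 4, None, False
```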

Modified Fast TCP Congestion Control Algorithm
Another congestion control algorithm, Fast TCP [44], uses queuing delay along with packet loss to detect network congestion. Fast TCP updates its window according to Equation (12), under which the flow moves toward equilibrium rapidly in large steps when far from it and slows down, taking smaller steps, near equilibrium.
Fast TCP adjusts its window in three phases: slow start (SS), multiplicative increase (MI), and exponential convergence (EC). It uses the same slow start algorithm as TCP Reno, with one slight variation: a threshold gamma is used, and Fast TCP exits slow start when the number of packets present in the network queue exceeds gamma. To reach equilibrium, Fast TCP uses MI; as a protection measure, the window is adjusted only on alternate RTTs in both the MI and EC phases. When a packet loss is detected, the window is reduced to half and the loss recovery phase starts.
In the EC phase, the window is increased exponentially. The new window size is calculated using Equation (12), where γ ∈ [0, 1], baseRTT is the current minimum RTT, qdelay represents the average end-to-end queuing delay, and α(w, qdelay) is a constant that represents the number of packets each flow tries to maintain in the network buffer(s) at equilibrium [45]. In our algorithm, we modified this behavior of Fast TCP by introducing a controlling factor σ, a constant whose value can be adjusted from 0 to 1: towards 0, Fast TCP behaves less aggressively, and towards 1, it moves towards its natural aggressive behavior. In our design, we used σ = 0.5, after testing different values in different simulation scenarios. Increasing this controlling factor achieves more throughput, but unfairness and packet drops also increase.
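For reference, the Fast TCP window update denoted by Equation (12) has the standard form below. The σ-modified variant shown after it is our reconstruction, not the authors' exact equation: we place σ on the equilibrium term α(w, qdelay), the placement consistent with σ → 1 recovering Fast TCP's natural behavior and σ → 0 making it less aggressive.

```latex
% Standard Fast TCP window update (Eq. (12))
w \leftarrow \min\!\left\{\, 2w,\;
  (1-\gamma)\,w + \gamma\!\left(\frac{\mathrm{baseRTT}}{RTT}\,w
  + \alpha(w, \mathrm{qdelay})\right) \right\}

% Reconstructed modified update with the controlling factor \sigma \in (0, 1]
w \leftarrow \min\!\left\{\, 2w,\;
  (1-\gamma)\,w + \gamma\!\left(\frac{\mathrm{baseRTT}}{RTT}\,w
  + \sigma\,\alpha(w, \mathrm{qdelay})\right) \right\}
```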

Shared Bottleneck Detection and Coupled Congestion Control
In [34], the authors proposed a shared congestion detection method using fast retransmits. If two subflows or paths share the same bottleneck, they are said to be "correlated"; otherwise, they are "independent". We have assumed that the latency of the paths is equal. Every time a fast retransmit event happens on any subflow, the time of the event is recorded in a per-subflow list. In this way, two lists, A and B, with timestamps (a1, a2, . . . , am) and (b1, b2, . . . , bn) from subflows S1 and S2 are obtained. Timestamps from A and B are then compared such that if |ai − bj| < interval, the pair (ai, bj) is said to be a match. A match shows that packets were lost and a fast retransmit event took place on both paths at about the same time; therefore, both paths probably share the same congested link. The number of matched pairs (ai, bj) is denoted match(A, B), and min(m, n) gives the smaller number of recorded fast retransmit timestamps in lists A and B. Two subflows are considered to share the same bottleneck link if match(A, B)/min(m, n) is greater than a threshold δ.
After performing several experiments, the authors in [34] showed that, using interval = 200 ms and δ = 0.5, shared congestion could be detected successfully. We used the same method to detect a shared bottleneck. If a shared bottleneck is detected, our proposed model switches to coupled congestion control mode and selects the default coupled congestion control LIA of MPTCP [31] for both subflows.
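A minimal sketch of this timestamp-matching test, with interval = 200 ms and δ = 0.5 as in [34]; the greedy one-to-one matching over sorted lists is our assumption, since the paper does not specify how matched pairs are maximized:

```python
def shared_bottleneck(ts_a, ts_b, interval=0.2, delta=0.5):
    """Shared-bottleneck test from the fast-retransmit timestamps of
    two subflows (sketch of the method described above).

    ts_a, ts_b: sorted lists of fast-retransmit times, in seconds.
    Two timestamps match when they differ by less than `interval`;
    each timestamp is used in at most one matched pair. A shared
    bottleneck is declared when match(A, B) / min(m, n) > delta.
    """
    if not ts_a or not ts_b:
        return False
    matches, i, j = 0, 0, 0
    while i < len(ts_a) and j < len(ts_b):
        if abs(ts_a[i] - ts_b[j]) < interval:
            matches += 1           # losses coincided on both paths
            i += 1
            j += 1
        elif ts_a[i] < ts_b[j]:
            i += 1
        else:
            j += 1
    return matches / min(len(ts_a), len(ts_b)) > delta
```

For example, retransmit times [1.0, 2.0, 3.0] and [1.05, 2.1, 3.5] produce two matches out of three, 2/3 > 0.5, so a shared bottleneck is declared.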
LIA modifies only the increase of the cwnd in the congestion avoidance phase (the remaining phases behave like a standard TCP algorithm), using the following steps for each subflow i:
• cwnd_count_i is the number of segments acked since the last increment of cwnd_i;
• when cwnd_count_i exceeds max(alpha_scale · cwnd_total / alpha, cwnd_i), cwnd_i is incremented by 1 and cwnd_count_i is reset to 0;
• alpha = alpha_scale · cwnd_total · max_i(cwnd_i / rtt_i²) / (Σ_i cwnd_i / rtt_i)², where alpha_scale is a precision parameter.
According to [31], setting alpha_scale to 512 works well in most cases. We used the same algorithm in our model without any modifications.
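The LIA increase can be sketched numerically as follows; this follows the alpha formula of RFC 6356 (the LIA specification cited as [31]), and the function names are ours:

```python
def lia_alpha(cwnds, rtts, alpha_scale=512):
    """LIA aggressiveness parameter (RFC 6356):
    alpha = alpha_scale * cwnd_total * max_i(cwnd_i / rtt_i^2)
            / (sum_i cwnd_i / rtt_i)^2
    cwnds: per-subflow windows (segments); rtts: per-subflow RTTs (s)."""
    total = sum(cwnds)
    best = max(c / (r * r) for c, r in zip(cwnds, rtts))
    denom = sum(c / r for c, r in zip(cwnds, rtts)) ** 2
    return alpha_scale * total * best / denom

def lia_increase(cwnd_i, cwnds, rtts, alpha_scale=512):
    """Per-ACK window increase (in segments) of subflow i:
    min(alpha / (alpha_scale * cwnd_total), 1 / cwnd_i)."""
    alpha = lia_alpha(cwnds, rtts, alpha_scale)
    return min(alpha / (alpha_scale * sum(cwnds)), 1.0 / cwnd_i)
```

As a sanity check, with a single subflow the increase reduces to 1/cwnd per ACK (standard Reno); with two identical subflows sharing a bottleneck, each grows at half that rate, so the aggregate matches one TCP flow.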

Main Functionality of MFVL HCCA and Modes of Operation
Here, we discuss the main flow of our algorithm. Figure 2a shows the IoT protocol stack with MPTCP as the transport layer protocol. The system architecture is shown in Figure 2b. A smart meter with two Wi-Fi HaLow interfaces is connected to a remote database server via multipath connectivity. The multipath flow is divided into two subflows, and two gateways from different ISPs are used for multipath connectivity. Our proposed congestion control algorithm maintains separate congestion windows for each subflow. The MFVL HCCA operates in the following two modes:
• Uncoupled mode;
• Coupled mode.
On reset, the MFVL algorithm starts in uncoupled mode and runs the modified TCP Vegas congestion control algorithm for each subflow. On every packet loss, packet retransmission takes place. For a duration of T (s), the number of retransmitted packets belonging to each subflow is counted, and the timestamps are also saved in an array. After time T, shared bottleneck detection is performed using the timestamps of the packet retransmissions of each subflow. A bottleneck link shared by two subflows belonging to the same multipath flow is very rare, but if a shared bottleneck is detected, the MFVL algorithm switches to coupled congestion control mode, and LIA is selected as the coupled congestion control algorithm for both subflows.
If no shared bottleneck is detected, the MFVL algorithm keeps working in uncoupled congestion control mode. The numbers of retransmitted packets of Subflow 1 and Subflow 2 are compared: the modified Fast algorithm is selected for the subflow with fewer packet retransmissions, and the modified Vegas is selected for the subflow with more. After time T, the shared bottleneck detection and selection process is repeated by again observing packet retransmissions.
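The per-cycle controller decision described above can be sketched as follows; the function name and the injected `detect_shared` callback are ours:

```python
def mfvl_select(retx_ts_1, retx_ts_2, detect_shared):
    """One cycle of the MFVL HCCA mode/algorithm selection (sketch).

    retx_ts_1, retx_ts_2: retransmission timestamps recorded for each
    subflow over the last T seconds. detect_shared: a shared-bottleneck
    test over the two timestamp lists (e.g. the fast-retransmit
    matching method). Returns the congestion control algorithm
    assigned to (subflow 1, subflow 2).
    """
    if detect_shared(retx_ts_1, retx_ts_2):
        return ("LIA", "LIA")              # coupled mode
    # Uncoupled mode: the cleaner path gets the more aggressive
    # modified Fast TCP; the lossier path gets the modified Vegas.
    if len(retx_ts_1) <= len(retx_ts_2):
        return ("MFast", "MVegas")
    return ("MVegas", "MFast")
```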

Algorithm Limitations
The shared bottleneck detection method used in the proposed algorithm works well when the two subflows (paths) experience the same network latency, but in the case of different network delays, time synchronization is an issue. Hence, further improvements in the design are required. In the future, we are planning to improve the efficiency of the shared bottleneck detection method by reducing the time required to detect a shared congested link and to handle multiple subflows experiencing different network delays.

Algorithm Explanation
The notations used in the proposed algorithm are given in Table 3 along with their definitions. For bottleneck detection, the number of matched pairs is divided by the minimum number of array elements. If the result is greater than bn_thresh (the bottleneck detection threshold), there is a high probability that a shared bottleneck is present; otherwise, it is considered absent. In the case of a shared bottleneck, the default coupled congestion control LIA is used for both subflows (paths).
If a shared bottleneck is not detected, the numbers of packet retransmissions of the two subflows are compared. The modified TCP Vegas congestion control algorithm is selected for the subflow with more packet retransmissions, and the modified Fast TCP congestion control algorithm is selected for the subflow with fewer packet retransmissions. The algorithm then jumps to the label "Again", and from t = 0 to t = T s the whole procedure is repeated for the next cycle. The flow chart of the proposed algorithm is shown in Figure 2c.
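The matched-pairs test described above can be sketched as follows. The pairing rule (two retransmission timestamps match when they fall within `tol` seconds of each other) and the default values of `tol` and `bn_thresh` are assumptions for illustration; the paper's actual values are given in its tables.

```python
def shared_bottleneck(ts1, ts2, tol=0.05, bn_thresh=0.5):
    """Declare a shared bottleneck when
    matched_pairs / min(len(ts1), len(ts2)) > bn_thresh."""
    if not ts1 or not ts2:
        return False
    a, b = sorted(ts1), sorted(ts2)
    matched = 0
    j = 0
    for t in a:
        # advance through b until a timestamp could still match t
        while j < len(b) and b[j] < t - tol:
            j += 1
        if j < len(b) and abs(b[j] - t) <= tol:
            matched += 1
            j += 1  # each element of b matches at most once
    return matched / min(len(a), len(b)) > bn_thresh
```
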

Simulations, Results, and Discussions
All the simulations were done using NS-2.34 with a wired-cum-wireless network topology. For the MPTCP simulation, the available MPTCP module [46] with coupled congestion control LIA (MPTCP-CC-LIA) for NS2 was used. Modifications were made to the original Fast TCP and TCP Vegas codes in NS-2.34, and these modified codes were then used to implement the proposed MFVL HCCA for MPTCP. Different experiments were performed to compare the performance of our proposed model with MPTCP-CC-LIA. The final results were plotted using GNUPLOT. Figure 3 shows the network connections. Node "S" is a multihomed source node with two interfaces, "S_0" and "S_1", for wireless connections to gateways. A multipath flow is divided into two subflows: packets transmitted through S_0 represent Subflow 1, and packets transmitted through S_1 represent Subflow 2. The selected parameters for the wireless connections are based on the 802.11 standard of NS2. The MPTCP agent with a File Transfer Protocol (FTP) traffic generator is connected to the source node. The source node connects wirelessly to two different gateway nodes, GW1 and GW2, via interfaces S_0 and S_1. The gateways are further connected to routers through wired links. Node "D" is the destination multihomed node with two wireless interfaces, "D_0" and "D_1", connected wirelessly to gateway nodes "GW3" and "GW4", respectively. The connection between R1 and R2 is the bottleneck link for Path 1, with a data rate of 3 Mbps, 50 ms delay, and a queue limit of 100 packets. R3-R4 is the bottleneck link for Path 2, with a data rate of 1 Mbps, 50 ms delay, and a queue limit of 100 packets. The remaining wired connections have a data rate of 10 Mbps and 10 ms delay. Nodes N1 and N3 are used to inject background traffic to create path congestion; constant bit rate (CBR) traffic generators with UDP agents are used. The packet size of CBR and TCP is set to 1000 bytes. The maximum window size for each interface is set to 100 packets.
Null agents are attached to nodes N2 and N4, and the UDP agents are connected to these null agents. GW1-R1-R2-GW3 represents Path 1; GW2-R3-R4-GW4 represents Path 2.
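For reference, the link parameters stated above can be collected in one place. The data structure below is purely illustrative (it is not part of the simulation scripts); it also makes explicit that each path's end-to-end capacity is bounded by its slowest link.

```python
# Link parameters as stated in the text (topology of Figure 3).
LINKS = {
    "R1-R2": {"rate_mbps": 3,  "delay_ms": 50, "queue_pkts": 100},   # Path 1 bottleneck
    "R3-R4": {"rate_mbps": 1,  "delay_ms": 50, "queue_pkts": 100},   # Path 2 bottleneck
    "other": {"rate_mbps": 10, "delay_ms": 10, "queue_pkts": None},  # remaining wired links
}

def path_capacity(path_links):
    """A path's capacity is limited by its slowest (bottleneck) link."""
    return min(LINKS[l]["rate_mbps"] for l in path_links)
```
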
In the presented simulation scenarios, no shared bottleneck path is used because we are more interested in analyzing the results of our proposed algorithm in uncoupled mode, which uses the modified Vegas and modified Fast algorithms. In coupled mode, the algorithm works according to MPTCP LIA; therefore, the behavior is the same as that of the default coupled congestion control of MPTCP. To implement our proposed design, the required modifications were made in ns-default.tcl and other NS2 source files, and NS2 was recompiled using "make". Various experiments were run using the same simulation topology. The simulation time, represented by "T", was set to 50 s. The CBR data rate was initially set to 1 Mbps, and the same CBR start-stop pattern was used for all experiments; the pattern is shown in Figure 4 and the throughput in Figure 5. We wrote different awk scripts to calculate the average goodput, the average packet drop rate, and the goodput and packet drop versus CBR data rate. Simulation parameters are given in Table 4. The default values used for the modified Vegas and modified Fast algorithms are shown in Tables 5 and 6, respectively.
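The kind of post-processing the awk scripts perform can be sketched as follows. The event-tuple format here is an assumption for illustration, not the actual NS2 trace format, which the real scripts parse field by field.

```python
def average_goodput_kbps(events, duration_s, pkt_size_bytes=1000):
    """Average goodput in kbps over the run, from a list of
    (time, kind) tuples where kind is 'recv' or 'drop'."""
    received = sum(1 for _, kind in events if kind == "recv")
    return received * pkt_size_bytes * 8 / duration_s / 1000.0

def drop_rate(events):
    """Fraction of packet events that were drops."""
    drops = sum(1 for _, kind in events if kind == "drop")
    total = len(events)
    return drops / total if total else 0.0
```
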

Setup One
In the first setup, we implemented our multipath uncoupled congestion control algorithm as follows: (a) Using modified TCP Vegas (MVegas) congestion control on both subflows; (b) Using modified Fast TCP (MFast) congestion control on both subflows.
We compared their performance with MPTCP-CC-LIA. The results in Figure 6 show the average goodput graphs of three different multipath flows; the goodput of a single multipath flow is the aggregate goodput of its individual subflows. The multipath flow with MFast (MPTCP-MFast) achieves better average goodput than MPTCP-MVegas and MPTCP-CC-LIA. The CBR traffic generator starts generating background traffic at t = 5 s and stops at t = 15 s; it starts again at t = 35 s and stops at t = 45 s. The CBR traffic pattern for both paths is the same. In the simulation, the FTP traffic for both paths starts at t = 1.5 s and stops at t = 50 s. When the FTP traffic starts, MVegas and MFast keep increasing their congestion windows until packet drops are observed. The more congested path (Path 2) experiences more packet drops when CBR is started in the background.

The packet drop starts after t = 5 s and rises sharply. Figure 7 shows the average packet drop rate corresponding to each multipath flow; the packet drop rate of a single multipath flow is the aggregate packet drop rate of its individual subflows. On detecting packet loss, the congestion window of the affected subflow is decreased, and therefore the aggregate goodput also decreases. The average packet drop rate of MPTCP-MFast is also higher as compared with MPTCP-MVegas and MPTCP-CC-LIA.
The congestion window plots of the multipath flows MPTCP-MVegas and MPTCP-MFast are shown in Figures 8 and 9, respectively; the congestion window graph belonging to each subflow is shown. Path 1 is less congested than Path 2; Subflow 1 utilizes Path 1 and Subflow 2 utilizes Path 2. The results show that MPTCP-MVegas is less aggressive on both subflows as compared with MPTCP-MFast. This behavior of MVegas is very beneficial for avoiding packet drops on the more congested path, whereas MPTCP-MFast behaves aggressively on the more congested path, which results in more packet drops.
Comparing the congestion windows of MFast and MVegas on the more congested path (Path 2), we observe that when the background CBR starts at t = 5 s, both windows are already in the congestion avoidance phase, with the window size of MFast more than double that of MVegas. On detecting a packet drop, MFast reduces its window to almost zero by t = 7 s, whereas MVegas reduces its congestion window at a very slow rate by using cwnd = cwnd − 1. The multiplicative increase in the cwnd of MFast gives rise to more packet drops. After t = 15 s, both go into slow start again.
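The two loss reactions contrasted above can be sketched as follows. This is a simplification: the halving in MFast is an assumed stand-in for its multiplicative decrease, and both real algorithms also adapt their windows from RTT estimates between loss events.

```python
def mvegas_on_loss(cwnd):
    """Modified Vegas backs off linearly on loss: cwnd = cwnd - 1."""
    return max(cwnd - 1, 1)

def mfast_on_loss(cwnd):
    """Fast-style multiplicative decrease on loss (halving assumed
    here as an illustration of the much sharper reaction)."""
    return max(cwnd // 2, 1)
```

Repeated losses therefore drive the MFast window toward zero quickly, matching the behavior seen around t = 7 s, while the MVegas window shrinks by only one packet per loss.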

Setup Two
In the second setup, for further analysis we implemented our multipath hybrid congestion control algorithm MFVL as follows: (a) Using modified TCP Vegas congestion control (MVegas) on subflow with less congested path; (b) Using modified Fast TCP congestion control (MFast) on subflow with more congested path.
Our algorithm achieved higher average goodput than MPTCP-CC-LIA, as shown in Figure 10; however, the packet drop rate, shown in Figure 11, also increased due to the aggressive behavior of MFast on the subflow with the more congested path (Path 2). The congestion windows are plotted in Figure 12.

Setup Three
After observing the results of the first two experiments, we finalized the design of our proposed algorithm by deciding to use MVegas on the subflow with the more congested path and MFast on the subflow with the less congested path. Path congestion was detected by recording packet retransmissions for each subflow. We also implemented a shared bottleneck detection method in our algorithm; in real scenarios, the probability of a congested path shared by both subflows is very low, and if a shared bottleneck is detected, our algorithm switches to the standard coupled congestion control algorithm LIA. The simulations were run using a CBR data rate of 1 Mbps. The results, shown in Figures 13 and 14, demonstrate that MPTCP with our proposed model MFVL HCCA (MPTCP-MFVL) achieves higher throughput and a lower packet drop rate as compared with MPTCP with coupled congestion control LIA (MPTCP-CC-LIA). To analyze the performance of our proposed model, we plotted the results for the simulation interval t = 0 to t = T s (here, T = 50), when the algorithm had already selected MVegas for the subflow with the more congested path (Path 2) and MFast for the subflow with the less congested path (Path 1) after detecting path congestion from the per-subflow packet retransmissions. During this interval, the packet retransmissions are counted again for each subflow to make decisions for the next cycle.
Figure 15 shows the congestion window graphs of MPTCP-CC-LIA. The behavior is aggressive on both subflows in the slow start phase. After detecting congestion, MPTCP shifts more data to the subflow with the less congested path. The behavior of the congestion window on the less congested path (Path 1) is very aggressive, thus increasing the packet drop rate in addition to the throughput.
On the more congested path (Path 2), the congestion window is halved on packet loss detection and almost goes to zero on the next packet loss event, thus avoiding packet drops but also decreasing throughput.
The aim of the coupled congestion control algorithm LIA of MPTCP is to ensure fairness in the case of a common bottleneck path shared by multiple subflows of the same multipath flow. By using the total congestion window of multiple subflows as a factor in Equations (15) and (16) of Section 3.3 to calculate the increase in the congestion window of a single subflow, MPTCP successfully achieves Goal 2 [29], i.e., a multipath TCP flow should not take up more capacity than a single TCP flow in the case of a shared bottleneck link, and Goal 3, i.e., a multipath flow should shift its traffic to the less congested link as much as possible. However, achieving these two goals also decreases the overall throughput due to underutilization of the available bandwidth. The probability of a common bottleneck link shared by multiple subflows is very low. In the case of a shared bottleneck link, the behavior of coupled congestion control is justified, but in the absence of a shared bottleneck it only results in underutilization of bandwidth.
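Since Equations (15) and (16) are not reproduced in this excerpt, the sketch below uses the standard LIA per-ACK increase rule from RFC 6356, which couples the growth of each subflow through the total congestion window of all subflows, as discussed above.

```python
def lia_increase(cwnd_i, cwnd_total, alpha, mss=1000, bytes_acked=1000):
    """Per-ACK congestion-window increase (in bytes) of coupled LIA:
    min(alpha * acked * mss / cwnd_total, acked * mss / cwnd_i).
    The second term caps the increase at that of a regular TCP flow,
    so a multipath flow never outgrows a single TCP flow (Goal 2)."""
    coupled = alpha * bytes_acked * mss / cwnd_total
    uncoupled = bytes_acked * mss / cwnd_i
    return min(coupled, uncoupled)
```

Because `cwnd_total` appears in the denominator of the coupled term, a large aggregate window slows every subflow's growth, which is exactly the bandwidth underutilization noted above when no bottleneck is actually shared.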
The congestion window of MPTCP-CC-LIA increases up to the maximum size of 100 packets on the less congested path, which shows that its behavior is more aggressive as compared with the congestion window of our proposed MFVL scheme on the same path, which reaches 80 only at the start and for the remaining period has a maximum value of 75. We achieved this by using our modified version of Fast TCP on the less congested path. The original version of Fast TCP is much more aggressive and unfair towards other TCP flows; our modified version, as the congestion window graph shows, is less aggressive and therefore not harmful to other TCP flows. The congestion window plots of our proposed model, MPTCP-MFVL, are shown in Figure 16. MVegas is used to control the congestion window of the subflow on the more congested path (Path 2) and MFast is used to control the congestion window of the subflow on the less congested path (Path 1), thus increasing the aggregate goodput of both subflows and decreasing the aggregate packet loss rate as compared with MPTCP-CC-LIA. We used the modified version of TCP Vegas on the more congested path; changes were made to the original TCP Vegas code to make it slightly more aggressive. The congestion window graph in Figure 16 shows that our proposed MFVL algorithm is able to use the congested path bandwidth very efficiently by not reducing the congestion window close to zero, unlike MPTCP-CC-LIA, which almost reduces its window to zero on the congested link, as shown in Figure 15. Therefore, our algorithm successfully uses the less congested path for more traffic and the more congested path for less traffic, achieving throughput and avoiding packet drops at the same time.
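The Vegas behavior modified above can be situated against the standard TCP Vegas congestion-avoidance rule, sketched below. The paper's modified Vegas introduces increase factors for alpha and beta; those factors are not reproduced here, so this shows only the unmodified baseline rule that was adjusted.

```python
def vegas_update(cwnd, base_rtt, rtt, alpha=1.0, beta=3.0):
    """Standard TCP Vegas congestion-avoidance update.
    diff estimates the number of packets queued in the network:
    grow when the path is under-utilized, back off when a queue builds."""
    diff = cwnd * (rtt - base_rtt) / rtt
    if diff < alpha:
        return cwnd + 1  # under-utilizing: grow linearly
    if diff > beta:
        return cwnd - 1  # queue building up: back off linearly
    return cwnd          # between alpha and beta: hold steady
```

Raising alpha and beta makes the sender tolerate a larger estimated queue before backing off, which is one plausible way such increase factors yield the "slightly more aggressive" behavior described above.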

Setup Four
In our final experiment, the CBR data rate was varied from 0.1 Mbps to 2 Mbps and the results were plotted. The average goodput and average packet drop rate with CBR at 2 Mbps are shown in Figures 17 and 18, respectively. For different values of CBR ranging from 0.1 Mbps to 2 Mbps, the average goodput and packet drop rate are plotted in Figures 19 and 20, respectively. The results show that our proposed model MPTCP-MFVL can achieve up to 30% higher average goodput and up to 50% lower average packet loss than standard multipath TCP with coupled congestion control (MPTCP-CC-LIA). Hence, in uncoupled mode, our algorithm successfully achieves the goals of increased goodput, decreased packet drop, and balanced congestion, and the less aggressive increase of its congestion windows shows that it is harmless to other TCP flows. To achieve the goal of fairness in the case of a shared bottleneck, we propose to use MPTCP-CC-LIA in coupled mode for our algorithm only after detecting a shared bottleneck link. The simulation results are not shown for coupled mode as we used only the existing algorithms without any modifications.

Conclusions
Modern IoT devices are equipped with high-speed processors and large on-chip memories. The technological advancements in these IoT devices and networks require the design of new intelligent algorithms and reliable protocols. With the development of new low-power standards such as 802.11ah (Wi-Fi HaLow) for the Internet of Things, high data rates over long distances are now possible.
In this study, we have presented the case of an IoT-based smart gas grid with multihomed smart gas meters. The information sent by the smart meters is used for load forecasting; therefore, reliable transmission of data with an efficient congestion control scheme is required. To reduce packet retransmissions and increase goodput, we proposed a novel congestion control algorithm for MPTCP. Our algorithm operates in uncoupled and coupled modes. In uncoupled mode, modified versions of the delay-based Fast TCP and TCP Vegas congestion control algorithms are used. We changed the Fast TCP algorithm so that the congestion window is updated less aggressively and packet drops are decreased. The modifications to TCP Vegas, introducing increase factors for alpha and beta, result in better goodput and fewer packet drops over the congested path. Our algorithm detects path congestion by recording packet retransmissions for each subflow and selects the modified Fast (MFast) congestion control algorithm for the less congested path and the modified Vegas (MVegas) algorithm for the more congested path. In simulation experiments, we have shown that MFast is better suited to the less congested path and MVegas to the more congested path. Our algorithm uses a shared bottleneck detection method and, on detecting a shared bottleneck, switches to coupled congestion mode, in which we use the same LIA algorithm as MPTCP. As our main contribution lies in the design of the uncoupled mode of our algorithm, we compared the simulation results of our algorithm in uncoupled mode with the default MPTCP coupled congestion control algorithm LIA. The results prove that our algorithm achieves better goodput and reduces packet loss.
Packet retransmissions consume extra power; with reduced packet retransmissions, our algorithm is highly beneficial for battery-powered IoT devices. For implementation on embedded platforms, we plan to analyze our design further for improvements in reducing the extra overhead associated with TCP, efficient memory utilization, and power saving by using sleep modes.