Data Freshness and End-to-End Delay in Cross-Layer Two-Tier Linear IoT Networks

The operational and technological structures of radio access networks have undergone tremendous changes in recent years. In particular, priority has shifted from capacity-coverage optimization toward ensuring data freshness. Multiple radio access technology (multi-RAT) is a solution that addresses the exponential growth of traffic demands, providing degrees of freedom for meeting various performance goals, including energy efficiency in IoT networks. The purpose of the present study was to investigate the possibility of leveraging multi-RAT to reduce each user's transmission delay while preserving the requisite quality of service (QoS) and maintaining the freshness of the received information, quantified via the age of information (AoI) metric. First, we investigated the coordination between a multi-hop network and a cellular network, where each IoT device serves both as an information source that generates packets (transmitting them toward the base station) and as a relay (for packets generated upstream). We created a queuing system spanning the network and MAC layers, and we propose a framework comprising various models and tools for forecasting network performance in terms of the end-to-end delay of ongoing flows and the AoI. Finally, to highlight the benefits of our framework, we performed comprehensive simulations. The discussion of these numerical results yields insights into various aspects and metrics (parameter tuning, expected QoS, and performance).


Introduction
One significant recent advancement of the information age is the Internet of Things (IoT), whose conveniences have driven the widespread growth of mobile network services and promoted more comfortable and connected lifestyles and facilities. However, this rapid development has resulted in a sharp rise in energy consumption, leading to greater greenhouse gas emissions and higher financial expenses for network operators. Energy costs associated with the operation of a cellular network now account for a sizable share of the global human energy footprint. As a result, network operators are searching for innovative ways to reduce and manage their energy footprints [1]. Overall, for a sensor network without energy recovery capabilities, energy-efficient communication technology is required for data transmission. A sensor becomes unusable as soon as its battery is discharged (if no alternate power source is available). Therefore, it is crucial to understand and characterize the performance of sensor networks, especially in terms of delay and energy consumption. Ideally, a sensor network should have the longest possible operating life before requiring maintenance (such as a battery change). Consequently, it is necessary to operate such networks at the lowest possible energy consumption; this has been an ongoing area of research [2].

Related Work
To fully leverage multiple networks, the multi-RAT scheme has been introduced, in which multiple technologies are deployed to help users deliver services appropriately. This is a promising approach that has recently received significant attention from researchers. Many publications have been devoted to the coexistence of converged and coordinated multiple RATs, aiming to reduce overall network deployment complexity and costs while easing network operation and maintenance requirements. Future networks are expected to support more intelligent management, integrate a range of wireless access technologies, and provide some degree of self-configuration, self-optimization, and self-healing [8].
Previous research in this field has mostly focused on maximizing network capacity while adhering to QoS constraints. Other research has concentrated on the resource allocation issue for parallel transmission employing several RATs [9,10]. However, the influence of delay on system performance was not considered in these contributions. A fundamental issue that must be addressed is energy consumption in wireless communications. As a result, a growing range of studies has emphasized the design of energy-efficient wireless communication systems. Based on a realistic battery model, the authors of [11] present effective relay selection and energy allocation algorithms. In [12], the authors address the problem of optimal relay and RAT selection to maximize energy efficiency. Meanwhile, an energy-efficient joint radio resource management scheme for heterogeneous multi-RAT networks is provided in [13].
Many researchers have long been interested in the capacity analysis of wireless communication networks. As far as we are aware, no existing study on multi-RAT networks has examined QoS requirements in terms of throughput, reliability, end-to-end delay, and information age while combining cellular/ad hoc network metrics and OSI model layers. Table 1 shows a comparison of the related literature and our work. Furthermore, the effect of heterogeneous networks on the age of information has not yet been thoroughly or appropriately investigated. In contrast, extensive work has been conducted on ad hoc network performance metrics that takes OSI model parameters into account. In this context, the authors of [14] analyzed the end-to-end throughput behavior and the stability of transmission queues in multi-hop wireless networks. Routing, random access in the MAC layer, and topology are all taken into account in their proposed model. They demonstrated that, when the queues are stable, the end-to-end throughput of a given route is not impacted by the load of intermediate nodes. In [15], the authors started from the model used in [14] and studied the interaction between the PHY, MAC, and network layers. Subsequently, in [16], the authors investigated the end-to-end performance of a multi-hop wireless network for a real-time application, based on a cross-layer scheme including the PHY, MAC, and network layers.
As mentioned in the introduction, the AoI metric has emerged as a means to assess the quality of status updates across a wireless network. According to the authors in [17], the AoI grows until a more current status update arrives at the receiver, where successful reception entails an abrupt reduction. Such a tool is applicable in applications where the maintenance of current information is crucial. Naturally, the time taken to propagate through the network contributes to the staleness of the received updates. As a result, adequate AoI performance is achieved when status updates are provided not only on a regular but also a timely basis. The authors of [18] discuss the age minimization issue in a multi-hop network under a general interference constraint. Among the most relevant works, the authors of [19] demonstrate that the AoI can be decreased by ensuring that newer information constantly replaces older information in the transmission queue. In [20], this concept is expanded to a multi-hop scenario. The authors of [21] outline generic AoI analysis methods, then apply these approaches to a variety of increasingly complex systems, such as energy harvesting sensors broadcasting over noisy channels, parallel server systems, and queuing networks.
Many studies have been conducted on systems with time-constrained packets. The majority of them deal with the challenge of scheduling packets in order to reduce the number of packets that expire before successful transmission. When employed in the context of a wireless sensor network (WSN), deadlines have been adopted to reduce delay and energy consumption [22]. However, few publications have investigated the deployment of deadlines from the standpoint of AoI control (e.g., [23,24]).
In conclusion, we note that the current literature on AoI has focused on many distinct types of queues, each with a particular arrival and departure procedure, queue capacity, and the number of servers.

Our Main Contributions
The core contribution of this paper is a theoretical framework for the performance evaluation of a dual-RAT, or two-tier, network. Tier 1 consists of an ad hoc multi-hop network relying on a short-range, low-power, and low-cost RAT (possibly in an unlicensed band) such as Zigbee. Tier 2 consists of a centralized single-hop network with a star topology and relies on a longer range RAT, such as cellular 5G, LTE-M, LoRa, etc. Although other RAT options are possible as noted, the tier 2 connections will be henceforth referred to as "cellular". All nodes are members of both networks and are equipped with both RATs. The physical topology of the network is assumed to be quasi-linear (as this corresponds to many applications of interest), with the base station or data sink at one end of the chain. More specifically, a probabilistic model is developed that jointly addresses the ad hoc/cellular channel properties and the cross-layer modeling. Our contributions can be summarized as follows:
• We build a complete framework to analyze the integration of a multi-hop wireless ad hoc network with a multi-RAT platform, optimizing the energy consumption of the entire proposed system through delay minimization while ensuring data freshness via the AoI metric.
• As illustrated in Figure 1, our model can be used in different environments, such as tunnels, roads, and bridges.
• A cross-layer model replaces the non-communicating layers of the OSI standard with a synergy between the network and MAC layers, enabling the protocol stack to share specific information.
• We use a G/G/1 and an M/G/1 queuing model to estimate the waiting time at intermediate nodes.
• We determine the optimal average end-to-end delay and age of information. These two key QoS metrics provide interesting insights on how to tune the internal parameters to achieve optimal performance.

Paper Organization
The rest of this paper is organized as follows. Problem formulation is discussed in Section 2, average delay analysis is defined in Section 3, the steady state and expressions for performance metrics are derived in Section 4, while the performance evaluation is addressed in Section 5. Finally, the concluding remarks and future works are presented in Section 6.

Problem Formulation
In this section, we investigate the system model, including the network topology, channel model, NET/MAC cross-layer models, energy limitations, and the proposed two-tier network incorporating both multi-hop and multi-RAT aspects.

The Setting
We consider a two-tier IoT network, including a base station and a set of N = {1, 2, 3, . . . , n} IoT devices (such as sensors measuring temperature, pressure, vehicular speed, etc.), linearly distributed over the area, as shown in Figure 2. If the fraction of cellular traffic generated by node i is denoted ω i , then 1 − ω i is the corresponding fraction of ad hoc traffic. At any time and for any given packet, an IoT device must choose between (i) transmitting the packet to one of its neighbors, as a stepping stone towards the final destination, and (ii) sending the packet directly to the base station (cellular network). For instance, a mobile device located far from the base station and attempting to optimize its power consumption may choose to route packets through a multi-hop sequence rather than transmitting them directly to the base station. However, a device located close to the base station may receive a high number of packets to relay to the base station and thus experience faster battery depletion. The selection strategy in a multi-RAT context can be tuned to reduce this effect, in order to equalize energy depletion across all nodes. The goal consists in optimizing and balancing energy consumption in the network while ensuring that deadlines are met and that data freshness is maintained. This is achieved through the study of two key metrics, namely end-to-end delay and AoI. The main notations and symbols used in this article are listed in Table 2. For clarification purposes, the model assumptions are summarized below:
1. All nodes are expected to be informed of the success or failure of their transmitted packets. In order to permanently maintain a satisfactory level of reliability, we assume that a packet is re-transmitted (if required) until success or definitive drop;
2. Each node has two types of packets to transmit: (1) packets generated by the device itself (queue Q i ), and (2) packets received from neighboring devices that must be forwarded until reaching the final destination (queue F i );
3. A mobile is capable of transmitting on one interface and receiving on the other. However, it cannot send an ad hoc and a cellular message on both network cards at once (no simultaneous transmissions; allowing parallel selections would enable multi-homing, i.e., two simultaneous transmissions);
4. The system is assumed not to be saturated, which entails that the F i and/or Q i queue might be empty at any node i.

The Channel Model
In this paper, each IoT device serves both as a relay for forwarding data generated upstream to the next node in a multi-hop chain, and as a cellular transmitter capable of reaching the base station directly. In this context, two distinct channels must be considered, i.e., (1) the ad hoc channel, and (2) the cellular channel.

Ad Hoc Channel
The slotted-Aloha MAC scheme is assumed for all nodes in the ad hoc network, which are also assumed to be perfectly synchronized to the time slots. Nodes send packets using the following rule: in each time slot, each node independently tosses a coin with a certain bias p, known as the Aloha medium access probability (MAP). If the result is "heads", it transmits the packet in that time slot; otherwise, it remains silent [29].
We denote by N a i the average number of transmission attempts, which can be defined as: where K a i is the maximum number of transmissions permitted by mobile i per packet. A transmission from i is successful if neither i + 1 nor any of its neighbors N (i + 1), except i, transmits in the same time slot. The success probability ς i for a packet at node i in the ad hoc network is given by: where q i indicates the attempt probability for a packet at node i.
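As a numerical companion to this access rule, the sketch below compares the analytic slot-success probability (the product of the non-transmission probabilities of the interfering neighbors, per the model above) against a Monte-Carlo estimate. The node indices and attempt probabilities are hypothetical values, not taken from the paper.

```python
import random

def success_prob_analytic(q, interferers):
    """Probability that no interfering neighbor transmits in the same
    slot (slotted Aloha): the product of (1 - q_j) over interferers."""
    p = 1.0
    for j in interferers:
        p *= 1.0 - q[j]
    return p

def success_prob_simulated(q, interferers, n_slots=200_000, seed=0):
    """Monte-Carlo estimate of the same quantity: count the slots in
    which none of the interferers tossed 'heads'."""
    rng = random.Random(seed)
    clear = 0
    for _ in range(n_slots):
        if not any(rng.random() < q[j] for j in interferers):
            clear += 1
    return clear / n_slots

# hypothetical attempt probabilities for the receiver's other neighbors
q = {2: 0.2, 3: 0.3}
interferers = [2, 3]
print(success_prob_analytic(q, interferers))   # 0.8 * 0.7 = 0.56
print(success_prob_simulated(q, interferers))  # close to 0.56
```

The analytic product and the simulation agree, which is a quick way to validate the independence assumption behind the MAP model.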

Cellular Channel
A Rayleigh channel model is assumed for the cellular channel. The most essential and widely used measure of channel quality in a cellular network is the signal-to-interference-plus-noise ratio (SINR).
The SINR of IoT device i, deployed in a fixed location, can be written as: where P i is the transmit power of IoT device i, h i is its channel gain, which is assumed to follow a Rayleigh distribution, and σ 2 is the variance of the additive Gaussian noise.
Here we consider the efficiency function φ(γ, L), commonly known as the packet success rate (PSR), for a user that must send packets of L bits each to a base station [30]: where L is the length of a given packet and ξ(γ) is the bit error rate (BER) from a user to its serving station, which depends on the SINR. In fact, the expression of the BER varies according to the coding and modulation scheme adopted by a user; our present study is valid for all coding and modulation schemes. We denote by N c the average number of transmission attempts in the cellular network, which can be expressed as follows: where K c i indicates the maximum number of transmissions permitted by mobile i per packet in the cellular network.
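The quantities above can be illustrated with a small sketch. Two assumptions stand in for the paper's omitted equations: the PSR is taken as φ = (1 − BER)^L (the common independent-bit-error form), and the average attempt count is modeled as the mean of a geometric distribution truncated at K attempts. Both are stated here as assumptions, not as the paper's exact expressions; the numeric values are hypothetical.

```python
def psr(ber, L):
    """Packet success rate for an L-bit packet under independent bit
    errors (an assumption): phi = (1 - BER)^L."""
    return (1.0 - ber) ** L

def avg_attempts(success_prob, k_max):
    """Mean number of transmissions when a packet is retried until
    success or until k_max attempts (truncated geometric):
        E[N] = sum_{n=1}^{K} (1 - p)^(n-1) = (1 - (1 - p)^K) / p."""
    if success_prob == 0.0:
        return float(k_max)
    return (1.0 - (1.0 - success_prob) ** k_max) / success_prob

# hypothetical values: BER = 1e-3, 1000-bit packets, up to 5 attempts
p = psr(1e-3, 1000)          # about 0.37
print(p, avg_attempts(p, 5)) # roughly 2.4 attempts on average
```

A lower BER drives the PSR toward 1 and the attempt count toward a single transmission, which is the qualitative behavior the delay curves in the evaluation section exhibit.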

Cross-Layer Architecture
Here, a cross-layer architecture is proposed, which takes into account both the network and MAC layer parameters (see Figure 3). Thus, communication and information sharing between separate layers become more efficient and flexible, and offer the possibility of global optimization.
The network layer comes first in our cross-layer architecture. It is responsible for defining the source and destination of packets and routing them through the sensor network. It manages two queues: (1) the forwarding queue F i , and (2) the own queue Q i . Queues in the system are assumed to operate with infinite storage capacity, thus avoiding packet loss through overflow. A first-in-first-out (FIFO) scheduling discipline is considered. In addition, weighted fair queuing (WFQ) is used in the network layer for managing the data transmitted over each cycle. This scheme offers some flexibility and allows QoS support and packet prioritization.

Own Queue Q i
This queue handles packets generated by node i itself (sensed data, in the case of a sensor network), which are transmitted to their final destination (the base station) either through the network of neighboring sensors or directly over the cellular network. It is modeled as an M/G/1 queue, and node i opts to transmit from Q i with probability 1 − f i .

Forwarding Queue F i
This queue contains packets from other nodes to be forwarded to the base station through one or several hops. It is modeled as a G/G/1 queue, where node i decides to forward from F i with probability f i . Thanks to this configuration, the nodes benefit from a certain flexibility allowing them to manage the packets transmitted by each node differently from their own packets.
The MAC layer establishes the communication media sharing rules for the different IoT devices in the network. Here, we consider a slotted-Aloha MAC protocol. Prior to any transmission attempt, a queue, either F i or Q i , is selected. At the beginning of each time slot, a node attempts to gain channel access with random probability q i . Then, the head packet from the selected queue is moved from the network layer to the MAC layer where it is transmitted and retransmitted if required, until successful delivery or final drop.
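The per-cycle interplay between queue selection and slotted-Aloha access described above can be sketched as follows. `mac_cycle` is a hypothetical helper (not from the paper), and collapsing the neighbors' behavior into a flat per-attempt success probability `sigma_i` is a simplifying assumption.

```python
import random

def mac_cycle(f_i, q_i, sigma_i, k_max, rng):
    """One transmission cycle at node i (sketch): pick the forwarding
    queue F with probability f_i (own queue Q otherwise); then, in each
    slot, attempt channel access with probability q_i (q_i > 0), and
    let each attempt succeed with probability sigma_i. Returns
    (queue, outcome, slots), where outcome is 'success' or 'drop'
    after k_max failed attempts."""
    queue = "F" if rng.random() < f_i else "Q"
    attempts, slots = 0, 0
    while attempts < k_max:
        slots += 1
        if rng.random() < q_i:           # node decides to transmit
            attempts += 1
            if rng.random() < sigma_i:   # no colliding neighbor
                return queue, "success", slots
    return queue, "drop", slots

rng = random.Random(42)
print(mac_cycle(0.6, 0.3, 0.8, 5, rng))
```

Running many such cycles recovers the expected behavior: the F queue is selected a fraction f_i of the time, and the drop probability is (1 − sigma_i)^k_max.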

Multi-RAT Support
As mentioned above [4], in the presence of multiple radio interfaces, IoT devices are assumed to be equipped with more than one radio interface and to select the "best" one based on multiple parameters, such as user requirements, network capabilities, and application properties. In general, every interface has a specific range and cost (energy, economic issues, etc.). A major challenge in multi-RAT networks consists in dynamically selecting the most appropriate RAT in order to address performance goals, such as energy efficiency. Accordingly, an efficient model must be integrated at this decision stage to avoid unnecessary transfers between RATs [31].

Energy Limitation
A major concern in sensor network applications is the capability of operating at ultrahigh energy efficiency. Nodes will shut down once their battery is discharged since there is no possibility of recharging them. Indeed, it is assumed that the nodes have no alternate power source such as harvesting, power line, etc. The deployed network must ensure that connectivity is maintained as long as possible, which raises the issue of balancing energy consumption across all nodes. If all nodes in the network consume energy at approximately the same rate, the more central nodes will remain operational and provide forwarding connectivity for a longer time. This leads to more progressive and graceful degradation of the network operation.

Routing within a Two-Tier Network
Our proposed architecture includes two tiers: (1) the first tier is the proposed multi-hop sensor network, while (2) the second tier consists of a cellular network.

Tier I: Multi-Hop Network
Sensors act as relay nodes, which receive/forward messages from/to their neighbors. We assume static routing, where IoT device i forwards its packets to mobile device i + 1 along a routing chain until the node responsible for relaying to the base station is reached. It is noteworthy that such a multi-hop scheme offers many well-known benefits: QoS support, generally lower transmission cost, better energy efficiency, longer device lifetime, improved spectrum efficiency/utilization, and higher self-organization capability.

Tier II: Cellular network
Once the packet reaches the sensor node responsible for sending the data to the base station, it is transmitted over the cellular network. We use a multi-RAT network, in which different wireless technologies are combined via separate reliable links (e.g., 5G, LoRa, Sigfox, etc.) for data transfer to the base station.
Finally, these two architectures are unified into a two-tier system to provide an efficient data transfer infrastructure in terms of delay incurred and throughput provided. Figure 3 depicts an organizational chart that is used to fully understand the connection between the NET and MAC layers for the two-tier IoT network. It is worth mentioning that a transmission cycle comprises a number of time slots that either result in a successful transmission or a failure/drop.

Average Delay Analysis
Now, we focus our study on the delay, which is a performance metric corresponding to the time needed for a packet to move from source node s to the base station, by going either through the multi-hop route or directly through the cellular uplink. We first derive an expression for the entire network, then compute the delay for the cellular and multi-hop sub-systems. Finally, we estimate the arrival and departure rates of our queuing model.
Let D i,j be the cumulative delay that a packet experiences from the moment it is queued at node i to the moment it is transmitted over the cellular network by node j, given by: For simplification purposes, this paper only takes into account the waiting time and the transmission time, given that both the processing time D Proc k and the propagation time D Prop k are negligible. Each packet in the F i or Q i queue has to wait for a certain average time, called the waiting time (W F i for queue F i and W Q i for Q i ). It is then forwarded hop by hop, with an ad hoc network service time t a i , until node j is reached, and finally transferred to the final destination via the cellular uplink, with a service time t c i . Figure 4 illustrates the expected end-to-end delay in the entire network. D i,j can be written as follows:
Figure 4. Expected end-to-end delay over the two-tier IoT network.
The delay D i experienced at each mobile device i is obtained as: where ϕ(i, j) is the probability of sending packets over the multi-hop network from node i to node j, where the latter will forward the packet to the base station via the cellular uplink, given by: where ω j denotes the fraction of cellular traffic sent by node j.
The average delay generated is obtained as follows: where φ i is the fraction of the total load contributed by node i, expressed as follows: Next, we will determine t c i , t a i , W F i and W Q i .
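One plausible reading of ϕ(i, j), the paper's exact expression being in the omitted equation, is that each intermediate node k keeps a packet on the ad hoc chain with probability 1 − ω k , so the exit probabilities form a product. Under that assumption (labeled hypothetical here), the probabilities from any node sum to one over all possible exit nodes, as the check below confirms:

```python
def phi(i, j, omega):
    """Probability that a packet queued at node i exits the ad hoc
    chain at node j (node j uplinks it over cellular). Hypothetical
    reconstruction: each node k keeps the packet on the chain with
    probability 1 - omega[k], giving
        phi(i, j) = omega[j] * prod_{k=i}^{j-1} (1 - omega[k])."""
    p = omega[j]
    for k in range(i, j):
        p *= 1.0 - omega[k]
    return p

omega = [0.5, 0.5, 0.5, 1.0]   # Setting 1 of the evaluation section
# From any node, the exit probabilities must sum to 1:
total = sum(phi(0, j, omega) for j in range(len(omega)))
print(total)   # 0.5 + 0.25 + 0.125 + 0.125 = 1.0
```

Given such a ϕ(i, j), the per-node delay D i is the ϕ-weighted average of the path delays D i,j, matching the aggregation described above.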

Delay over Cellular Sub-System
Our heterogeneous environment makes multi-RAT systems suitable, where each RAT operates independently of the others. Among the possible radio technologies for the tier 2 subsystem, some (e.g., 5G, 6G, NB-IoT) are characterized by a deterministic multiple access channel, while others (e.g., LoRa, Sigfox) rely on contention to gain access to a shared medium.
The variable t c i corresponds to the average packet transmission time of user i for tier 2, given by: where incoming packets are transmitted by user i at a rate R i (in bps). We use π F i (resp. π Q i ) to indicate the probability that queue F i (resp. Q i ) has a packet ready to be transmitted. Moreover, let π F i,s be the probability that queue F i has a packet from source s ready to be forwarded. Thus, we have:

Delay over Multi-Hop Sub-System
We use t a i to represent the average packet transmission time for the ad hoc network (tier 1) at node i, given by:

Waiting Time
The waiting time in queue F i (resp. Q i ) is composed of two elements: (1) the queuing time B F i (resp. B Q i ); and (2) the mean residual service time R F i (resp. R Q i ). The latter is divided into two terms: (1) the mean residual service time of a tier 2 packet in service, R c i ; and (2) the mean residual service time of an ad hoc (tier 1) packet in service, R a i . The average waiting time at node i in queue F i (resp. Q i ) is defined as:
Mean residual service time at node i:
Any arriving packet must wait until the packet currently in service is delivered. The latter can be a packet from the F i queue or from the Q i queue, destined either directly to the base station (tier 2 network) or to the next neighbor j (ad hoc/tier 1 network). The average residual service time observed by a given packet in F i or Q i is denoted: Leveraging renewal theory and the method presented in [16], it can be shown that the mean residual service time in F i for the cellular network, R c,F i , and the mean residual service time in F i for the ad hoc network, R a,F i (resp. R c,Q i and R a,Q i ), can be expressed as follows: Queue F: Queue Q: where t a(2) i and t c(2) i designate the second moments of the service time for the ad hoc and cellular networks, respectively, given by [4], and depend on whether channel access is deterministic or obtained by random contention (LoRa, Sigfox, etc.).

Queuing time at node i:
Once a packet enters the forwarding queue (resp. its own queue), it must wait until the packets ahead of it are served before being processed. Once at the head of the forwarding queue (resp. its own queue), it must also wait for the packets from the own queue m Q i (resp. the forwarding queue m F i ) that will be served before it. The queuing time in the forwarding queue, B F i (resp. the own queue, B Q i ), can therefore be written as: where m F i is the number of previously entered packets waiting in the forwarding queue. A packet at the head of the forwarding queue (resp. its own queue) ready for transmission must wait a certain number of cycles X (a random variable) before it can move to the MAC layer. X corresponds to the number of cycles required to serve packets from Q i (resp. from F i ), and the probability of waiting k cycles is P[X = k]. The waiting time at node i in queue F i (resp. Q i ) is obtained by using Equations (15) (resp. (16)) and (25) (resp. (26)), as specified below:
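The waiting-time derivation above generalizes classical single-queue results to the two-queue, two-RAT setting. As a sanity baseline (a reference point, not the paper's expression), the Pollaczek-Khinchine formula gives the M/G/1 mean waiting time from the first and second moments of the service time:

```python
def mg1_wait(lam, es, es2):
    """M/G/1 mean waiting time (Pollaczek-Khinchine):
        W = lam * E[S^2] / (2 * (1 - rho)),  rho = lam * E[S].
    lam: Poisson arrival rate; es, es2: first and second moments
    of the service time."""
    rho = lam * es
    if rho >= 1.0:
        raise ValueError("queue is unstable (rho >= 1)")
    return lam * es2 / (2.0 * (1.0 - rho))

# Sanity check against M/M/1: exponential service with mean 1
# (so E[S^2] = 2) and arrival rate 0.5 give W = 0.5*2 / (2*0.5) = 1.0
print(mg1_wait(0.5, 1.0, 2.0))
```

The appearance of the second moments t a(2) i and t c(2) i in the residual-time expressions above is exactly the E[S^2] dependence visible in this formula.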

Outer Flow
This performance metric measures the rate (per time slot) at which packets are withdrawn from the queues, after either a successful transmission or a drop. We identify two independent departure flows: (1) the first comprises packets removed from queue Q i , and (2) the second comprises packets removed from queue F i , given by: The departure rate d F i experienced at each mobile device i is expressed as

Inner Flow
The inner flow is defined as the rate per time slot at which packets arrive at the queues. We identify two independent arrival flows. (1) The first flow is composed of packets generated by IoT device i. Let us assume that these packets arrive according to a Poisson process whose parameter λ Q i corresponds to the average packet arrival rate in its own queue Q i , where each packet is composed of L bits. The resulting source rate (in bits/second) is therefore Lλ Q i . (2) The second flow is composed of packets from neighboring nodes, transmitted via the multi-hop channel. Here, IoT device i acts as a cooperative relay, transmitting data packets toward the node j responsible for transferring data to the base station.
λ F i,s denotes the average packet arrival rate in the forwarding queue F i of node i from a source s, and is expressed as:
Proof. Here we sketch a simplified proof using event decomposition. Let us consider events A and B as follows: • Event A: Traffic generated by mobile device s has departed from queue Q s ; • Event B: All transmissions over successive hops from mobile device s to node i have been successfully achieved.
We can easily verify that: As a result, the arrival rate can be expressed as: which completes the proof. The total arrival rate at node i is then given by:
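A minimal sketch of the event-decomposition argument follows, assuming each intermediate node k passes a packet onward with probability (1 − ω k ) times a per-hop delivery probability. The function and its arguments are a hypothetical reconstruction; the paper's exact expression also involves the retry limit and the attempt probabilities.

```python
def forward_arrival_rate(lam_s, omega, sigma, s, i):
    """Hypothetical reconstruction of lam_F_{i,s}, the rate at which
    source s's traffic enters forwarding queue F_i. Event A: the
    packet left Q_s onto the ad hoc chain; event B: every hop from s
    up to node i succeeded. Assuming node k relays onward with
    probability (1 - omega[k]) and delivers the hop with probability
    sigma[k]:
        lam_F = lam_s * prod_{k=s}^{i-1} (1 - omega[k]) * sigma[k]"""
    rate = lam_s
    for k in range(s, i):
        rate *= (1.0 - omega[k]) * sigma[k]
    return rate

# hypothetical numbers: 2 packets/slot at source 0, reaching node 2
print(forward_arrival_rate(2.0, [0.5, 0.5, 0.0, 1.0],
                           [0.9, 0.9, 1.0, 1.0], 0, 2))
```

The multiplicative structure mirrors the independence of events A and B invoked in the proof: the arrival rate at F i is the source rate thinned once per hop.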

Steady State
We estimate in this section the performance metrics in terms of throughput, delay and AoI under steady-state conditions.
In the steady state, the long-term arrival rate is equal to the long-term departure rate; this corresponds to the rate balance equation (RBE). Thus, the F i (resp. Q i ) queue is stable if its departure rate is at least equal to its arrival rate.
It is written: Given the two flow metrics defined above, it is possible to determine the expression of the average load π F i,s (resp. π Q i ) at each mobile device i and for each queue. The RBE results in a linear system, where the queuing system load of F i , denoted π F = (π F 1,s , π F 2,s , · · · , π F i,s ), is given by: where G is an I × I matrix and A is a column vector with dimensionality I × 1.
We obtain: The queuing system load of Q i , denoted π Q = (π Q 1 , π Q 2 , · · · , π Q i ), is given by: where O is an I × I matrix and Y is a column vector with dimensionality I × 1.
We obtain:
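Both rate-balance systems have the fixed-point form π = Gπ + A (resp. π = Oπ + Y), so the loads are obtained by solving (I − G)π = A. A self-contained solver sketch (the matrices here are illustrative placeholders, not the paper's G and A):

```python
def solve_linear(G, A):
    """Solve (I - G) pi = A for the steady-state loads pi, i.e. the
    rate-balance fixed point pi = G pi + A, by Gauss-Jordan
    elimination with partial pivoting."""
    n = len(A)
    # build the augmented matrix [I - G | A]
    M = [[(1.0 if r == c else 0.0) - G[r][c] for c in range(n)] + [A[r]]
         for r in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[r][n] / M[r][r] for r in range(n)]

# toy 2x2 system: pi = G pi + A with G = [[0, 0.5], [0, 0]]
print(solve_linear([[0.0, 0.5], [0.0, 0.0]], [0.1, 0.2]))  # [0.2, 0.2]
```

A stable operating point requires the resulting loads to lie in [0, 1); otherwise, the corresponding queue violates the stability condition stated above.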

Age of Information
We are interested in applications where the objective is to continuously communicate the most recently updated state of a time-varying process to a given monitor. As an example, a device sends packets containing a certain state (e.g., sensor data, a list of neighboring nodes) to a network manager on a regular basis to keep the state tracked by the network manager relatively fresh at all times. IoT devices attempt to report their status to the receiver side as soon as possible. The recently proposed AoI metric measures the timeliness and freshness of status updates from various IoT devices at the destination node. It is assumed that time is divided into equal-length slots, and each status update packet is transmitted using exactly one time slot.
In Figure 5, we show the evolution of the AoI A i (t) over time, where A k indicates the kth peak age; the dropping points correspond to the instants when an update packet is received, resulting in a lower age value (i.e., the current time minus the generation time of the new update packet). We can observe from this figure that the AoI increases linearly over time and drops when a packet is successfully received. Given A i (t), the average age of device i can be defined as follows: Nevertheless, the AoI metric is difficult to analyze. Furthermore, in many systems, it is often the peak state-information delay that determines the performance loss. Accordingly, we focus instead on the average peak age of information (PAoI), representing the maximum age of the information before a new update is received. As defined in [32], and for a given queuing model, the general expression of the PAoI is given by: where V i denotes the inter-arrival time of packets for mobile device i, given by: For our model, the PAoI for a packet in the F i (resp. Q i ) queue from source node s to a given neighbor i is given by: The PAoI obtained at each mobile device i is expressed as the sum of two contributions (packets received from neighbors plus the node's own packets):
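The peak-age definition can be checked by simulation. The sketch below considers a single source feeding an FCFS queue with Poisson generation and deterministic service, an M/D/1 stand-in for one node rather than the paper's full two-queue model. It records each peak A k = d k − g k−1 (delivery time minus the generation time of the previously delivered update) and recovers the known identity E[PAoI] = E[V] + E[T].

```python
import random

def avg_peak_age(lam, service, n_packets=200_000, seed=1):
    """Average peak AoI for one source feeding an FCFS single-server
    queue: Poisson generation at rate lam, deterministic service time.
    The k-th peak age is the age just before update k arrives:
    A_k = d_k - g_{k-1}."""
    rng = random.Random(seed)
    g = 0.0      # generation time of the current update
    d = 0.0      # delivery time of the previous update
    total = 0.0
    for _ in range(n_packets):
        prev_g, g = g, g + rng.expovariate(lam)
        d = max(d, g) + service        # FCFS, single server
        total += d - prev_g            # peak age A_k
    return total / n_packets

# Theory (M/D/1, lam = 0.5, s = 1): E[PAoI] = E[V] + E[T]
# = 1/lam + (lam * s^2 / (2 * (1 - lam * s)) + s) = 2 + 1.5 = 3.5
print(avg_peak_age(0.5, 1.0))
```

The sawtooth bookkeeping here is exactly the one in Figure 5: the age grows linearly between receptions and is reset to the system time of the freshly delivered update.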

Performance Evaluation
This section examines the behavior of the end-to-end delay and the age of information when the fundamental parameters ( f i , q i , ω i , λ Q i ) change. For illustrative purposes, a network of four sensor nodes (n = 4) and one base station is considered.
The simulation was carried out under three different scenarios, using MathWorks Matlab R2022b:

1.
Setting 1: For this first instance, we assumed that the first three nodes (i = 1, 2, 3) have the same fraction of cellular traffic (ω i = 0.5), and the node closest to the base station (i = 4) needs to relay received packets to the base station, hence we would always retain ω 4 = 1 for all subsequent cases.

2.
Setting 2: In the second scenario, the node farthest from the base station (i = 1) sends all of its packets directly to the base station (ω 1 = 1), the second node (i = 2) sends 75% (ω 2 = 0.75) of its packets to the base station and the rest (25% of its packets) to the neighboring node, and third node (i = 3) sends 50% (ω 3 = 0.5) of its packets directly to the base station and the rest to the neighboring node. Finally, the last node (i = 4) always delivers data with (ω 4 = 1) to the base station.

3. Setting 3: In the final case, the node farthest from the base station (i = 1) sends 25% of its packets to the base station (ω 1 = 0.25); the second node (i = 2) sends 50% of its packets (ω 2 = 0.5) to the base station and the rest to its neighboring node; and the third node (i = 3) sends 75% of its packets (ω 3 = 0.75) to the base station and the rest to its neighbor. Finally, the last node (i = 4) always delivers its data directly to the base station (ω 4 = 1).
It is worth noting that f 1 = 0 as IoT sensor 1 has no predecessor sensor.
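The three settings can be summarized compactly as vectors of cellular fractions; a small Python sketch (the variable names are illustrative):

```python
# Cellular fractions ω_i per node (i = 1..4) in the three settings above;
# node 4 is adjacent to the base station, so ω_4 = 1 in every setting.
OMEGA = {
    "setting1": [0.50, 0.50, 0.50, 1.0],
    "setting2": [1.00, 0.75, 0.50, 1.0],
    "setting3": [0.25, 0.50, 0.75, 1.0],
}

# The fraction routed over the multi-hop ad hoc network is the complement 1 - ω_i.
adhoc = {s: [round(1 - w, 2) for w in ws] for s, ws in OMEGA.items()}
```

Setting 2 pushes traffic toward the cellular network at the upstream end of the chain, while setting 3 does the opposite, loading the ad hoc relays the most at node 1.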

Forwarding Probability f i
Figures 6-8 depict the delay experienced by each mobile device as the forwarding probability changes with the bit error rate in the first, second, and third scenarios, respectively. A bad channel ((a), ξ(γ i ) = 10 −1 ) implies a very high delay, a fair channel ((b), ξ(γ i ) = 10 −2 ) a lower delay, and a good channel ((c), ξ(γ i ) = 10 −6 ) a minimal delay. We also notice that the first node experiences no change in delay (a stable delay) because its forwarding probability is always zero, whereas for the other nodes, the closer a node is to the base station, the greater its delay, and an increase in forwarding probability directly influences the obtained latency. This is to be expected: since transmission to neighboring nodes (through the ad hoc network) is preferred, the nodes closest to the base station receive more data packets to transfer. In the third scenario, node 3 has the longest delay for a good channel, which can be explained by the fact that nodes 1 and 2 transfer more packets to it (since ω 1 = 0.25 and ω 2 = 0.5) and that node 3 uses the cellular network (ω 3 = 0.75) to transmit most of the packets it receives, which further increases the delay.

Figures 9-11 demonstrate the delay encountered by each mobile device as a function of the forwarding probability when the arrival rate in the queue varies, for the first, second, and third scenarios. For heavy traffic in scenario 2 (Figure 10a, λ Q i = 1280 bits/s), the delay rises slowly with increasing forwarding probability until a maximum is reached as the forwarding probability approaches 1. However, for moderate (Figure 10b, λ Q i = 128 bits/s) and low traffic (Figure 10c, λ Q i = 12.8 bits/s), the delay starts to rise sharply for f i > 0.7.
Furthermore, while the behavior is approximately the same for Figure 10b,c, the curves are slightly lower in the low-traffic case, and in both cases are considerably lower than in Figure 10a. Indeed, as the traffic is reduced, the waiting time in the forwarding queue is likewise reduced. It is noteworthy that in scenarios 1 and 3, the delay curves are nearly the same shape for all three traffic levels, with very slight improvement at reduced traffic.
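The observation that lighter traffic shortens the wait in the forwarding queue can be illustrated on a single queue. The sketch below uses the M/M/1 mean sojourn time T = 1/(μ − λ); this is an illustration only, not the paper's cross-layer queuing model, and the service rate μ is an assumed value, with the paper's bits/s loads reused directly as arrival rates:

```python
def mm1_sojourn(lam, mu):
    """Mean sojourn time T = 1/(mu - lam) of an M/M/1 queue; requires lam < mu."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (mu - lam)

mu = 1600.0  # assumed service rate for illustration
for lam in (12.8, 128.0, 1280.0):  # low / moderate / heavy loads from the paper
    print(f"lam={lam:7.1f}  T={mm1_sojourn(lam, mu):.5f} s")
```

The sojourn time grows slowly at low load but steeply as λ approaches μ, mirroring the gap between Figure 10a and Figure 10b,c.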

Attempt Probability q i
Figures 12-14 depict how the latency varies as a function of the attempt probability when the bit error rate changes in the first, second, and third scenarios, respectively. In the first and second scenarios with a bad channel ((a), ξ(γ i ) = 10 −1 ), the delay is unstable, reaching very high values of up to 6 × 10 12 s. However, in scenario 3, the delay is steadier, reaching a minimum when the attempt probability is between 0.2 and 0.5.
For the fair ((b), ξ(γ i ) = 10 −2 ) and good ((c), ξ(γ i ) = 10 −6 ) channels, the system is unstable for very small values of the attempt probability. For an attempt probability between 0.1 and 0.5, the delay is minimal; for values greater than 0.5, the delay begins to increase, which is expected given that the system then relies heavily on the ad hoc network. We also notice that the node closest to the base station suffers a greater transmission delay than the other nodes: it receives several packets to transmit, which congests its queue and therefore increases its transmission delay. In scenario 2, node 1 delivers all of its packets directly to the base station (ω 1 = 1), which explains its stable latency irrespective of the value of q i . In scenario 3, node 3 has the longest delay since it obtains more packets from its neighbors (ω 1 = 0.25 and ω 2 = 0.5); node 1, by contrast, sends most of its packets across the ad hoc network (ω 1 = 0.25), so the change in q i has a significant effect on its transmission delay.

Figure 15 demonstrates the delay encountered by each mobile device as a function of the attempt probability when the arrival rate in the queue is varied (heavy traffic, λ Q i = 1280 bits/s; moderate traffic, λ Q i = 128 bits/s; low traffic, λ Q i = 12.8 bits/s) for the first, second, and third scenarios. We conclude that the effect of varying q i is insensitive to the traffic density, since the transmission delay is the same for all traffic categories in all three scenarios.
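The existence of an interior optimum for the attempt probability matches the classical behavior of slotted random access: too small a q wastes idle slots, while too large a q causes collisions. A hedged sketch (the contention model and the number of contenders m are assumptions for illustration, not the paper's exact MAC model):

```python
# In a slotted random-access MAC with m contending transmitters, each attempting
# with probability q, a given node succeeds in a slot with probability
# q * (1 - q)**(m - 1); the expected number of slots per success is its inverse.
def slots_per_success(q, m):
    p = q * (1 - q) ** (m - 1)
    return float("inf") if p == 0 else 1.0 / p

m = 3  # a node plus two interfering neighbors in the linear chain (assumed)
best_q = min((k / 100 for k in range(1, 100)), key=lambda q: slots_per_success(q, m))
# best_q lands at 1/m on this grid, inside the 0.2-0.5 band observed above
```

With m = 3 the grid search returns q = 0.33, consistent with the minimal-delay region of 0.2 to 0.5 reported in the figures.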

Fraction of Cellular Traffic ω i
Figure 16 demonstrates how the delay changes as a function of the fraction of cellular traffic in the first scenario when the bit error rate varies. The higher the proportion of cellular traffic, the lower the delay for a good channel. Node 4 (the nearest to the base station) always uses ω 4 = 1, as it has only one option: transmitting its packets directly to the base station. The delay is plotted as a function of the fraction of cellular traffic for various traffic regimes in Figure 17. The three subfigures are practically identical, despite the change in the arrival flow in the queue.

Arrival Rate in Own Queue λ Q i
We turn now to plot the delay versus the arrival rate for the three scenarios (see Figure 18). For the good channel case, the delay increases with an increasing arrival rate in all scenarios. The first node remains almost stable because it carries only its own packets.

AoI Simulation
This section examines the behavior of the AoI metric when the fundamental parameters vary. Figures 19-21 depict the AoI as a function of the forwarding probability for three values of the bit error rate, for scenarios 1, 2, and 3. When the forwarding probability is too high, the system suffers from a high AoI per node: an arriving packet cannot be forwarded immediately, due both to a busy MAC layer and to other packets having priority in the queue. Moreover, passing through multiple nodes (the ad hoc network) automatically implies a high AoI.

Figures 22-24 depict the AoI as a function of the forwarding probability for all three scenarios when the arrival rate in the queue is varied. As observed before, the AoI rises with the forwarding probability, and rises all the more quickly the further upstream a node is in the chain (while node 1 has a constant AoI regardless of f i ). It is noteworthy that the load on the Q queue has little influence on the AoI, while the load on the F queue has a significant impact.

Figures 25-27 show the effect of the attempt probability on the AoI when the bit error rate is adjusted for the three proposed scenarios. When the attempt probability is too low, the system becomes unstable; the AoI decreases as the attempt probability increases, reaching a minimum value when q i is between 0.2 and 0.4. As the attempt probability increases further, more packets compete for transmission via the ad hoc network, the forwarding queues become more congested, and the AoI increases. Thus, the node farthest from the base station has the largest AoI compared to the node closest to the base station, since its packets must spend time waiting in each node along the path to the base station. In scenario 2, the first node, with ω 1 = 1, retains a stable AoI.

Next, Figures 28 and 29 plot the AoI as a function of the attempt probability for various traffic regimes in all three scenarios. Again, it can be observed that the arrival rate in the queue has little impact on the AoI. For scenarios 1 and 3, the subfigures are practically identical, despite the change in the arrival flow in the queue.

Fraction of Cellular Traffic ω i
For the first scenario, the AoI is plotted as a function of the fraction of cellular traffic in Figure 30. For a good channel, as the fraction of cellular traffic rises, the AoI decreases significantly until it reaches a minimum at ω i = 0.9. This decrease is explained by the fact that nodes send data directly to the base station, so packets arrive at their destination faster. Furthermore, the forwarding probability of nodes 1, 2, and 3 is 0.5, resulting in a congested forwarding queue at node 4, which explains its position as the node with the highest AoI. Next, the AoI is plotted as a function of the fraction of cellular traffic for different values of λ Q i in Figure 31. The system maintains the same behavior regardless of the arrival rate in queue Q.

Arrival Rate in its Own Queue λ Q i
Finally, the AoI is plotted as a function of the arrival rate in queue Q for all three scenarios in Figure 32. In all cases, the AoI is extremely high at low arrival rates, because such low traffic levels imply insufficient status updates at the base station. As λ Q i increases, the AoI drops at all nodes until it reaches a minimum value, then rises again as the queues start to fill up and the system moves toward saturation. The AoI also increases with the distance of a node from the base station, as this determines the number of hops required to reach it.
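This U-shape is the classical single-source behavior: for an M/M/1 FCFS queue with utilization ρ = λ/μ, the average AoI is (1/μ)(1 + 1/ρ + ρ²/(1 − ρ)), which blows up both as ρ → 0 and as ρ → 1. A sketch of this single-node illustration (not the paper's multi-hop model):

```python
def avg_aoi_mm1(rho, mu=1.0):
    """Average AoI of an M/M/1 FCFS status-update queue, 0 < rho < 1."""
    return (1.0 / mu) * (1.0 + 1.0 / rho + rho * rho / (1.0 - rho))

# Very low rho -> huge AoI (too few updates); rho near 1 -> huge AoI (congestion).
rhos = [k / 100 for k in range(1, 100)]
best = min(rhos, key=avg_aoi_mm1)
# The minimum sits near rho = 0.53: moderate utilization keeps information freshest.
```

The same qualitative trade-off appears per node in Figure 32, shifted by the extra hops that upstream packets must traverse.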

Concluding Remarks
Multi-hop networks promise to efficiently collect data from IoT devices deployed in a target area and relay their data to legacy systems, such as cellular networks. In this paper, we propose a comprehensive theoretical framework for analyzing and understanding the dynamics of such a network. The suggested model is intended to assist chiefly in the planning and sizing of an IoT network, as a means of ensuring target performance and effective deployment. We provide a queuing-theory-based model that allows for cross-layered optimization across the APP, NET, MAC, and PHY layers. The suggested model was evaluated using a discrete-event simulation, and it accurately predicts network performance. Our model can measure both the E2E delay and the AoI, making it an excellent choice for evaluating the freshness of information for active streams. We examined the impact of the forwarding probability, the attempt probability, the fraction of cellular traffic, the arrival rate in the queue, and other parameters; the determination of the stability region as a function of these factors constitutes an end result of interest. Many trade-offs have been outlined, along with a thorough discussion of parameter tuning and network design. This article opens the way to many exciting areas, including network design and optimal configuration, energy efficiency, wireless energy transfer, and flexible infrastructure. Future extensions of this work will examine energy consumption measures to adjust network parameters in order to ensure limited and/or balanced energy consumption.