Most cross-layer methods focus on a single layer, which interacts with the other layers in order to optimize the operation of that focal layer.
3.1.2. Channel Layer
Cross-layer methods concentrated on the channel layer consist of resource reservation, random access, and cooperative transmission methods.
In the works focused on the resource reservation at the channel layer (
Table 3), the channel layer rarely uses physical layer data, using mainly network layer data [
47]. The network layer reports on the data streams passing through the nodes, and on the basis of this data the channel layer decides which resources to reserve. When the network is split into clusters, route data helps to maximize the number of channels dedicated to intra-cluster communication, and in a tree-like network, where nodes transmit data to a single data collector node, the route tree allows disjoint channel resources to be allocated to the previous and next links. Resource reservation at the channel layer based on network layer information increases network capacity, improves route stability, and reduces power consumption.
Gong et al. [
48] propose the allocation of node channels using a routing protocol in such a way as to minimize interference of nodes along the route. The proposed method for allocating channels increases the bandwidth in comparison with the allocation of channels based on local node information.
In their subsequent work, Gong et al. [
49] suggest reserving channels using information about network layer routes. The proposed protocol uses fewer service messages and provides more bandwidth than using only channel layer information.
De Renesse et al. [
50] suggest that the channel layer should use the network layer information about the routes passing through the node and, on the basis of this information, pre-reserve bandwidth, minimizing the probability of packet collision. The network layer, in turn, searches for routes based on available reserved bandwidth.
Mansoor et al. [
47] divide ad hoc networks into clusters; the clusters are formed by the nodes themselves using a distributed algorithm. Access time in each cluster is represented as a synchronized sequence of superframes. A superframe consists of four periods: the beacon period, the spectrum sensing period, the adjacent node discovery period, and the data transmission period. In the beacon period, the cluster head sends information about synchronization and the allocated cluster resources. In this work, clustering is carried out so as to simultaneously minimize the number of clusters and allocate the maximum number of free frequency channels for intra-cluster communication. Joint work of the physical and channel layers is used to form the clusters and allocate the channels. The routing algorithm, in turn, prefers to include cluster-head nodes in the selected routes, as the links to them are more stable. In the considered ad hoc network, there are primary users to whom a certain channel resource is assigned and secondary users who, having found that the resource is not in use, may use it. When secondary users discover that the primary user has started using its resource again, the secondary user must switch to another free resource, and the switching takes time. The routing protocol looks for paths with the least delay. The delay is evaluated based on data from the link layer and consists of the queuing delay, the delay of switching to a new channel resource, and the waiting time for a channel resource to become free.
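As an illustration of how such a cross-layer delay metric could be combined during route selection, the sketch below sums per-link queuing, channel-switching, and channel-waiting delays reported by the channel layer and picks the candidate route with the smallest total. The data structure and function names are assumptions made for this example, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class LinkDelayReport:
    """Per-link delay components as reported by the channel layer (illustrative names)."""
    queue_delay: float   # time a packet waits in the transmit queue, seconds
    switch_delay: float  # time to switch to a new channel resource, seconds
    wait_delay: float    # time waiting for the channel resource to become free, seconds

def link_delay(report: LinkDelayReport) -> float:
    """Total per-link delay is the sum of its three components."""
    return report.queue_delay + report.switch_delay + report.wait_delay

def route_delay(reports: list[LinkDelayReport]) -> float:
    """Route delay is the sum of the delays of its links."""
    return sum(link_delay(r) for r in reports)

def best_route(candidates: dict[str, list[LinkDelayReport]]) -> str:
    """Pick the candidate route with the smallest cumulative delay."""
    return min(candidates, key=lambda name: route_delay(candidates[name]))

# Example: two candidate routes with per-link channel-layer reports.
routes = {
    "via_cluster_head": [LinkDelayReport(0.002, 0.001, 0.004), LinkDelayReport(0.003, 0.0, 0.002)],
    "via_member_node":  [LinkDelayReport(0.001, 0.005, 0.006), LinkDelayReport(0.002, 0.001, 0.003)],
}
print(best_route(routes))  # route with the least estimated delay
```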
Raj et al. [
51] proposed a cross-layer channel assignment and routing algorithm for a cognitive radio ad hoc network that provides stable paths. A novel metric for the probability of successful transmission on a channel is derived by considering channel statistics, the sensing periodicity of the secondary users, time-varying channel quality, and a time-varying set of idle channels from the physical layer. Furthermore, the number of available channels and their characteristics are considered when selecting a next-hop node, with the aim of avoiding one-hop neighbors located in the primary user transmission range. A primary user is a user to whom resources are granted but who does not use them all the time; secondary users attempt to use the primary users' resources when they are free.
In their subsequent work, Mansoor et al. [
52] offer a cross-layer approach that minimizes power consumption in UAV networks. The approach involves the network and channel layers. The network layer uses the AODV protocol for routing, the cluster head is selected by the glowworm swarm optimization (GSO) algorithm, and the Cooperative MAC protocol is used at the channel layer. The network has a cluster head. Using AODV, the cluster head finds routes to all nodes and builds a node tree. The cluster head then assigns TDMA slots to the other nodes along the node tree in such a way as to minimize packet interference and collisions, which results in lower power consumption, as fewer packets are retransmitted.
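A minimal sketch of conflict-avoiding slot assignment over a routing tree is given below; it only illustrates the general idea of giving each node a slot that does not clash with its parent or tree siblings, and it does not reproduce the interference constraints actually evaluated by the cluster head in [52].

```python
# Simplified, illustrative greedy TDMA slot assignment over a routing tree:
# each node receives the smallest slot not used by its parent or by the
# already-assigned children of that parent.

def assign_slots(tree: dict[str, list[str]], root: str) -> dict[str, int]:
    """tree maps a node to its children; returns node -> TDMA slot index."""
    slots = {root: 0}
    stack = [root]
    while stack:
        parent = stack.pop()
        taken = {slots[parent]} | {slots[c] for c in tree.get(parent, []) if c in slots}
        for child in tree.get(parent, []):
            slot = 0
            while slot in taken:
                slot += 1
            slots[child] = slot
            taken.add(slot)
            stack.append(child)
    return slots

# Example tree rooted at the cluster head "CH".
tree = {"CH": ["A", "B"], "A": ["C", "D"], "B": ["E"]}
print(assign_slots(tree, "CH"))  # exact slot numbers depend on traversal order
```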
In work focused on optimizing channel random access (
Table 4), the channel layer uses data from the physical and network layers. The physical layer evaluates the state of the channel and reports it to the channel layer. If the channel state is poor, the data link layer does not transmit the packet, as the packet is likely to be corrupted and would only interfere with other users. The channel layer can also indicate to the physical layer at what power the packet should be transmitted, trading off a high probability of successful packet delivery against increased interference for other users.
Toumpis et al. [
53] analyze the mutual influence of power control, queue types, routing protocols, and channel access protocols. The authors found that the effectiveness of the CSMA/CA channel layer protocol is highly dependent on the routing protocol, and they propose two channel-layer protocols that are superior to CSMA/CA. One of the protocols performs joint channel access control and power control, ensuring higher throughput and energy efficiency. The second protocol sacrifices energy efficiency in exchange for more bandwidth.
Pham et al. [
54] propose a cross-layer approach in which the physical layer shares the channel state with the other layers. If the channel state is bad, the upper layers decide not to transmit a packet, as transmission would only consume energy, create interference, and leave the sender waiting for a delivery confirmation. The approach is applicable to the Rayleigh fading channel, where the level of future fading can be predicted from previous measurements. The authors developed an attenuation prediction algorithm based on Markov chains and examined in detail the interaction of the physical and channel layers. They derived theoretical formulas for the bandwidth, packet processing speed, probability of packet loss, and average packet delay in Rayleigh channels. The proposed approach increases network capacity and reduces energy consumption.
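The sketch below shows a two-state Markov channel predictor as a simplified stand-in for such a fading prediction scheme: the channel is modeled as "good" or "bad", and transmission is deferred when the predicted probability of a bad state exceeds a threshold. The transition probabilities, threshold, and function names are assumptions for this example; in practice they would be estimated from past measurements reported by the physical layer.

```python
GOOD, BAD = 0, 1
# P[i][j] = probability of moving from state i to state j in the next slot.
P = [[0.9, 0.1],
     [0.4, 0.6]]

def predict_bad_probability(current_state: int, steps: int = 1) -> float:
    """Probability that the channel is bad `steps` slots from now."""
    dist = [0.0, 0.0]
    dist[current_state] = 1.0
    for _ in range(steps):
        dist = [dist[GOOD] * P[GOOD][0] + dist[BAD] * P[BAD][0],
                dist[GOOD] * P[GOOD][1] + dist[BAD] * P[BAD][1]]
    return dist[BAD]

def should_transmit(current_state: int, threshold: float = 0.5) -> bool:
    """Defer transmission if the channel is likely to be bad,
    saving energy and avoiding interference."""
    return predict_bad_probability(current_state) < threshold

print(should_transmit(GOOD))  # True: the channel is likely to stay good
print(should_transmit(BAD))   # False: the channel is likely to remain faded
```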
In cooperative transmission-oriented works (
Table 5), the channel layer interacts with the network and physical layers. Cooperative transmission relies on the fact that when the sending node transmits a data packet, the packet is received not only by the receiving node but also by several other nodes (usually called auxiliary, or helper, nodes). The receiving node sends an acknowledgment packet. If no acknowledgment packet follows, one of the nodes that received the data packet and whose channel to the recipient is less attenuated forwards the packet to the recipient. The difficulty of cooperative transmission lies in correctly forming cooperative transmission groups and in deciding when cooperative transmission is more efficient and when it causes interference and is less reliable than ordinary transmission. Interaction with the physical layer in cooperative transmission is used to obtain information about channel fading and to set the signal-code structures for packet transmission by the sending node and the auxiliary node. The network layer provides information about the direction of data flows; with this information, groups of nodes can be effectively selected for cooperative transmission.
Dai et al. [
55] propose interaction of the channel and physical layers in cooperative transmission. In this work, nodes are expected to possess orthogonal communication channels (frequency, time slot, etc.). When a packet is transmitted, it is received by the recipient node and by nodes that can participate in the cooperative transmission. The authors suggest automatic packet forwarding by auxiliary nodes if they do not observe a confirmation packet after some time. Only nodes with the same communication channels as the recipient node are used for cooperative transmission forwarding.
Liu et al. [
56] propose a protocol in which the sender node first encodes the packet with an error-correcting code such that half of the packet bits can be recovered. The sender node then sends one half of the packet to the helper node and the other half to the recipient node. The helper node restores the second half of the packet, re-encodes it, and sends it to the recipient node. Because the packet is split into two halves, the halves can be transmitted at different speeds depending on the channel state. For cooperative transmission, the channel layer uses physical layer information about the channel state and the available signal-code structures for packet transmission.
Dong et al. [
57] propose cooperative beamforming (virtual antenna with the help of multiple nodes). For cooperative beamforming, the physical and channel layers of the nodes interact. Nodes in the neighborhood use time division multiple access (TDMA) to broadcast data packets. Then, nodes that participate in the cooperative beamforming transmit the packets to recipients. To form a virtual antenna, node coordinates and time synchronization are required.
Aguilar et al. [
58] have the channel layer use signal strength information from the physical layer to decide whether or not to use cooperative transmission. Cooperative transmission groups are formed based on route data from the network layer.
Ding et al. [
59] use network layer data about routes to form cooperative transmission groups. At the same time, at the physical layer, the control of transmission power is optimized based on data from the channel layer about cooperative transmission groups.
Wang et al. [
60] propose a cooperative transmission using network packet coding. When the helper node re-sends a data packet to a recipient, it cannot send its own packets. To solve this problem, the authors suggested using network packet coding. The helper node creates a modified packet based on its own packet and the packet intended for the recipient node. The receiving node, based on a corrupted packet from the sending node and a modified packet from the helper node, recovers both the packet from the sender and the packet from the helper node. As a result, network bandwidth is increased, and latency is reduced.
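The core idea of combining two packets into one coded transmission can be illustrated with a simple XOR example; the actual coding scheme in [60] additionally handles corrupted packets and may differ in detail. A node that already holds one of the two packets can recover the other from the single coded packet sent by the helper.

```python
def xor_packets(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length packets (shorter packets would be padded in practice)."""
    return bytes(x ^ y for x, y in zip(a, b))

sender_packet = b"DATA_FOR_RECIPIENT"
helper_packet = b"HELPERS_OWN_PACKET"   # same length for simplicity

coded = xor_packets(sender_packet, helper_packet)   # what the helper broadcasts

# A node that already holds one of the two packets (e.g., it overheard the
# sender's transmission) recovers the other packet from the coded transmission:
recovered_helper_packet = xor_packets(coded, sender_packet)
recovered_sender_packet = xor_packets(coded, helper_packet)

assert recovered_helper_packet == helper_packet
assert recovered_sender_packet == sender_packet
```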
3.1.3. Network Layer
Cross-layer methods concentrated on the network layer consist of cross-layer routing protocols and cross-layer routing metrics, which can be used to turn a one-layer routing protocol into a cross-layer routing protocol.
Cross-layer routing metrics are composite metrics based on metrics collected from multiple layers (
Table 6). Cross-layer routing metrics are used for existing routing protocols, turning routing protocols into cross-layer routing protocols.
Park et al. [
61] propose a route reliability metric as the probability of successful packet delivery across the route. Successful packet delivery rate calculation is based on the received packets’ signal-to-noise ratio from the physical layer.
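A minimal sketch of such a metric follows: each link's delivery probability is derived from its signal-to-noise ratio, here via a simple uncoded BPSK-style bit error model that is an assumption of this example rather than the model used in [61], and the route reliability is the product over the links.

```python
import math

def link_delivery_probability(snr_linear: float, packet_bits: int = 1024) -> float:
    """Illustrative SNR-to-delivery-probability mapping: per-bit error
    probability of an uncoded BPSK-like link, then the probability that
    every bit of the packet survives."""
    ber = 0.5 * math.erfc(math.sqrt(snr_linear))
    return (1.0 - ber) ** packet_bits

def route_reliability(link_snrs: list[float]) -> float:
    """Route reliability as the product of per-link delivery probabilities."""
    p = 1.0
    for snr in link_snrs:
        p *= link_delivery_probability(snr)
    return p

# Example: a 3-hop route with per-link SNR values reported by the physical layer.
print(route_reliability([10.0, 8.0, 12.0]))
```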
Draves et al. [
62] propose two metrics: the metric of expected transmission time (ETT) and the weighted cumulative expected transmission time (WCETT). The ETT metric improves the expected transmission count (ETX) metric [
74] (ETX equals the expected total number of transmissions over all hops, including retransmissions) by taking the transmission speed into account. The ETT metric is equal to the ETX metric multiplied by the ratio of the packet size to the transmission speed. The WCETT metric improves the ETT metric by giving a lower score to routes in which consecutive lines use the same channel resources (data about channel resources is obtained from the channel layer), in order to reduce interference along the route. However, this metric does not account for interference from other routes.
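A sketch of the two computations follows, using the usual weighted form of WCETT in which the total route ETT is blended with the ETT accumulated on the busiest channel; parameter names and example values are illustrative, and the exact formulation is given in [62].

```python
def ett(etx: float, packet_size_bits: float, link_rate_bps: float) -> float:
    """Expected transmission time: ETX scaled by the time one transmission takes."""
    return etx * (packet_size_bits / link_rate_bps)

def wcett(link_etts: list[float], link_channels: list[int], beta: float = 0.5) -> float:
    """Weighted cumulative ETT: penalizes routes whose links share a channel.
    link_etts[i] is the ETT of hop i, link_channels[i] the channel it uses."""
    total = sum(link_etts)
    per_channel: dict[int, float] = {}
    for t, ch in zip(link_etts, link_channels):
        per_channel[ch] = per_channel.get(ch, 0.0) + t
    bottleneck = max(per_channel.values())   # ETT accumulated on the busiest channel
    return (1 - beta) * total + beta * bottleneck

# Example: a 3-hop route where the first two hops share channel 1.
etts = [ett(1.2, 8192, 6e6), ett(1.5, 8192, 2e6), ett(1.1, 8192, 11e6)]
print(wcett(etts, link_channels=[1, 1, 6]))
```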
Yang et al. [
63] propose the metric of interference and channel-switching (MIC). The MIC metric improves the WCETT metric by accounting for interference from other routes. The metric uses the number of adjacent nodes for each line of the route to evaluate interference from other routes.
The 802.11s standard [
64] uses the airtime link metric (ALM). This metric measures the time required to deliver a packet of 8224 bits between the sender and the receiver. To compute this metric, the network layer uses the following data from the channel layer: the number of bits in the frame, the line transmission speed, the probability of frame error, the channel access delay, and the frame service field size.
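The sketch below follows the general form of such an airtime cost: fixed overheads plus the serialization time of the 8224-bit test frame, inflated by the retransmissions implied by the frame error rate. The example overhead values are assumptions; the exact constants and formulation are defined in the standard [64].

```python
TEST_FRAME_BITS = 8224  # reference frame size used by the metric

def airtime_link_metric(channel_access_overhead_s: float,
                        protocol_overhead_s: float,
                        rate_bps: float,
                        frame_error_rate: float) -> float:
    """Expected airtime to deliver the test frame over this link, in seconds:
    overheads plus serialization time, divided by the frame success probability."""
    one_attempt = channel_access_overhead_s + protocol_overhead_s + TEST_FRAME_BITS / rate_bps
    return one_attempt / (1.0 - frame_error_rate)

# Example: a 54 Mbit/s link with a 10% frame error rate and assumed overheads.
print(airtime_link_metric(75e-6, 110e-6, 54e6, 0.10))
```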
Ramachandran et al. [
65] propose the metrics for estimating the probability of route packet loss, the probability of line packet loss, and the remaining battery charge of the node from the power of the received signal.
Oh et al. [
66] propose a metric defined as the ratio of the time taken to transmit a packet to the sum of the transmission time and the time taken to wait for the channel to be free. If this metric is small, the channel is less congested, and the line has a low transmission delay; in this case, the line is more suitable for real-time applications.
Hieu et al. [
67] offer a route metric for ad hoc networks whose radio links support multiple transmission rates. The metric takes into account the maximum allowable transmission rate and the number of packet retransmissions. Selecting the route with the lowest metric value yields a high-bandwidth route and prevents the selection of low-bandwidth routes; a small metric value also means that the route has few intermediate nodes. The metric is based on data on the number of packet retransmissions from the channel layer and on the available line transmission speeds from the physical layer.
Katsaros et al. [
34] use a weighted sequence of the signal-to-noise ratio and packet loss probability data from the channel layer to estimate line reliability.
Mucchi et al. [
68] propose a metric that takes into account the mobility of nodes, the line load, and the fading of radio lines; the authors call it the network intersection metric. It uses the average packet waiting time in the queue, the average packet transmission time, and a packet loss probability estimate based on the signal-to-noise ratio, packet length, modulation, and error-correcting code.
Ahmed et al. [
69] use a metric for line reliability estimation based on queue size, number of hops, distance, and relative speed from the sender node to the receiving node.
Bhatia et al. [
70] offer a path stability metric based on queue length, the signal-to-noise ratio, and the probability of packet loss. The queue size assesses the probability of packet drop due to line buffer overflow, the packet loss probability estimates interference, and the signal-to-noise ratio reflects distance and node mobility (if the node moves far away, the signal-to-noise ratio drops). By combining the three indicators into one metric, the stability of lines and routes can be estimated.
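How the three indicators might be folded into a single link score, and a route score taken as the weakest link, is sketched below; the normalization, weights, and thresholds are assumptions of this example and are not the combination used in [70].

```python
def link_stability(queue_fill: float,        # 0..1, fraction of the line buffer in use
                   loss_probability: float,  # 0..1, estimated packet loss on the line
                   snr_db: float,
                   snr_floor_db: float = 5.0,
                   snr_ceiling_db: float = 30.0,
                   weights: tuple[float, float, float] = (0.3, 0.4, 0.3)) -> float:
    """Higher is more stable. Each component is mapped to 0..1 and weighted."""
    snr_score = min(max((snr_db - snr_floor_db) / (snr_ceiling_db - snr_floor_db), 0.0), 1.0)
    w_q, w_l, w_s = weights
    return w_q * (1.0 - queue_fill) + w_l * (1.0 - loss_probability) + w_s * snr_score

def route_stability(links: list[tuple[float, float, float]]) -> float:
    """A route is only as stable as its weakest link."""
    return min(link_stability(q, p, s) for q, p, s in links)

# Example: a two-hop route; the second hop has a fuller buffer and lower SNR.
print(route_stability([(0.2, 0.05, 25.0), (0.7, 0.20, 12.0)]))
```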
Zent et al. [
71] present a metric based on delay, bandwidth, and geographic direction.
Jagadeesan et al. [
72] propose a metric based on the residual energy of the node and channel occupancy.
Qureshi et al. [
73] derived a theoretical estimate of route energy consumption based on the probability of packet loss, the number of retransmissions, and the number of route hops. The authors also created an algorithm for calculating the transmission power of all route nodes that minimizes the energy spent for a given probability of packet delivery.
A single-layer routing protocol in ad hoc networks, using probing packets, can obtain only the list of nodes through which the probing packet passed and the packet travel time, which estimates the delay of the discovered route (
Table 7). As a result, the routing protocol can select routes by delay and hop number. Cross-layer routing protocols can use the cross-layer metrics described above or collect information from different layers.
Canales et al. [
75] offer a routing protocol that uses data about free time slots from the channel layer to reduce interference. When a route request packet is forwarded, each hop checks whether non-interfering time slots can be reserved on the previous and next hops; if this is not possible, the request packet is dropped.
Amel et al. [
76] propose a modification of the AODV routing protocol, where instead of the hop number metric, the delivery probability is estimated based on the signal-to-noise ratio obtained from the physical layer.
Chen et al. [
77] suggest an improvement to the AOMDV routing protocol. The new routing protocol uses a weighted function from the queue size of the line buffer and the residual energy of the node to find routes.
Madhanmohan et al. [
78] propose a modification of the AODV protocol in which a route request carries information about how much power is needed to transmit the packet along the route. If one of the route lines does not support the specified power, the route request packet is not sent through that line.
Weng et al. [
79] propose a cross-layer protocol spanning the channel and network layers. The new protocol minimizes the energy consumption in the network. The routing protocol uses data from the channel layer and finds paths with minimal power consumption for packet transmission. The energy consumption is estimated based on the probability of successful packet delivery on a line, the number of packet collisions, the residual energy of the nodes, and the number of packets transferred. The channel layer protocol gives a larger share of the channel to the nodes that are more likely to transmit packets successfully.
Attada et al. [
80] propose an improvement of the DYMO routing protocol based on channel resource reservation. When reserving resources, physical layer parameters such as transmission power, transmission speed, signal constellation size, and error-correcting code type are requested. Reserving resources on the basis of physical layer parameters, rather than just line capacity, increases the network capacity.
Ahmed et al. [
81] propose using the signal-to-noise ratio: when selecting the next node to forward the route request message, only nodes whose signal-to-noise ratio is at least the minimum allowable value are selected. A high signal-to-noise ratio implies that the node is close and will not move out of radio range for a relatively long time.
Chander et al. [
82] offer a multicast routing protocol. The protocol uses physical, channel, and network layer information to form a multicast tree. The following data is used: signal strength, fading estimation, line lifetime, residual node energy, and cost of updating the multicast tree. Cross-layer interaction allows the routing protocol to find stable, slow-changing multicast route trees.
Safari et al. [
83] propose an enhancement of the AODV routing protocol called the cross-layer adaptive fuzzy-based ad hoc on-demand distance vector routing protocol (CLAF-AODV). The method employs two-level fuzzy logic and a cross-layer design to select nodes with a higher probability of participating in broadcasting, considering parameters from the first three layers of the open systems interconnection (OSI) model, in order to achieve quality of service, stability, and adaptability. It investigates not only the quality of the node and the network density around it but also the path that the broadcast packet traveled to reach this node. The proposed protocol reduces the number of broadcast packets and significantly improves network performance with respect to throughput, packet loss, interference, and average energy consumption compared to the standard AODV and fixed probability AODV (FP-AODV) routing protocols.
3.1.4. Transport Layer
The transport layer is responsible for congestion control. The transport layer protocol detects congestion by measuring the interarrival time between delivery confirmation packets or by the absence of confirmation packets. When congestion is detected, the transport layer starts transmitting packets at a lower rate. This congestion estimation accounts only for line buffer overflow. However, packets in ad hoc networks can also be lost due to line noise and packet collisions; in that case, slowing down packet transmission at the transport layer does not affect the probability of packet loss, and the transport layer slows down to the minimum speed, underutilizing the network capacity. To avoid this problem, cross-layer transport protocols use data from the channel and network layers to determine the cause of packet loss (
Table 8).
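The sketch below illustrates the general idea behind such loss-cause-aware transport behavior: lower layers annotate each loss with its cause, and only congestion-related losses trigger the usual window reduction. The enumeration, policy, and names are assumptions of this example, not a specific protocol from Table 8.

```python
from enum import Enum, auto

class LossCause(Enum):
    BUFFER_OVERFLOW = auto()   # reported by the channel layer: real congestion
    RADIO_ERROR = auto()       # noise or collision on the radio line
    ROUTE_FAILURE = auto()     # reported by the network layer

def handle_loss(cause: LossCause, cwnd: int, min_cwnd: int = 1) -> tuple[int, str]:
    """Return the new congestion window and the action taken."""
    if cause is LossCause.BUFFER_OVERFLOW:
        return max(cwnd // 2, min_cwnd), "halve window (real congestion)"
    if cause is LossCause.RADIO_ERROR:
        return cwnd, "retransmit, keep window (loss not caused by congestion)"
    return cwnd, "freeze transmission, request a new route from the network layer"

print(handle_loss(LossCause.RADIO_ERROR, cwnd=16))
```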
Yu et al. [
84] propose an adaptation of the TCP protocol to ad hoc networks. When delivery confirmation packets do not arrive, TCP assumes that packets were dropped due to line congestion rather than radio line noise or a route disruption caused by node movement, and it slows down the transmission speed, underutilizing the network's free capacity. To avoid this, the channel layer reports the packet loss to TCP so that the packet can be retransmitted from the TCP cache of an adjacent node. This allows TCP to avoid unnecessary congestion control and use the full available line capacity.
Kliazovich et al. [
85] modify the TCP protocol to take into account the channel layer data: the delay and the capacity of the line. More data allows for more accurate congestion estimates than using only inter-arrival time of delivery confirmation packets.
Nahm et al. [
86] propose a TCP protocol modification to address the problem of cascading network overload. Cascading overload begins when the transport protocol overloads one of the route lines; the channel layer detects the line overload and reports a line break to the routing protocol; the routing protocol then starts the procedure of finding a new route, flooding the network with route request messages. The authors therefore propose a congestion control mechanism at the transport layer that reduces the likelihood of network congestion caused by the search for new routes. To achieve this, they propose a fractional increase of the TCP window size with explicit notification from the network layer that the route is not broken.
Chang et al. [
87] offer transport and network layer protocols that use network event information. Network events are route and connection failures, network packet reception errors, line buffer overflow at the channel layer, and long channel access times. The transport protocol behaves as follows. When a connection is severed, the transport protocol continues to transmit data packets and asks the network layer to find a new route. If a packet transmission on a line fails, the transport protocol retransmits the packet without waiting for a delivery confirmation packet, which will never be sent. When a packet is discarded due to line buffer overflow, the transport protocol triggers the congestion control mechanism; in the case of a long channel access time, it does the same. Each node maintains an overload metric counter, which increases when a buffer overflow event or a long channel access time event occurs and decreases when the event disappears. Routing is performed using the hop metric and the overload metric.
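A minimal sketch of such a per-node overload counter and its use in a combined route cost is shown below; the class and function names, and the way the counter is folded into the route cost, are assumptions made for illustration rather than the exact mechanism of [87].

```python
class OverloadCounter:
    """Per-node congestion indicator driven by channel-layer events."""
    def __init__(self) -> None:
        self.value = 0

    def event_appeared(self) -> None:
        # e.g., line buffer overflow or long channel access time detected
        self.value += 1

    def event_cleared(self) -> None:
        # the congestion condition disappeared
        self.value = max(self.value - 1, 0)

def route_cost(hops: int, node_overloads: list[int], alpha: float = 1.0) -> float:
    """Combine the hop metric with the overload metric of the nodes on the route."""
    return hops + alpha * sum(node_overloads)

node = OverloadCounter()
node.event_appeared()   # buffer overflow detected
node.event_appeared()   # long channel access time detected
node.event_cleared()    # one of the conditions disappeared
print(route_cost(hops=3, node_overloads=[node.value, 0, 1]))
```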
Alhosainy et al. [
88] use multipath routing and optimize the network bandwidth by selecting, at the transport layer, total speeds for a set of routes and the distribution of speeds within a set of routes, depending on the packet collision probability. The optimization problem is presented as a dual decomposition problem.
Sharma et al. [
89] propose improvements to the MPTCP transport protocol. To avoid congestion control due to packet loss, the transport protocol uses a route delay variance estimate and an average number of line retransmissions in routes. As a result, the transport protocol can differentiate between packet drops due to line buffer overflows and packet corruption in radio lines.
3.1.5. Application Layer
Cross-layer methods concentrated on the application layer consist of overlay network methods and applications.
An overlay network is a collection of nodes and the services they provide (e.g., file sharing). An overlay network is implemented by applications. The overlay network has its own routing and neighbor discovery. However, the problem is that the overlay network topology and routes in it may not correspond to the physical network: neighboring nodes in the overlay network may be very far away from each other, and short routes in the overlay network may be very long in the underlying physical network. Therefore, cross-layer overlay networks use routing protocols to collect information about the overlay network. As a result, information about both the physical network and the overlay network is collected simultaneously, hence minimizing the amount of service information sent out by the overlay network (
Table 9).
Beylot et al. [
90] make the “Gnutella” peer-to-peer application layer protocol, taken from conventional networks, interact with the network layer, as the peer discovery task is the same as the route discovery task. As a result, application layer service data are carried together with the network layer service data of the routing protocol. Route discovery by the routing protocol succeeds more often than peer discovery using “Gnutella’s” own peer node discovery protocol.
In the works [
91,
93], interaction with the OLSR routing protocol is suggested without specifying a particular overlay network protocol.
Delmastro et al. [
92] propose the adaptation of the “Pastry” protocol to ad hoc networks.
Boukerche et al. [
94] propose an adaptation of the peer-to-peer (P2P) network “Gnutella” to ad hoc networks by utilizing node location information and information from the network layer.
Kuo et al. [
95] propose a P2P overlay network that interacts with the network and physical layers. To detect link disconnections with virtual nodes, the network layer sends route disconnection messages and the physical layer sends signal-to-noise ratio values to the application layer. A gradual decrease in the signal-to-noise ratio suggests that a virtual node of the P2P network is about to disconnect, and the P2P nodes then update the virtual network topology. Furthermore, all P2P nodes may appear connected to each other through routes that share common nodes: the virtual network looks fully connected, but the routes redundantly pass through the same physical nodes. By using route information from the network layer, this problem of false full connectivity can be avoided.
There are few cross-layer applications that adapt to other layers, because data from the application layer is usually treated as input data for the other layers (
Table 10).
In vehicular ad hoc networks, an application disseminating moving-vehicle information can disseminate information more or less frequently depending on the rate of change of the ad hoc network state (information from the other four layers), which reflects vehicle trajectories and distribution density, thus avoiding network congestion and delivering only the most critical information.
Puthal et al. [
96] propose a congestion control mechanism for vehicular ad hoc networks in which, depending on the congestion level estimate, the application layer determines the number of messages to be sent and the size of the sliding window for congestion detection; the application layer thus also performs the task of the transport layer. Information from multiple layers about channel occupancy, queue occupancy, the number of neighboring nodes, and the transmission speed is used to estimate the congestion level.
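An illustrative version of such an application-layer congestion estimate built from cross-layer inputs is sketched below, together with a mapping from the estimated level to a message budget and sliding window size. The weights, thresholds, and names are assumptions of this example, not the values used in [96].

```python
def congestion_level(channel_busy: float,   # 0..1 channel occupancy
                     queue_fill: float,     # 0..1 queue occupancy
                     neighbors: int,
                     tx_rate_bps: float,
                     max_neighbors: int = 30,
                     ref_rate_bps: float = 6e6) -> float:
    """Weighted combination of cross-layer indicators into a 0..1 congestion level."""
    density = min(neighbors / max_neighbors, 1.0)
    rate_penalty = 1.0 - min(tx_rate_bps / ref_rate_bps, 1.0)
    return 0.35 * channel_busy + 0.35 * queue_fill + 0.2 * density + 0.1 * rate_penalty

def application_schedule(level: float) -> tuple[int, int]:
    """Map the congestion level to (messages per interval, sliding window size)."""
    if level < 0.3:
        return 20, 16
    if level < 0.6:
        return 10, 8
    return 3, 4

lvl = congestion_level(channel_busy=0.7, queue_fill=0.5, neighbors=18, tx_rate_bps=2e6)
print(lvl, application_schedule(lvl))  # higher congestion -> fewer messages, smaller window
```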