An Efficient Clustering Protocol for Cognitive Radio Sensor Networks

Abstract: Wireless sensor networks are considered an integral part of the Internet of Things, which is the focus of research centers and governments around the world. Clustering mechanisms and cognitive radio, in turn, are considered promising wireless network technologies for network management and spectral efficiency, respectively. In this paper, we consider the flaws in the previously proposed network stability-aware clustering technique. In particular, we demonstrate that existing solutions do not operate properly based on the remaining energy and the quality of available common channels, even if their fusion is declared. In addition, security issues have not been sufficiently developed. We offer an approach to address these flaws. To improve protocol efficiency, the problem of parameter tuning is discussed, and a performance analysis of the proposed solution is provided as well.


Introduction
Integration of the Internet of Things and wireless sensor networks is paving the way toward optimized production processes, improved operational efficiency in enterprises, and the rational delivery of high-quality services. The latest intelligent manufacturing technologies are being developed and implemented around the world. The growth rates in the respective applications have already exceeded the wildest expectations. It is estimated that about 10 billion devices used in the industrial sector are connected to the Internet [1]. However, the rapid expansion of wireless technologies has brought many scientific and technical challenges for both academia and industry. An important and timely problem is the research and development of intelligent and efficient protocols that allow sensor networks to coexist with the existing wireless infrastructure while maintaining the performance required of Internet of Things (IoT) applications [2].
Cognitive radio (CR) technologies are intended to solve the coexistence problem and improve the fault tolerance of wireless transmissions in a heavily congested environment. In a CR network, a node captures spectrum state information through interference measurements, and then searches for, and uses, the available spectrum resources so that different wireless devices can share the same frequency bands without causing problems for each other. There are two types of users in CR networks. Primary users (PUs) are the licensed users; secondary users (SUs) are allowed to share the licensed channels with PUs provided there is no harmful interference with the PUs. This approach resolves the tension between rapidly growing wireless traffic and spectrum scarcity [3]. Thus, Cognitive Radio Sensor Networks (CRSNs) are capable of meeting the stringent quality of service (QoS) requirements demanded by various IoT applications [4].
To effectively manage communications in a wide variety of distributed wireless systems, a clustering technique is usually used [5]. In accordance with prescribed rules, neighboring network nodes are combined into groups called clusters. A cluster head is elected from among the cluster members. The cluster head is responsible for intra-cluster as well as inter-cluster communications. Clustering protocols are designed to improve the performance of network communications and ensure stable operation and scalability of the networks. Clustering is very important for CRSNs operating in a highly dynamic, unstable wireless environment due to PU activity. Besides monitoring the geographic proximity and the residual energy of the nodes, clustering protocols for CRSNs have to take into account the common licensed channels available to cluster members. This is called spectrum-aware clustering. To take advantage of clustering, we need to overcome a number of challenges due to dynamic changes in the available channels, the heterogeneous quality of those channels, and so on. Thus, the research and development of clustering protocols for CRSNs is an important and timely problem. The recent network stability-aware clustering (NSAC) protocol outperforms existing solutions. However, the failures of this protocol motivate us to investigate their causes and, thereby, improve clustering protocols for CRSNs.
The contributions of this paper are summarized as follows.

•
We point out critical flaws in the existing network stability-aware clustering technique. In particular, we show that the previously used channel availability metrics are generally untenable, and we develop the appropriate formalism to prove this;

•
We argue that the clustering procedure has to be revised;

•
We discuss how to fix the identified flaws. We offer alternative indicators for the selection of the cluster head. It is also suggested to limit the cluster size to a predetermined number. We present an analytical framework to calculate this number and examine the performance of our proposals. The performance analysis results are provided as well.
The rest of this paper is organized as follows: Section 2 introduces related works. Special attention is paid to the network stability-aware clustering protocol for CRSNs. A critical analysis of the existing NSAC technique is provided in Section 3. Section 4 presents an approach to cluster head selection and cluster formation, along with the results of a numerical analysis to evaluate the performance of the proposed method. Finally, Section 5 concludes the paper.

Related Works
Clustering is a fairly common technique for cognitive radio networks. Several recent studies have reported that the proper use of this technique can substantially improve the performance of QoS support mechanisms. For example, a Bayesian method for nonparametric channel clustering, which determines the QoS levels supported over the available licensed channels, was proposed in [6]. The proposed method is based on an unsupervised clustering scheme and outperforms K-means and other baseline clustering algorithms.
A comprehensive survey of clustering methods for CRSNs was presented in [5]. The clustering methods discussed are mainly based on the number of available channels. In addition, the authors of that paper noted that there are not enough clustering investigations satisfying the particular requirements of CRSNs, such as limited battery power in the sensor nodes and heterogeneous licensed channels.
In [7], the authors modified a basic distributed clustering protocol for wireless sensor networks, named Low Energy Adaptive Clustering Hierarchy (LEACH) [8], and delivered a spectrum-aware extension of the LEACH protocol, named CogLEACH. The modified protocol utilizes the number of free channels as a weight in the probability of each sensor node becoming a cluster head. It was shown that CogLEACH is more efficient than the LEACH protocol. However, the issues of network topology and channel quality were not properly addressed.
In [9], the spectrum-aware clustering approach is based on a joint representation of the network topology and spectrum availability in undirected bipartite graphs. To obtain spectrum-aware clusters, the authors proposed solving the problem of constructing bi-cliques of maximum size from the bipartite graphs. The protocol requires heavy computation and ignores the residual energy of the network nodes.
The weighted clustering metric introduced in [10] includes temporal-spatial correlation, confidence level, and residual energy. The authors use a very strong assumption that the Euclidean distance between any two nodes in the network is known and does not change. Moreover, the channel state is ignored.
Recently, the NSAC protocol was proposed [11]. Unlike previous protocols, NSAC handles both power consumption and spectrum dynamics simultaneously. The enclosed simulation results demonstrate that NSAC substantially outperforms existing protocols. Let us consider NSAC in detail in the next subsection.

Network Stability-Aware Clustering Protocol
According to the system model [11], a set of licensed channels, C, is opportunistically available to the CRSN, and |C| = m. A cognitive sensor (CS) may use a licensed channel if it is not used by PUs. PU activity on the ith channel is considered a random process with busy and idle states. The probability that the ith licensed channel is available to SUs is denoted by p_i. Correspondingly, the probability that this channel is used by PUs is 1 − p_i. The channel quality metric Q_i is assigned to each channel i according to (1), where M_i is the average idle duration, and ε is a user-defined parameter.
It is important to note that the authors of NSAC gave explicit instructions regarding the choice of ε. Channel quality metric (1) is used to calculate the weight of a network node in terms of spectrum availability. For this purpose, NSAC uses a graph-theoretic approach similar to the one in [12]. CS k actualizes the sets of its neighboring nodes, N_k, and available channels, C_k (i.e., C_k is a subset of C), and creates a bipartite graph (N_k, C_k, L_k), where C_k and N_k are independent sets of vertices, and L_k is a set of edges. An edge, l ∈ L_k, connects vertex v_N ∈ N_k to vertex v_C ∈ C_k if channel v_C is available to CS v_N. The weight of edge l is defined as the quality of the corresponding channel, v_C. Next, the maximum edge biclique (N*_k, C*_k, L*_k) is calculated. The weight of CS k in terms of spectrum availability, W_k,C, is then computed from this biclique. We illustrate the calculation of W_k,C in Figure 1. The indexes of available channels for each CS are given in square brackets.
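To make the biclique computation concrete, the following Python sketch finds the maximum edge biclique by brute force. The function name `max_edge_biclique` and the reading of the weight as |N'| times the summed channel qualities are our assumptions for illustration (the exact weight expression is given by a display equation in [11]); brute force over channel subsets is exponential and only meant for toy instances such as the one in Figure 1.

```python
from itertools import combinations

def max_edge_biclique(avail, quality):
    # avail: node -> set of available channels; quality: channel -> Q value.
    # Every channel subset C' induces the node set N' = {v : C' subset of avail[v]};
    # (N', C') is then a biclique whose total edge weight is |N'| * sum(Q).
    channels = sorted({c for s in avail.values() for c in s})
    best = (0.0, set(), set())
    for r in range(1, len(channels) + 1):
        for cs in combinations(channels, r):
            ns = {v for v, s in avail.items() if set(cs) <= s}
            weight = len(ns) * sum(quality[c] for c in cs)
            if weight > best[0]:
                best = (weight, ns, set(cs))
    return best  # (biclique weight, N*, C*)
```

For a toy instance where nodes k and b share channels {1, 2} while a and c each see only one channel, the search returns the two-node, two-channel biclique.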
The total weight of CS k is defined as W_k = µ·W_k,C + (1 − µ)·W_k,e, where µ is introduced as a balance factor between network stability and remaining energy, and W_k,e is a normalized residual energy term computed from E_j, the residual energy in CS j.
The CS with the largest total weight is marked as the cluster head. This node (e.g., CS k) and its neighbors, N*_k, form a cluster and are excluded from further consideration. Other nodes update environmental information and repeat the process. Thus, a good candidate for the cluster head has enough energy and can use a set of lightly loaded channels.
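The iteration just described can be sketched in a few lines of Python. Here `weight` stands for the total weight of Formula (3) and `neighbors` for the neighbor sets; both are fixed inputs in this sketch, whereas the real protocol recomputes them from environmental information at every round.

```python
def nsac_style_clustering(nodes, weight, neighbors):
    # Greedy head election: the remaining CS with the largest total weight
    # becomes a cluster head; it and its remaining neighbors form a cluster
    # and are excluded from further consideration.
    remaining = set(nodes)
    clusters = []
    while remaining:
        head = max(remaining, key=lambda n: weight[n])
        members = {head} | (neighbors[head] & remaining)
        clusters.append((head, members))
        remaining -= members
    return clusters
```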

Critical Analysis of NSAC
In this section, we argue that the NSAC protocol fails to meet the declared goals. Therefore, the protocol as currently presented is impractical for deployment in CRSNs.

Channel Quality Metric
Let us assume that for some channel i it is observed that M_i >> M_j and p_i ≈ p_j for all j ∈ C, j ≠ i. The residual energy is the same for all CSs. Obviously, CS i is the best choice for cluster head. However, for an admissible choice of ε, Q_i becomes negative, and Q_i ≪ Q_j for all j ≠ i. If we increase the power (making ε smaller), then the channel quality gets worse. In Figure 2, we plot the channel quality metric for different values of p_i, with M_i = 1.
Note that the choice of a small ε (keeping the requirement ε > 1) leads to an unlimited negative value of the channel quality metric for any fixed M_i and p_i; i.e., Q_i tends to −∞ as ε approaches 1. This contradicts the NSAC authors' assertion regarding the choice of ε. To avoid this absurd situation, the channel quality metric must remain bounded from below; from (1), we obtain condition (8) on ε. The parameter ε is universal for all CSs. The rules for applying the clustering protocol are the same throughout the system. Hence, ε has to be small enough. In fact, it is difficult to maintain inequality (8) due to the high dynamics of the spectrum available to the CRSN and the geographic distribution of its nodes.
Assume a group of adjacent nodes uses the set of channels C_0, C_0 ⊂ C. These CSs exchange information directly and define ε such that (8) is true. If parameter p_i of another channel (i.e., i ∈ C\C_0) has a uniform distribution with CDF (10), where constant a < ε^(−1), then condition (8) is violated for this channel with the corresponding probability. Therefore, the probability of NSAC protocol failure follows. For example, if ε = 5, a = 0.1, |C\C_0| = 30, then NSAC fails with a probability of ≈ 0.97.
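The quoted failure probability is easy to reproduce numerically. In the sketch below, `q` is the per-channel probability that condition (8) is violated; the value 0.11 is an illustrative assumption chosen to be consistent with the quoted numbers, and independence across the 30 channels outside C_0 is also assumed.

```python
def failure_probability(q, k):
    # With k channels, each independently violating condition (8) with
    # probability q, at least one violation occurs with probability
    # 1 - (1 - q)**k.
    return 1 - (1 - q) ** k

print(round(failure_probability(0.11, 30), 2))  # prints 0.97
```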
Next, we consider a big ε. Let us take channels i, j satisfying the appropriate conditions. Then, for any small δ > 0, there exists ε > 1 such that the considered inequality is true. Thus, if ε is large enough, then p_i becomes unimportant. This refutes the NSAC claims as well.
Parameter ε is intended to provide a preference between p_i and M_i. Let us provide the corresponding formalism.
Define the set C_p of channel pairs (i, j) with p_i > p_j. We say that ε delivers the preference for p_i on the set C_p (i.e., the probability of channel availability is more important than the average idle duration) if the channel with the larger p obtains the larger quality metric. This definition is equivalent to a condition on ε, which can be rewritten in compact form. Combining (8) and (19), we obtain a condition for the choice of ε in the situation when p_i is more important. Since γ > 0, and since p_i > p_j, the inequality for the choice of ε is always compatible. Similarly, to formalize the importance of M_i, we define the analogous set of pairs and obtain the corresponding condition. Combining (8) and (24), we obtain a condition for the choice of ε when M_i is preferable. Thus, the initial single condition for the choice of ε is not sufficient for the correct functioning of the NSAC protocol.
In short,

1. NSAC does not provide the proper choice of parameter ε for the channel quality metric (1);
2. To use metric (1), NSAC needs to abandon some of the available licensed channels or implement a global mechanism for the dissemination of channel status information;
3. In general, metric (1) does not automatically provide a tradeoff between the probability of channel availability and the average idle duration in the sense defined above.

CS Weight
In NSAC, the CS weight, W_k, has the dimension of time, because the first term, W_k,c, in Formula (3) has the dimension of time, and the second term, W_k,e, is normalized and dimensionless. Therefore, the absolute value of W_k is highly dependent on the unit of measurement for idle duration (seconds, milliseconds, years, etc.). In this situation, achieving a balance between W_k,e and W_k,c is doubtful. This is evidenced by the following fact. In the simulation experiments in [11], the authors used the balance factor µ as a normalization multiplier: µ is relatively small, so 1 − µ ≈ 1. Hence, if W_k,e gets an increment comparable to W_k, then the new total weight roughly doubles, whereas if W_k,c gets an increment of W_k, the new weight increases very slightly.
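The unit-of-measurement problem can be demonstrated directly. In the following sketch, the candidate weights are hypothetical numbers; rescaling the time-dimensioned term W_k,c from seconds to milliseconds, with everything else unchanged, flips which CS wins the head election.

```python
mu = 0.01  # balance factor, small as in the NSAC experiments

# (W_kc in seconds, normalized W_ke) -- hypothetical candidate values.
candidates = {
    "A": (1.2, 0.2),  # good spectrum term, poor energy term
    "B": (0.9, 0.9),  # worse spectrum term, good energy term
}

def total_weight(w_c, w_e, scale):
    # Formula (3) with the spectrum term expressed in a chosen time unit.
    return mu * (w_c * scale) + (1 - mu) * w_e

best_seconds = max(candidates, key=lambda k: total_weight(*candidates[k], 1))
best_millis = max(candidates, key=lambda k: total_weight(*candidates[k], 1000))
```

With seconds, CS B wins on energy; with milliseconds, the inflated spectrum term makes CS A win, so the election outcome depends on an arbitrary unit choice.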
Let us rewrite the residual energy metric in an equivalent form. Consider a simple case of homogeneous CSs, where each CS has the same residual energy, and all channels are isotropic; the weight of residual energy then simplifies, and criterion (3) takes a form in which it is easy to see that the terms in sum (3) are not balanced. In Figure 3, we show µW_k,c and (1 − µ)W_k,e versus N*_k. In this figure, we have taken |C*_k|Q_1 = 10. By varying µ from 0.01 to 0.001, we get graphs of three linear functions reflecting the contribution of channel availability to the total weight (the blue lines). The red curve shows the corresponding contribution of residual energy. Here, we present one curve due to the negligible effect of the parameter 1 − µ.
We can see from Figure 3 that a multiplier is very poor at normalizing the factors behind the total weight. Even in a simple case, we cannot guarantee the proper choice of µ. It depends on the dynamics of the system. Thus, the total weight calculation has to be revised.

Clustering Procedure
In the NSAC protocol, in each iteration, one CS with the highest weight is selected as the cluster head. All its neighbors become members of this cluster, regardless of their number. The absence of cluster size control can potentially lead to unsatisfactory quality of service for the cluster members (a high loss rate for internal cluster packets, long packet latency, etc.), and irrational involvement of licensed channels.
Let us consider the CRSN in Figure 4. We assume that the residual energy is the same for each CS. There are three homogeneous licensed channels that CSs are allowed to use. Using the NSAC protocol, we get the node weights shown in the figure, where q is the value of the channel quality metric.

All nodes form one cluster with CS k as cluster head. In addition, although there are three licensed channels available, only one is used by all SUs, which substantially reduces the QoS. Packet forwarding latency becomes longer due to the increased contention among the CSs.
Furthermore, it should be noted that NSAC, similar to other network protocols with selection of an intermediate node based on self-promotion, is vulnerable to Denial of Service attacks such as Black/Grey Holes. This type of attack is very popular in wireless networks [13][14][15][16][17]. In one scenario for this intrusion, a malicious node acts as a CS and broadcasts an enormous fake weight. As a result, the malicious node is selected as a cluster head for many nodes, and it then destroys the connections. An unlimited cluster size can potentially amplify the effect of this attack.

Proposition
On the basis of the conducted research, we can conclude that the NSAC protocol, in its present form, might fail to achieve its goals due to the identified critical flaws. In this section, we consider how to fix this. First, we suggest changing the metrics used to select the cluster head. Next, we modify the clustering procedure and calculate the cluster size. To substantiate our proposals, we use models based on continuous-time Markov chains. Their use often requires simplifying assumptions. However, these models often provide a basis for adequate conclusions and valuable qualitative results. In particular, Markov process-based methods are generally used when the various performance metrics of wireless communication systems are calculated.

Metrics
First, to resolve the contradictions mentioned above, we propose revising metric (1). Let us consider PU activity on the channels in detail. A channel alternates between idle and busy. To describe the channel status, we use a continuous-time, two-state Markov chain, as shown in Figure 5. Similar Markov chain models are usually used in the literature to describe PU activity [9,18].

This process spends an exponentially distributed amount of time with rate λ in the idle state before making a transition to the busy state. Accordingly, the time until a PU releases the channel is an exponential random variable with rate µ. This is a well-known birth-death process described by a Kolmogorov equation [19]. The steady-state probabilities are µ/(λ + µ) for the idle state and λ/(λ + µ) for the busy state. Therefore, for the notation introduced above, we get p_i = µ/(λ + µ) and M_i = 1/λ. Note that changing µ can make significant changes to factor p_i, but it has absolutely no effect on M_i. At the same time, both p_i and M_i are decreasing in λ on the interval λ > 0. Moreover, there is a functional dependency between factors p_i and M_i, namely M_i = p_i/(µ(1 − p_i)), and the inverse dependency p_i = µM_i/(1 + µM_i). Thus, from a practical point of view, there is no reason to consider p_i and M_i as competing entities. If µ is a constant, then M_i becomes a monotonically increasing function of p_i. Since factor p_i is more informative, in the sense that it reflects changes in both λ and µ, we suggest using it as a metric of channel quality, i.e., Q_i = p_i. Next, we normalize the channel quality metric in order to be consistent with the residual energy metric. Thus, Equation (3) is calculated taking into account (39) and (40).
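These closed-form relations are easy to confirm by simulating the alternating idle/busy process. The sketch below uses arbitrary illustrative rates λ = 2 and µ = 5 and checks the empirical idle fraction and mean idle duration against µ/(λ + µ) and 1/λ.

```python
import random

def simulate_channel(lam, mu, horizon, seed=1):
    # Alternating renewal process: exponential idle periods (rate lam)
    # followed by exponential busy periods (rate mu).
    rng = random.Random(seed)
    t = idle_time = 0.0
    idle_periods = []
    while t < horizon:
        d = rng.expovariate(lam)  # idle sojourn, mean 1/lam
        idle_periods.append(d)
        idle_time += d
        t += d
        t += rng.expovariate(mu)  # busy sojourn, mean 1/mu
    return idle_time / t, sum(idle_periods) / len(idle_periods)

lam, mu = 2.0, 5.0
p_hat, m_hat = simulate_channel(lam, mu, horizon=200_000)
p_theory = mu / (lam + mu)  # p_i = mu / (lam + mu)
m_theory = 1.0 / lam        # M_i = 1 / lam
```

The same run also confirms the functional dependency M_i = p_i/(µ(1 − p_i)) numerically.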

Limited Cluster Size
Let us use the M/M/1 queuing model to analyze the delay in CS packets. A similar approach was used in [20]. Let the cluster size be n. Cluster members' packets form a Poisson process with rate nλ_CS, where λ_CS is the contribution of one CS. A cluster head serves packets, including its own. Service times of packets are assumed to be independent, exponentially distributed random variables with mean 1/µ. The system's measures of effectiveness are well known [21]. The packet latency is T_CS(n, µ, λ_CS) = 1/(µ − nλ_CS). Let t_QoS be a strict packet delay constraint specified by a QoS policy, i.e., it is required that T_CS(n, µ, λ_CS) ≤ t_QoS. Assume that λ_CS is large enough; in this case, inequality (42) is violated. In order to meet the QoS requirements, the number of cluster members can be reduced. Let us define a novel cluster size, ñ = (1 − σ)n, where σ is the fraction by which the cluster size is reduced. Therefore, the packet latency becomes T_CS(ñ, µ, λ_CS) = 1/(µ − (1 − σ)nλ_CS). In addition, let us consider the relative difference, ∆, between the two latencies. We may rewrite this as ∆ = σρ/(1 − (1 − σ)ρ); here, ρ is the utilization factor, i.e., the ratio of the rate at which cluster members generate packets to the rate at which the cluster head can handle this workload: ρ = nλ_CS/µ. As shown in Figure 6, QoS parameters can be substantially improved if the cluster size is reduced. Figure 6a shows the dependence of packet latency, T_CS(ñ, µ, λ_CS), on the fraction of the cluster reduction, σ, for different values of the initial cluster size, n.
Here, λ_CS = 1 and µ = 50. The contribution of µ is shown in Figure 6b, where λ_CS = 1, n = 20. Figure 6c illustrates packet latency versus σ for various values of λ_CS; here, n = 10, µ = 33. In Figure 6d, we plot the relative difference, ∆, as a function of the utilization factor, ρ, and σ. At the same time, the number of clusters grows as the cluster size shrinks. The system overhead increases with the number of clusters, so it makes sense to use the smallest appropriate number of clusters [22]. Therefore, the optimal cluster size is the largest one satisfying constraint (42); using the floor function, we obtain n* = ⌊(µ − 1/t_QoS)/λ_CS⌋. Depending on the network topology, the number of neighboring nodes may be less than n*. From this point of view, Formula (50) determines the maximal cluster size that guarantees QoS. It is a step function of t_QoS, an example of which is shown in Figure 7, where λ_CS = 2, µ = 21.
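Formula (50) and the latency bound behind it can be sketched as follows. The function names are ours; the numbers check both the step-function example above (λ_CS = 2, µ = 21) and the example λ_CS = 2, µ = 11, t_QoS = 1 used below.

```python
import math

def max_cluster_size(lam_cs, mu, t_qos):
    # Formula (50): the largest n whose M/M/1 latency 1/(mu - n*lam_cs)
    # still satisfies the QoS bound t_qos.
    return math.floor((mu - 1.0 / t_qos) / lam_cs)

def latency(n, lam_cs, mu):
    # M/M/1 sojourn time with aggregate arrival rate n * lam_cs.
    assert mu > n * lam_cs, "stability requires mu > n * lam_cs"
    return 1.0 / (mu - n * lam_cs)

n_star = max_cluster_size(lam_cs=2, mu=11, t_qos=1)  # n* = 5
```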
Thus, unlike the NSAC protocol, we limit the cluster formation process to n* cluster members. This allows for the required packet latency.
Please note that our proposition reduces the re-clustering frequency as well. To address this point, we use an approach offered in [23]. Let us consider a network with a topology described by the complete graph K_n. All n sensors share the same licensed channels. The NSAC protocol creates one cluster of size n, and the cluster head lifetime is E_B/(nλ_CS e_p), where E_B is the charge capacity of a CS battery, and e_p is the energy consumption for transmitting one packet. In accordance with our proposition, the cluster size is defined by Formula (50), and the cluster head lifetime becomes E_B/(n*λ_CS e_p). For example, if λ_CS = 2, µ = 11, t_QoS = 1, then n* = 5. Let E_B = 6 J, e_p = 0.01 J. The performance comparison of our approach and NSAC is shown in Figure 8.
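The comparison can be reproduced under our assumed energy model, in which the head drains its battery only on packet transmissions (n·λ_CS packets per unit time, each costing e_p, so the lifetime is E_B/(n·λ_CS·e_p)); the single-cluster size of 20 nodes for NSAC is an illustrative assumption.

```python
def head_lifetime(n, lam_cs, e_p, battery):
    # The head forwards n * lam_cs packets per unit time, each costing e_p,
    # so a battery of capacity `battery` lasts battery / (n * lam_cs * e_p).
    return battery / (n * lam_cs * e_p)

E_B, e_p, lam_cs = 6.0, 0.01, 2.0
nsac = head_lifetime(20, lam_cs, e_p, E_B)  # one big cluster of all sensors
ours = head_lifetime(5, lam_cs, e_p, E_B)   # size capped at n* = 5
```

Capping the cluster at n* = 5 quadruples the head lifetime in this example (≈ 60 versus ≈ 15 time units).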
Finally, note that the limited cluster size increases the passive resistance of networks to intrusions such as Black/Grey Holes.

Conclusions
Different from traditional clustering protocols, the recently proposed NSAC protocol considers both energy consumption and spectrum dynamics. It has been shown that NSAC clearly outperforms existing methods in the aspects of network stability and energy consumption [11]. However, as we argued above, NSAC fails to work properly overall. We proposed changing the channel quality metric and the method for calculating the weight of a CS. Using the appropriate mathematical tools, we found there is no reason to consider the average idle duration and the probability of channel availability as competing entities. Therefore, we suggest using only the probability of channel availability as the metric of channel quality, since this factor is more informative. The overall impact of the various channel state indicators on clustering protocol performance and reliability will be studied in future work.


Figure 1. An example of calculations for the spectrum-aware weight of cognitive sensor (CS) k: (a) a four-node fragment of the Cognitive Radio Sensor Network (CRSN) topology; (b) the bipartite graph for CS k; and (c) the maximum edge biclique and weight of CS k in terms of spectrum availability.


Figure 3. The behavior of total weight components.


Figure 4. An example of irrational involvement of licensed channels.


Figure 5. Markov model for a licensed channel's status.


Figure 6. Packet latency versus cluster size reduction: (a) the impact of the initial cluster size; (b) the impact of cluster head performance; (c) the impact of traffic intensity from one cluster node; and (d) a 3D plot of the relative difference.


Figure 7. The cluster size's allowed limit versus the packet latency defined by the quality of service (QoS) policy.


Figure 8. The performance comparison of the proposed method and network stability-aware clustering (NSAC).
