Review

On Consensus-Based Distributed Blind Calibration of Sensor Networks

by Miloš S. Stanković 1,2,3,*, Srdjan S. Stanković 2,4, Karl Henrik Johansson 5, Marko Beko 6,7 and Luis M. Camarinha-Matos 7,8
1 Innovation Center, School of Electrical Engineering, University of Belgrade, 11120 Belgrade, Serbia
2 Vlatacom Institute, 11070 Belgrade, Serbia
3 School of Technical Sciences, Singidunum University, 11000 Belgrade, Serbia
4 School of Electrical Engineering, University of Belgrade, 11120 Belgrade, Serbia
5 ACCESS Linnaeus Center, School of Electrical Engineering, KTH Royal Institute of Technology, SE-100 44 Stockholm, Sweden
6 COPELABS, Universidade Lusófona de Humanidades e Tecnologias, Campo Grande 376, 1749-024 Lisboa, Portugal
7 CTS/UNINOVA, Monte de Caparica, 2829-516 Caparica, Portugal
8 Faculty of Sciences and Technology, NOVA University of Lisbon, 2825-149 Caparica, Portugal
* Author to whom correspondence should be addressed.
Sensors 2018, 18(11), 4027; https://doi.org/10.3390/s18114027
Submission received: 25 September 2018 / Revised: 29 October 2018 / Accepted: 5 November 2018 / Published: 19 November 2018
(This article belongs to the Special Issue Signal and Information Processing in Wireless Sensor Networks)

Abstract: This paper deals with recently proposed algorithms for real-time distributed blind macro-calibration of sensor networks based on consensus (synchronization). The algorithms are completely decentralized and do not require a fusion center. The goal is to consolidate all of the existing results on the subject, present them in a unified way, and provide additional important analysis of theoretical and practical issues that one can encounter when designing and applying the methodology. We first present the basic algorithm, which estimates local calibration parameters by enforcing asymptotic consensus, in the mean-square sense and with probability one (w.p.1), on calibrated sensor gains and calibrated sensor offsets. For the more realistic case in which additive measurement noise, communication dropouts and additive communication noise are present, two algorithm modifications are discussed: one that uses a simple compensation term, and a more robust one based on an instrumental variable. The modified algorithms also achieve asymptotic agreement for calibrated sensor gains and offsets, in the mean-square sense and w.p.1. The convergence rate can be determined in terms of an upper bound on the mean-square error. The case when the communication between nodes is completely asynchronous, which is of substantial importance for real-world applications, is also presented. Suggestions for the design of a priori adjustable weights are given. We also present results for the case in which the underlying sensor network has a subset of (precalibrated) reference sensors with fixed calibration parameters. The wide applicability and efficacy of these algorithms are illustrated on several simulation examples. Finally, important open questions and future research directions are discussed.

1. Introduction

Recently emerged technologies dealing with networked systems, such as the Internet of Things (IoT), Networked Cyber-Physical Systems (CPS), and Sensor Networks (SN), still pose many conceptual and practical challenges intriguing to both researchers and practitioners [1,2,3,4,5,6,7,8,9]. New classes of problems in this area continuously arise, driven by many new real-world applications. Particularly in the case of SNs, application examples include environment monitoring, wildfire detection, shop-floor manufacturing, smart cities, etc. One of the most important challenges, limiting the performance, robustness and time-to-market of these new technologies, is sensor calibration. Micro-calibration, in which every sensor is individually calibrated in a controlled environment, can be performed only in relatively small SNs. Typical SNs are of large scale and function in dynamic and partially unobservable environments, thus demanding new methods and algorithms for efficient calibration. The idea of macro-calibration is to calibrate the entire SN based on the total system response, so that there is no need to individually calibrate every sensor node. The typical approach is to formulate the calibration problem as a parameter estimation problem (e.g., [10,11]). Of significant interest are methods for automatic calibration of SNs which succeed even when there are no reference signals/sensors, or other sources of ground-truth information about the measured process. In these situations, the goal of the calibration is to achieve homogeneous behavior of all the nodes, possibly enforcing the dominant influence of sensors that are a priori known to provide sufficiently good (calibrated) measurements. These types of calibration problems are known as blind calibration problems (e.g., [12]). Furthermore, in many applications of SNs, it is essential that the network functions in a completely decentralized fashion, performing calibration in real time, without the requirement for any kind of centralized information fusion. Hence, completely distributed and decentralized real-time calibration algorithms are of paramount importance.
In this paper, we study recently proposed algorithms which possess all the mentioned desirable properties: they deal with blind macro-calibration of SNs based on completely decentralized, real-time and recursive estimation of the parameters of linear calibration functions [13,14,15,16,17]. Another advantageous property of these algorithms is that they only require directed communication links between neighboring nodes in the underlying SN. The basic algorithm is developed using a distributed optimization problem setup, constructing a distributed gradient recursive scheme, with the local objectives formulated as weighted sums of mean-square differences between the corrected sensor readings of neighboring nodes. A direct consequence of this problem setup is that the algorithm can be studied as a generalized consensus scheme, to which the existing convergence results for standard consensus schemes are not applicable (e.g., [18]). However, by using techniques based on the stability of diagonally quasi-dominant dynamical systems [13,19,20,21], it is possible to prove asymptotic convergence of the calibrated sensor outputs to consensus, in the mean-square sense and with probability one (w.p.1). The basic algorithm can be extended by assuming the presence of several factors which are of essential importance for the practical applicability of the proposed methodology: (1) additive communication noise, (2) communication dropouts, (3) additive measurement noise, and (4) asynchronous communication.
Two possible modifications of the basic algorithm are presented for solving the problems posed in cases (1)–(3) [16,17]. The first is based on the assumption that the noise variance is known a priori, which is used to design an appropriate compensation term [17]. The second modification is more robust and is based on the use of an instrumental variable [16]. In both cases, the attainment of asymptotic consensus in the mean-square sense and w.p.1 is guaranteed. For the particularly important case of a completely asynchronous communication scenario, we show how the algorithm can be implemented assuming a broadcast gossip communication scheme, which does not require clock synchronization among the agents, or any type of centralized information or coordination [14].
Another practically important situation arises when there are multiple nodes in the network that do not update (correct) their calibration parameters, but they still participate in the described distributed macro-calibration process. In this case, these nodes are called reference nodes since their only role is to provide reference information based on which other nodes should calibrate themselves. For example, this situation may arise in practice when a set of uncalibrated sensors is added to an already calibrated SN. In the case of more than one reference node, the corrected gains and offsets of the non-reference nodes, in general, do not converge to consensus, but to different points which depend on the information dictated by the reference sensors and the network properties [14]. In the case of only one reference sensor, the corrected gains and offsets of the rest of the sensors converge to the same point imposed by the reference sensor.
Finally, an analysis is given which clarifies the influence of initially selected weights corresponding to particular nodes in the presented calibration parameters estimation recursions. Guidelines are formulated on how these weights should be chosen so that given requirements are satisfied. General discussion of the described results is provided from both theoretical and practical points of view, based on which several future research directions are proposed.
The outline of the rest of the paper is as follows. The following section briefly discusses related work. In Section 3 we introduce the distributed blind macro-calibration problem and derive the basic algorithm for the noiseless case. Section 4 is devoted to the convergence properties of the basic algorithm. In Section 5, certain assumptions about the measured signals, communication errors, and communication protocol are relaxed, and the appropriate algorithm modifications are introduced, together with their convergence properties. Section 6 discusses the convergence rate, the case of reference sensors with fixed characteristics, and some design guidelines. In Section 7 we present illustrative simulation results. Finally, Section 8 presents some conclusions and future research directions.

2. Related Work

Macro-calibration is based on the idea of calibrating the whole SN based on the responses of all the nodes. The most frequent approaches to this problem are based on parameter estimation techniques (e.g., [11]). If controlled stimuli are not available the problem is usually referred to as blind calibration of SNs. In general, it is a difficult problem, which has certain similarities with more general problems of blind estimation, equalization, and deconvolution (e.g., [22,23,24] and references therein).
Most of the approaches to blind calibration proposed in the existing literature are centralized and non-recursive [12,25,26,27,28,29,30,31,32,33,34,35,36,37]. Within this class of methods, in refs. [12,25] a blind calibration algorithm based on signal subspace projection was analyzed assuming restrictive signal and sensor properties. In ref. [26] the method was improved from the point of view of robustness to subspace uncertainties. In ref. [27] the authors proposed to use sparsity and convex optimization for blind estimation of calibration gains. In ref. [28] an approach to blind sensor calibration is adopted based on centralized consistency maximization at the network level, assuming very dense deployment and only pairwise inter-node communications. In ref. [29] a moments-based centralized blind calibration is proposed for mobile SNs, exploiting multiple measurements of the same signal by mobile nodes, assuming that the measured signal does not change in time. In ref. [30], the authors proposed a method which can handle situations in which density requirements are not met. Interesting centralized approaches to blind drift calibration proposed in refs. [31,32,33], which also work when the density requirement is not met, are based on non-restrictive modeling of the assumed underlying signal subspace, with drift estimation using a Kalman filter [31], sparse Bayesian learning [32], or deep learning [33]. The approach in ref. [34] also does not rely on stringent assumptions about the signal subspace, but assumes a first-order auto-regressive signal process model. The authors of [35] introduce a linear algebraic model of calibration relationships in a SN with centralized architecture to improve the simple mean calibration scheme, assuming sufficiently dense deployment. Another centralized approach to mobile sensor calibration is proposed in ref. [36] and is based on nonnegative matrix factorization. Some of the density assumptions introduced in this work were relaxed in ref. [37]. In ref. [38] the blind calibration problem was treated in the context of sparse sensing, using a message passing algorithm and assuming a constant measured signal. The method proposed in ref. [39], based on geospatial estimation and a Kalman filter, works if the sensors are calibrated at the beginning of the operation after deployment, and then may start to drift.
The problem of distributed blind macro-calibration may have certain similarities with the clock synchronization approaches based on local data processing and communications with neighbors [40,41,42,43,44,45,46,47]. However, these approaches cannot be directly mapped to the calibration problem treated in this paper.
Finally, certain extended consensus algorithms have been applied to SN calibration problems, but in different settings than the one treated in this paper [48,49,50,51]. An approach to blind calibration of sensor gains only, based on distributed gossip-based Expectation-Maximization iterations, was proposed in ref. [52], assuming that the measured signal is constant. Another distributed approach was proposed in ref. [53], which explicitly uses a state-space model of the underlying process and a message exchange protocol for offset compensation. The proposed scheme was formulated without a proof of convergence. This paper is focused on the algorithms proposed recently in refs. [13,14,15,16,17], representing completely distributed and decentralized blind macro-calibration algorithms with rigorous proofs of convergence for both corrected sensor gains and offsets, and with satisfactory performance under the diverse deteriorating conditions which typically appear in practical applications.

3. Problem Definition and the Basic Algorithm

Assume that the SN to be calibrated consists of n nodes/sensors. In the basic setup, it is assumed that each sensor measures the same signal x ( t ) at discrete-time instants t = … , − 1 , 0 , 1 , … ; this signal can be considered as a realization of a stochastic process { x ( t ) } . Note that we have implicitly assumed that the sensor nodes function synchronously, since all the sensor nodes perform measurements at the same time instants t. We will relax this assumption in Section 5.3. The output (measurement) of the i-th node can be written as
$y_i(t) = \alpha_i x(t) + \beta_i$,
where α i is the unknown gain, and β i the unknown offset of sensor i. Note that, in this problem setup, it is assumed that α i and β i are unknown constants and not random variables.
Calibration of a sensor is performed by applying an affine calibration function to the raw readings (1) which results in the following calibrated sensor output
$z_i(t) = a_i y_i(t) + b_i = g_i x(t) + f_i$,
where a i and b i are the calibration parameters to be obtained, g i = a i α i is the corrected gain and f i = a i β i + b i the corrected offset. The calibration objective is, ideally, to find parameters a i and b i for which g i equals one and f i equals zero. In general, if we assume that there are no sensors which give perfect readings z i ( t ) = x ( t ) and that the signal x ( t ) is unknown and cannot be obtained or measured by some other means, this objective is impossible to achieve. Hence, in our decentralized real-time blind macro-calibration problem setup, this ideal objective must be relaxed: we require that the calibration process asymptotically achieves equal calibrated outputs z i ( t ) for all the nodes i = 1 , … , n . To approach the ideal goal of achieving g i = 1 and f i = 0 as closely as possible, we can use certain a priori knowledge about the underlying SN and try to adjust the algorithm such that, loosely speaking, the “good” sensors (e.g., precalibrated or higher-quality sensors) correct, through the consensus strategy, the response of the rest of the sensors. For example, if, in a given SN, there is an a priori given perfectly calibrated reference sensor, the ideal asymptotic calibration ( g i = 1 and f i = 0 ) of the rest of the sensor nodes will be achieved once the consensus goal is achieved.
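For a concrete numerical illustration (the values are chosen here for exposition only and are not taken from the references): a sensor with gain α i = 2 and offset β i = 1 would be perfectly calibrated by a i = 1/2 and b i = −1/2, since then g i = a i α i = 1 and f i = a i β i + b i = 0, i.e., z i ( t ) = x ( t ). Blind calibration cannot recover these ideal values without ground-truth information; it can only drive all the nodes toward common values g ¯ and f ¯ .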
It is assumed that the underlying SN has a predefined communication topology, defining the possible inter-sensor communications, represented by a directed graph G = ( N , E ) , where N is the set of nodes (sensors) and E the set of communication links (arcs). Define the adjacency matrix A = [ a i j ] , i , j = 1 , … , n , where a i j = 1 if the j-th node is able to send messages to the i-th node, and a i j = 0 otherwise. Let N i be the set of in-neighboring nodes (or just neighbors) of the i-th node, i.e., the nodes j for which a i j = 1 . Similarly, let N i out be the set of out-neighboring nodes of the i-th node, i.e., the nodes j for which a j i = 1 .
Let us now derive the basic calibration algorithm. The idea is to start with local criteria for each node, whose local minimization would lead to a network-level consensus on the corrected sensor outputs:
$J_i = \sum_{j \in N_i} \gamma_{ij}\, E\{ (z_j(t) - z_i(t))^2 \}$,
i = 1 , , n , where γ i j are nonnegative scalar weights whose influence on the properties of the algorithm will be discussed later. Denoting θ i = [ a i b i ] T , the following expression is obtained for the gradient of (3):
$\mathrm{grad}_{\theta_i} J_i = - \sum_{j \in N_i} \gamma_{ij}\, E\left\{ (z_j(t) - z_i(t)) \begin{bmatrix} y_i(t) \\ 1 \end{bmatrix} \right\}$.
From (4) we obtain the following stochastic gradient recursion for estimating θ i minimizing (3):
$\hat{\theta}_i(t+1) = \hat{\theta}_i(t) + \delta_i(t) \sum_{j \in N_i} \gamma_{ij}\, \epsilon_{ij}(t) \begin{bmatrix} y_i(t) \\ 1 \end{bmatrix}$,
where θ ^ i ( t ) = [ a ^ i ( t ) b ^ i ( t ) ] T , ϵ i j ( t ) = z ^ j ( t ) z ^ i ( t ) , z ^ i ( t ) = a ^ i ( t ) y i ( t ) + b ^ i ( t ) , and δ i ( t ) > 0 is a time-varying gain whose influence on the convergence properties of the algorithm will be discussed later. The initial conditions are assumed to be θ ^ i ( 0 ) = [ 1 0 ] T , i = 1 , , n . We expect that the set of recursions (5) asymptotically achieve that all the local estimates of corrected gains g ^ i ( t ) = a ^ i ( t ) α i and corrected offsets f ^ i ( t ) = a ^ i ( t ) β i + b ^ i ( t ) converge to the same values g ¯ and f ¯ , respectively; this implies that the corrected sensor outputs of all the nodes are also equal z ^ j ( t ) = z ^ i ( t ) , i , j = 1 , , n .
In Figure 1, an illustrative smart-city sensor network example is depicted. A completely decentralized network architecture is assumed, i.e., the nodes communicate according to the directed communication graph represented in the figure using arcs. The communication graph will typically depend on the mutual node distances, the transmission power of individual nodes, channel conditions, presence of obstacles, etc. Each node in the network is equipped with the same type of sensor, which measures a certain physical quantity (e.g., a certain atmospheric condition or air quality indicator). At each time instant t, a node i performs a local reading of the raw sensor output y i ( t ) , calculation of the corrected sensor output z ^ i ( t ) according to (2) using the current local estimates of the calibration parameters a ^ i ( t ) and b ^ i ( t ) , transmission of the corrected value z ^ i ( t ) to the out-neighbors N i out , reception of the values z ^ j ( t ) from the in-neighbors j ∈ N i , and calculation of the updated estimates of the local calibration parameters a ^ i ( t + 1 ) and b ^ i ( t + 1 ) using (5). In the initial presentation we will assume that, at each iteration of the algorithm (5), the local sensor measurement y i ( t ) and the current messages containing the neighboring nodes’ corrected outputs z ^ j ( t ) are available at node i. Possible communication dropouts and/or faulty/noisy sensor readings will be treated later. The local computational cost for each agent is minor, since only two parameters are being estimated. The communication complexity depends on the number of neighboring agents, which is small in typical SNs with decentralized architecture.
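To make the per-iteration operations concrete, the following minimal Python sketch implements one synchronous step of the basic recursion (5) for all nodes. It is only an illustration under the stated assumptions (noiseless readings, perfect links), and all variable names and numerical values are ours, not taken from refs. [13,14,15,16,17].

```python
import numpy as np

def basic_update(theta, y, neighbors, gamma, delta):
    """One synchronous iteration of the basic calibration recursion (5).

    theta     : (n, 2) array, row i holds the current estimates [a_i, b_i]
    y         : (n,) array of raw sensor readings y_i(t)
    neighbors : list of lists, neighbors[i] = in-neighborhood N_i
    gamma     : (n, n) array of nonnegative weights gamma_ij
    delta     : constant step size (assumption (A1))
    """
    z = theta[:, 0] * y + theta[:, 1]              # corrected outputs z_i(t)
    theta_next = theta.copy()
    for i in range(len(y)):
        regressor = np.array([y[i], 1.0])          # [y_i(t), 1]^T
        update = sum(gamma[i, j] * (z[j] - z[i]) * regressor
                     for j in neighbors[i])
        theta_next[i] += delta * update
    return theta_next

# toy usage: three nodes on a directed ring, all observing the same signal x(t)
rng = np.random.default_rng(0)
alpha = np.array([1.2, 0.8, 1.0])                  # true (unknown) gains
beta = np.array([0.3, -0.2, 0.0])                  # true (unknown) offsets
theta = np.tile([1.0, 0.0], (3, 1))                # initial conditions [1, 0]^T
neighbors = [[1], [2], [0]]
gamma = np.ones((3, 3))
for t in range(5000):
    x = rng.normal(1.0, 1.0)                       # i.i.d. signal with positive variance
    theta = basic_update(theta, alpha * x + beta, neighbors, gamma, delta=0.01)
print(theta[:, 0] * alpha)                         # corrected gains: approximately equal
print(theta[:, 0] * beta + theta[:, 1])            # corrected offsets: approximately equal
```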
For the sake of compact notations, suitable for convergence analysis of the derived algorithm, let us introduce
$\hat{\phi}_i(t) = \begin{bmatrix} \hat{g}_i(t) \\ \hat{f}_i(t) \end{bmatrix} = \begin{bmatrix} \alpha_i & 0 \\ \beta_i & 1 \end{bmatrix} \hat{\theta}_i(t)$,
and
$\epsilon_{ij}(t) = [\, x(t) \;\; 1 \,]\, (\hat{\phi}_j(t) - \hat{\phi}_i(t))$,
so that (5) becomes
$\hat{\phi}_i(t+1) = \hat{\phi}_i(t) + \delta_i(t) \sum_{j \in N_i} \gamma_{ij}\, \Omega_i(t)\, (\hat{\phi}_j(t) - \hat{\phi}_i(t))$,
where
$\Omega_i(t) = \begin{bmatrix} \alpha_i y_i(t)\, x(t) & \alpha_i y_i(t) \\ (1 + \beta_i y_i(t))\, x(t) & 1 + \beta_i y_i(t) \end{bmatrix} = \begin{bmatrix} \alpha_i \beta_i x(t) + \alpha_i^2 x(t)^2 & \alpha_i \beta_i + \alpha_i^2 x(t) \\ (1+\beta_i^2) x(t) + \alpha_i \beta_i x(t)^2 & 1 + \beta_i^2 + \alpha_i \beta_i x(t) \end{bmatrix}$,
with the initial conditions $\hat{\phi}_i(0) = [\alpha_i \;\; \beta_i]^T$, i = 1 , … , n . Therefore, the following compact form of the recursions (8) is obtained
$\hat{\phi}(t+1) = [\, I + (\Delta(t) \otimes I_2)\, B(t) \,]\, \hat{\phi}(t)$,
where ⊗ is the Kronecker product, I is the identity matrix of dimension 2 n , I 2 is the dimension 2 identity matrix, ϕ ^ ( t ) = [ ϕ ^ 1 ( t ) T ϕ ^ n ( t ) T ] T ,   Δ ( t ) = diag { δ 1 ( t ) , , δ n ( t ) } , diag { } denotes the corresponding block diagonal matrix,
$B(t) = \Omega(t)\, (\Gamma \otimes I_2)$,
$\Omega(t) = \mathrm{diag}\{\Omega_1(t), \ldots, \Omega_n(t)\}$,
$\Gamma = \begin{bmatrix} -\sum_{j, j \neq 1} \gamma_{1j} & \gamma_{12} & \cdots & \gamma_{1n} \\ \gamma_{21} & -\sum_{j, j \neq 2} \gamma_{2j} & \cdots & \gamma_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \gamma_{n1} & \gamma_{n2} & \cdots & -\sum_{j, j \neq n} \gamma_{nj} \end{bmatrix}$,
where γ i j = 0 when j N i , and the initial condition is ϕ ^ ( 0 ) = [ ϕ ^ 1 ( 0 ) T ϕ ^ n ( 0 ) T ] T , according to (8). From the way in which we have constructed the vector ϕ ^ ( t ) we conclude that the asymptotic value of ϕ ^ ( t ) should be such that all of its odd components are equal, and all of its even components are equal.
In the next section, it will be shown that, under certain general assumptions, for any choice of the weights γ i j > 0 for j ∈ N i (and γ i j = 0 when j ∉ N i ) the algorithm achieves convergence to consensus. However, if the underlying calibration objective is to achieve absolute calibration of the sensors (i.e., g ¯ close to one and f ¯ close to zero), this can be approached by exploiting sensors that are a priori known to have good characteristics. In a large SN, this can be achieved in two ways: (1) if a large number of the sensors are “good” sensors, then the γ i j -s in all neighborhoods N i should be approximately the same; or (2) if there is a set of a priori chosen good sensors j ∈ N f ⊂ N , the goal is to enforce their dominant influence on the rest of the nodes. There are two possibilities to achieve this: (a) to set high values of γ i j for all j ∈ N f and i ∈ N j out ; or (b) to set small values of γ j k for all j ∈ N f , k ∈ N j , k ≠ j (which prevents large changes of ϕ ^ j ( t ) ). Section 6.3 deals with guidelines on weight tuning, while Section 6.4 treats the case in which a set of reference sensors is kept with fixed calibration parameters.

4. Convergence Analysis

In this section we discuss the convergence properties of the calibration scheme presented in the previous section, where it has been assumed that both local sensor measurements and inter-node communications are perfect, i.e., possible communication errors and/or measurement errors are not present. We first analyze this basic scheme in order to focus on structural characteristics of the algorithm; the case of lossy SNs will be treated in the subsequent sections. In the basic setup, without presence of any unreliability, it is sufficient to assume that the step sizes δ i ( t ) are constant:
(A1) δ i ( t ) = δ = const , for all i = 1 , , n .
For clearer initial presentation of the convergence results, we now adopt a simplifying assumption:
(A2) { x ( t ) } is an independent and identically distributed (i.i.d.) sequence, with E { x ( t ) } = x ¯ < ∞ and E { x ( t ) 2 } = s 2 < ∞ .
In practice, when the SNs are used to measure certain physical quantities, the assumption that { x ( t ) } is i.i.d. is almost never satisfied; hence it will be relaxed later.
Based on (A1) and (A2), the expectation of the parameter estimates $\bar{\phi}(t) = E\{\hat{\phi}(t)\}$ satisfies the following recursion
$\bar{\phi}(t+1) = (I + \delta \bar{B})\, \bar{\phi}(t)$,
where $\bar{\phi}(0) = \hat{\phi}(0)$, $\bar{B} = \bar{\Omega}\, (\Gamma \otimes I_2)$ and $\bar{\Omega} = E\{\Omega(t)\} = \mathrm{diag}\{\bar{\Omega}_1, \ldots, \bar{\Omega}_n\}$, with
$\bar{\Omega}_i = \begin{bmatrix} \alpha_i \beta_i \bar{x} + \alpha_i^2 s^2 & \alpha_i \beta_i + \alpha_i^2 \bar{x} \\ (1+\beta_i^2) \bar{x} + \alpha_i \beta_i s^2 & 1 + \beta_i^2 + \alpha_i \beta_i \bar{x} \end{bmatrix}$.
The following assumption, typical for consensus-based algorithms, is introduced:
(A3) Graph G has a spanning tree.
It implies that the matrix Γ has one zero eigenvalue, while the remaining eigenvalues have negative real parts, e.g., [54]. Hence, from the structure of the matrix B ¯ , we directly conclude that it has at least two zero eigenvalues. Its remaining eigenvalues can be characterized starting from the following assumption:
(A4) $s^2 - \bar{x}^2 = \mathrm{var}\{x(t)\} > 0$.
This assumption guarantees that the estimation recursions are sufficiently excited by the signal x ( t ) . Its important consequence is that $-\bar{\Omega}_i$, with $\bar{\Omega}_i$ defined by (12), is Hurwitz for all i = 1 , … , n . Indeed, using some simple algebra it can be derived that $-\bar{\Omega}_i$ is Hurwitz if and only if (iff)
$\alpha_i^2 (s^2 - \bar{x}^2) > 0, \qquad 2 \alpha_i \beta_i \bar{x} + \alpha_i^2 s^2 + 1 + \beta_i^2 > 0$.
Both inequalities hold iff (A4) holds. This greatly simplifies further derivations, which would otherwise depend on the somewhat complicated expression (12) for the 2 × 2 diagonal blocks of the matrix $\bar{\Omega}$.
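As a quick numerical check of conditions (13) (with parameter values chosen purely for illustration, not taken from the references), one can verify that the corresponding Ω̄ i from (12) has eigenvalues with positive real parts, i.e., that −Ω̄ i is Hurwitz:

```python
import numpy as np

# illustrative sensor and signal parameters
alpha_i, beta_i, x_bar, s2 = 1.5, -0.4, 1.0, 2.0     # var{x(t)} = s2 - x_bar**2 = 1 > 0

omega_bar_i = np.array([
    [alpha_i * beta_i * x_bar + alpha_i**2 * s2,      alpha_i * beta_i + alpha_i**2 * x_bar],
    [(1 + beta_i**2) * x_bar + alpha_i * beta_i * s2, 1 + beta_i**2 + alpha_i * beta_i * x_bar],
])

# conditions (13) are exactly det > 0 and trace > 0 for this 2x2 matrix
print(np.linalg.det(omega_bar_i), np.trace(omega_bar_i))   # 2.25 and 4.46, both positive
print(np.linalg.eigvals(-omega_bar_i).real)                # both negative: -Omega_bar_i is Hurwitz
```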
Because of the block structure of matrices Ω ¯ and B ¯ , the properties of the main recursion (11) cannot be analyzed using standard linear consensus methodologies (see, e.g., [18,54] and references therein). To cope with this problem, a methodology based on the concept of diagonal quasi-dominance of matrices decomposed into blocks has been used [13,17,19,20,21] to obtain the following important result characterizing all the eigenvalues of the matrix B ¯ .
Lemma 1
([13,17]). Assume that assumptions (A3) and (A4) hold. Then, the matrix B ¯ in (11) has two zero eigenvalues, while all the remaining eigenvalues have negative real parts.
Observe that the vectors $i_1 = [\,1\;0\;1\;0\;\cdots\;1\;0\,]^T \in \mathbb{R}^{2n}$ and $i_2 = [\,0\;1\;0\;1\;\cdots\;0\;1\,]^T \in \mathbb{R}^{2n}$, where $\mathbb{R}$ denotes the set of real numbers, are the right eigenvectors of B ¯ corresponding to the eigenvalue at the origin. Let ρ 1 and ρ 2 be the corresponding normalized left (row) eigenvectors, satisfying $\begin{bmatrix} \rho_1 \\ \rho_2 \end{bmatrix} [\, i_1 \;\; i_2 \,] = I_2$. The following lemma deals with a similarity transformation important for all the remaining derivations throughout the paper.
Lemma 2
([13,17]). Let $T = [\, i_1 \;\; i_2 \;\; T_{2n \times (2n-2)} \,]$, where $T_{2n \times (2n-2)}$ is a $2n \times (2n-2)$ matrix such that $\mathrm{span}\{T_{2n \times (2n-2)}\} = \mathrm{span}\{\bar{B}\}$ ( $\mathrm{span}\{A\}$ denotes the linear space spanned by the columns of matrix A). Then, T is nonsingular and
$T^{-1} \bar{B}\, T = \begin{bmatrix} 0_{2 \times 2} & 0_{2 \times (2n-2)} \\ 0_{(2n-2) \times 2} & \bar{B}^* \end{bmatrix}$,
where $\bar{B}^*$ is Hurwitz, and $0_{i \times j}$ denotes an $i \times j$ zero matrix.
Notice that
$T^{-1} = \begin{bmatrix} \rho_1 \\ \rho_2 \\ S_{(2n-2) \times 2n} \end{bmatrix}$,
where S ( 2 n 2 ) × 2 n can be determined from the definition of T.
From the structure of the matrices in (11), it can be concluded that the transformation T from Lemma 2, when applied to the original matrix B ( t ) , will produce a matrix which has the same structure as the transformed matrix given in Equation (14).
Lemma 3
([13,17]). For the matrix B ( t ) in (10) it holds that, for all t,
$T^{-1} B(t)\, T = \begin{bmatrix} 0_{2 \times 2} & 0_{2 \times (2n-2)} \\ 0_{(2n-2) \times 2} & B^*(t) \end{bmatrix}$,
where $B^*(t)$ is a $(2n-2) \times (2n-2)$ matrix and T is given in Lemma 2.
The following convergence theorem can now be formulated.
Theorem 1
([17]). Assume that Assumptions (A1)–(A4) hold. Then there exists $\delta^* > 0$ such that for all $\delta \leq \delta^*$ in (10)
$\lim_{t \to \infty} \hat{\phi}(t) = (i_1 \rho_1 + i_2 \rho_2)\, \hat{\phi}(0)$
in the mean square sense and w.p.1.
Note here that the limit vector in (17), ( i 1 ρ 1 + i 2 ρ 2 ) ϕ ^ ( 0 ) , has all of its odd elements equal and all of its even elements equal, which means that the corrected gains of all the nodes converge to the same value, and the corrected offsets of all the nodes converge to the same value. It can be shown [13] that this value depends only on the unknown sensor parameters α i and β i and the weights γ i j in J i , i , j = 1 , … , n . For the given initial conditions in (5), ρ 1 ϕ ^ ( 0 ) and ρ 2 ϕ ^ ( 0 ) have the form of weighted sums of α i and β i , i = 1 , … , n , respectively. Assuming that the weights γ i j are the same for all the nodes, and that the α i have a distribution centered around one and the β i around zero, these weighted sums will be close to one and zero, respectively.
The bound δ* > 0 in Theorem 1, which ensures convergence, may be conservative. In practice, the choice of the step size δ in (A1) should be based on the actual properties of the underlying SN; its value needs to be small enough to guarantee convergence, but also sufficiently large to achieve an acceptable rate of convergence (as in standard parameter estimation recursions [55]).
After clarifying the main structural properties of the algorithm, we now treat the more realistic case of correlated sequences { x ( t ) } . We replace (A2) with:
(A2’) The random process { x ( t ) } is weakly stationary, bounded w.p.1, and with bounded first and second moments, i.e., | x ( t ) | ≤ K < ∞ , E { x ( t ) } = x ¯ < ∞ , E { x ( t − d ) x ( t ) } = m ( d ) < ∞ for all d ∈ { 0 , 1 , 2 , … } ( E { · } denotes mathematical expectation), and m ( 0 ) = s 2 < ∞ . It also holds that
(a) $| E\{ x(t) \mid \mathcal{F}_{t-\tau} \} - \bar{x} | = o(1)$ (w.p.1),
(b) $| E\{ x(t-d)\, x(t) \mid \mathcal{F}_{t-\tau} \} - m(d) | = o(1)$ (w.p.1),
when τ → ∞ , for all d ∈ { 0 , 1 , 2 , … } , τ > d ( $\mathcal{F}_{t-\tau}$ denotes the minimal σ-algebra generated by { x ( 0 ) , x ( 1 ) , … , x ( t − τ ) } , and o ( 1 ) denotes a function that converges to zero as τ → ∞ ).
Hence, (A2’) requires stationarity, boundedness, and imposes a mixing condition on the signal { x ( t ) } . The explicitly used time shift parameter d will be used later for introducing a new algorithm based on an instrumental variable, capable of dealing with possible measurement noise.
The following theorem examines the convergence of the algorithm (11) under assumption (A2’):
Theorem 2 
([16]). Assume that assumptions (A1), (A2’), (A3) and (A4) hold. Then there exists δ* > 0 such that, for all δ ≤ δ* in (10), $\lim_{t \to \infty} \hat{\phi}(t) = (i_1 \rho_1 + i_2 \rho_2)\, \hat{\phi}(0)$ in the mean square sense and w.p.1.

5. Extensions of the Basic Algorithm

In this section, we introduce several modifications and generalizations of the basic algorithm (5), so that it is possible to achieve distributed calibration under more challenging conditions, typically present in real-life SNs: communication dropouts, additive communication noise, measurement noise, and asynchronous communication. Convergence properties of the introduced modifications are presented in detail.

5.1. Communication Errors

In this subsection, we assume that inter-node communication errors can be manifested in two ways: (1) communication dropouts (outages) and (2) additive communication noise. Communication dropouts typically occur in SNs using digital communication; additive noise can, in this case, model quantization effects. For example, in the case of smart city sensor networks, depicted in Figure 1, the dropouts will happen relatively often because of the dynamic environment, where both physical obstacles and electronic interference can be persistent. In certain, less frequent practical situations, SNs can use analog communication (e.g., when certain types of energy harvesting are used [56]), when additive communication noise is dominant, and dropouts appear less frequently.
The communication errors are formally introduced using the following assumptions:
(A5) The weights γ i j in the algorithm (5) are now randomly time-varying, according to stochastic processes given by { γ i j ( t ) } = { u i j ( t ) γ i j } , where { u i j ( t ) } are i.i.d. binary random sequences, such that u i j ( t ) = 1 with probability p i j ( p i j > 0 when j N i ), and u i j ( t ) = 0 with probability 1 p i j .
(A6) Instead of receiving z ^ j ( t ) from the j-th node, the i-th node receives z ^ j ( t ) + ξ i j ( t ) , where { ξ i j ( t ) } is an i.i.d. random sequence with E { ξ i j ( t ) } = 0 and E { ξ i j ( t ) 2 } = ( σ i j ξ ) 2 < ∞ .
(A7) Processes { x ( t ) } , { u i j ( t ) } and { ξ i j ( t ) } are mutually independent.
Based on the above assumptions, the communication dropout at any iteration t, when node j is sending to node i, will happen with probability 1 p i j , independently of the additive communication noise process { ξ i j ( t ) } and the measured signal { x ( t ) } .
Denoting
$\nu_i(t) = \sum_{j \in N_i} \gamma_{ij}(t)\, \xi_{ij}(t) \begin{bmatrix} \alpha_i y_i(t) \\ 1 + \beta_i y_i(t) \end{bmatrix}$,
and $\nu(t) = [\, \nu_1(t)^T \; \cdots \; \nu_n(t)^T \,]^T$, one obtains from (10) that
$\hat{\phi}(t+1) = [\, I + (\Delta(t) \otimes I_2)\, B(t) \,]\, \hat{\phi}(t) + (\Delta(t) \otimes I_2)\, \nu(t)$,
where B ( t ) = Ω ( t ) ( Γ ( t ) I 2 ) , and Γ ( t ) is obtained from Γ by applying (A5).
Convergence properties of the recursion (20), under the additional assumptions (A5)–(A7), can be derived starting from the results of the previous subsection. Due to the mutual independence of the random variables in B ( t ) , it can be concluded that E { B ( t ) } = B ¯ = Ω ¯ ( Γ ¯ ⊗ I 2 ) , where Γ ¯ = E { Γ ( t ) } coincides with Γ except that γ i j is replaced by γ i j p i j . Also, it follows that B ˜ ( t ) = B ( t ) − B ¯ is a martingale difference sequence (since E { B ˜ ( t ) | F t − 1 } = 0 ). Furthermore, it can be concluded that B ¯ = Ω ¯ ( Γ ¯ ⊗ I 2 ) has the same spectral structure as the corresponding matrix in (11): it has two zero eigenvalues, while the remaining eigenvalues have negative real parts.
Since the additive noise is now present in the recursions (20), (A1) needs to be replaced with the following assumption, typical in the stochastic approximation literature (e.g., [57]):
(A1’) $\delta_i(t) = \delta(t) > 0$, $\sum_{t=0}^{\infty} \delta(t) = \infty$, $\sum_{t=0}^{\infty} \delta(t)^2 < \infty$, for all $i = 1, \ldots, n$.
Intuitively, (A1’) introduces diminishing gains δ i ( t ) which converge to zero slowly enough, so that the additive noise can be averaged out while asymptotic convergence to a consensus point is achieved (despite the presence of noise).
Therefore, we have
$\hat{\phi}(t+1) = (I + \delta(t)\, \bar{B})\, \hat{\phi}(t) + \delta(t)\, \tilde{B}(t)\, \hat{\phi}(t) + \delta(t)\, \nu(t)$.
Similarly as in the noiseless case, let us introduce the similarity transformation
$T = [\, i_1 \;\; i_2 \;\; T_{2n \times (2n-2)} \,]$,
where $T_{2n \times (2n-2)}$ is a $2n \times (2n-2)$ matrix such that $\mathrm{span}\{T_{2n \times (2n-2)}\} = \mathrm{span}\{\bar{B}\}$. Then, $T^{-1} = \begin{bmatrix} \rho_1 \\ \rho_2 \\ S_{(2n-2) \times 2n} \end{bmatrix}$, where ρ 1 and ρ 2 are the left eigenvectors of B ¯ corresponding to the eigenvalue at the origin. By applying the transformation T to (21), and using stochastic Lyapunov stability arguments, along with arguments typically used in the analysis of stochastic approximation algorithms [13,17,58,59], the following theorem can be proved:
Theorem 3
([13,17]). Let Assumptions (A1’), (A2)–(A7) be satisfied. Then, ϕ ^ ( t ) generated by (21) converges to i 1 w 1 + i 2 w 2 in the mean square sense and w.p.1, where w 1 and w 2 are scalar random variables satisfying E { w 1 } = ρ 1 ϕ ^ ( 0 ) and E { w 2 } = ρ 2 ϕ ^ ( 0 ) .
The theorem essentially states that, again, all the corrected gains converge to the same point, and all the corrected offsets converge to the same point; however, because of the additive communication noise, these points are random and depend on the noise realization. The mean values of these possible convergence points depend on the sensor parameters α i and β i , the design parameters γ i j , as well as on the dropout probabilities p i j , i , j = 1 , … , n .

5.2. Measurement Noise

In this subsection we, in addition to communication errors, assume that the signal x ( t ) is measured with additive measurement noise. This situation is of essential importance for practical applications since practically all the existing sensors contain certain measurement errors which are typically modeled using stochastic processes [3].
Formally, we model the additive noise stochastic process using the following assumption:
(A8) Instead of y i ( t ) given by (1), the sensor measurements are now contaminated by noise, and given by
$y_i^{\eta}(t) = \alpha_i x(t) + \beta_i + \eta_i(t)$,
where { η i ( t ) } , i = 1 , n , are zero mean i.i.d. random sequences with E { η i ( t ) 2 } = ( σ i η ) 2 , independent of the measured signal x ( t ) .
By using y i η ( t ) instead of y i ( t ) in the basic algorithm (5), one obtains the following “noisy” version of (8):
$\hat{\phi}_i(t+1) = \hat{\phi}_i(t) + \delta_i(t) \sum_{j \in N_i} \gamma_{ij} \left\{ [\Omega_i(t) + \Psi_i(t)]\, [\hat{\phi}_j(t) - \hat{\phi}_i(t)] + N_{ij}(t)\, \hat{\phi}_j(t) - N_{ii}(t)\, \hat{\phi}_i(t) \right\}$,
where $\Psi_i(t) = \eta_i(t) \begin{bmatrix} \alpha_i x(t) & \alpha_i \\ \beta_i x(t) & \beta_i \end{bmatrix}$, $N_{ij}(t) = \frac{\eta_j(t)}{\alpha_j} \begin{bmatrix} \alpha_i y_i(t) & 0 \\ 1 + \beta_i y_i(t) & 0 \end{bmatrix} + \frac{\eta_j(t)\, \eta_i(t)}{\alpha_j} \begin{bmatrix} \alpha_i & 0 \\ \beta_i & 0 \end{bmatrix}$ and $N_{ii}(t) = \frac{\eta_i(t)}{\alpha_i} \begin{bmatrix} \alpha_i y_i(t) & 0 \\ 1 + \beta_i y_i(t) & 0 \end{bmatrix} + \frac{\eta_i(t)^2}{\alpha_i} \begin{bmatrix} \alpha_i & 0 \\ \beta_i & 0 \end{bmatrix}$, assuming $\alpha_i \neq 0$, $i = 1, \ldots, n$. It is important to observe here that $E\{\Psi_i(t)\} = 0$ and $E\{N_{ij}(t)\} = 0$ for $j \neq i$; however, $E\{N_{ii}(t)\} = \frac{(\sigma_i^{\eta})^2}{\alpha_i} \begin{bmatrix} \alpha_i & 0 \\ \beta_i & 0 \end{bmatrix} \neq 0$.
Assuming again that the step sizes δ i ( t ) , i = 1 , … , n , satisfy (A1’), one can obtain the following equation analogous to (10):
$\hat{\phi}(t+1) = \left( I + \delta(t) \left\{ [\Omega(t) + \Psi(t)]\, (\Gamma \otimes I_2) + \tilde{N}(t) \right\} \right) \hat{\phi}(t)$,
where $\Psi(t) = \mathrm{diag}\{\Psi_1(t), \ldots, \Psi_n(t)\}$ and $\tilde{N}(t) = [\tilde{N}_{ij}(t)]$, with $\tilde{N}_{ii}(t) = -\sum_{k, k \neq i} \gamma_{ik} N_{ii}(t)$ and $\tilde{N}_{ij}(t) = \gamma_{ij} N_{ij}(t)$ for $i \neq j$, $i, j = 1, \ldots, n$.
In an analogous way as in the previous section, instead of (11), the following equation is obtained for the mean of the corrected calibration parameters
$\bar{\phi}(t+1) = [\, I + \delta(t)\, (\bar{B} + \Sigma^{\eta}) \,]\, \bar{\phi}(t)$,
where B ¯ is as in (11) and $\Sigma^{\eta} = -\,\mathrm{diag}\{ \sum_{j \in N_1} \gamma_{1j}\, E\{N_{11}(t)\}, \ldots, \sum_{j \in N_n} \gamma_{nj}\, E\{N_{nn}(t)\} \}$ is block-diagonal. Because of the additional term Σ η , the row sums of the matrix B ¯ + Σ η are no longer equal to zero, so that convergence to consensus (as in Theorem 1) cannot be achieved in this case.
However, it can be seen from the structure of the recursion (24) that, if we assume that the measurement noise variances ( σ i η ) 2 are a priori known, we can use them to modify the basic algorithm (5) in the following way, ensuring again the asymptotic convergence to consensus:
$\hat{\theta}_i(t+1) = \hat{\theta}_i(t) + \delta(t) \left\{ \sum_{j \in N_i} \gamma_{ij}\, \epsilon_{ij}^{\eta}(t) \begin{bmatrix} y_i^{\eta}(t) \\ 1 \end{bmatrix} + (\sigma_i^{\eta})^2 \sum_{j \in N_i} \gamma_{ij} \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \hat{\theta}_i(t) \right\}$,
where ϵ i j η ( t ) = z ^ j η ( t ) z ^ i η ( t ) and z ^ i η ( t ) = a ^ i ( t ) y i η ( t ) + b ^ i ( t ) , i = 1 , , n .
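A minimal Python sketch of this compensated update is given below. It assumes that the noise variances ( σ i η ) 2 are known, as required above; all names and the data layout are illustrative choices of ours, not an implementation from ref. [17].

```python
import numpy as np

def compensated_update(theta, y_noisy, neighbors, gamma, delta_t, sigma2_eta):
    """One iteration of the noise-compensated recursion: the extra term offsets
    the bias that the measurement noise would otherwise induce in the gain estimate."""
    z = theta[:, 0] * y_noisy + theta[:, 1]            # corrected (noisy) outputs
    theta_next = theta.copy()
    for i in range(len(y_noisy)):
        regressor = np.array([y_noisy[i], 1.0])        # [y_i^eta(t), 1]^T
        weight_sum = sum(gamma[i, j] for j in neighbors[i])
        update = sum(gamma[i, j] * (z[j] - z[i]) * regressor for j in neighbors[i])
        compensation = sigma2_eta[i] * weight_sum * np.array([theta[i, 0], 0.0])
        theta_next[i] += delta_t * (update + compensation)
    return theta_next
```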
The following theorem deals with the convergence of the above modification of the basic algorithm, when the measurement noise is present together with the communication errors. The convergence points will again depend on the measurement and communication noise realizations, in a similar way as in Theorem 3.
Theorem 4
([17]). Assume that the assumptions (A1’), (A2)–(A8) hold. Then, ϕ ^ ( t ) , given by (25), converges to i 1 w 1 + i 2 w 2 in the mean square sense and w.p.1, where w 1 and w 2 are scalar random variables satisfying E { w 1 } = ρ 1 ϕ ^ ( 0 ) and E { w 2 } = ρ 2 ϕ ^ ( 0 ) .
Notice that the above theorem was based on assumption (A2): indeed, when both { x ( t ) } and { η i ( t ) } are i.i.d. sequences, it is not surprising that asymptotic consensus is achievable only provided σ i η , i = 1 , … , n , are known. However, we can replace the unrealistic assumption (A2) with (A2’) (introduced in Section 4 for the noiseless case), allowing correlated sequences { x ( t ) } , which is almost always the case in practice. In this way, the correlatedness problem present in the algorithm (24) can be overcome, without requiring any a priori information about the measurement noise process. The idea is to introduce instrumental variables into the basic algorithm, in a way analogous to that often used in the field of system identification, e.g., [60,61]. Instrumental variables have the basic property of being correlated with the measured signal and uncorrelated with the noise. If { ζ i ( t ) } is the instrumental variable sequence of the i-th agent, one has to ensure that ζ i ( t ) is correlated with x ( t ) and uncorrelated with η j ( t ) , j = 1 , … , n . Under (A2’), a logical choice is to take a delayed sample of the measured signal as the instrumental variable, i.e., to take ζ i ( t ) = y i η ( t − d ) , where d ≥ 1 . Consequently, we present the following general calibration algorithm based on instrumental variables, able to cope with measurement noise:
$\hat{\theta}_i(t+1) = \hat{\theta}_i(t) + \delta(t) \sum_{j \in N_i} \gamma_{ij}\, \epsilon_{ij}^{\eta}(t) \begin{bmatrix} y_i^{\eta}(t-d) \\ 1 \end{bmatrix}$,
where d 1 and ϵ i j η ( t ) = z ^ j η ( t ) z ^ i η ( t ) , z ^ i η ( t ) = a ^ i ( t ) y i η ( t ) + b ^ i ( t ) , i = 1 , , n . Following the derivations from Section 3, one obtains from (26) the following relations involving explicitly x ( t ) and the noise terms:
$\hat{\phi}_i(t+1) = \hat{\phi}_i(t) + \delta(t) \sum_{j \in N_i} \gamma_{ij} \left\{ (\Omega_i(t,d) + \Psi_i(t,d))\, (\hat{\phi}_j(t) - \hat{\phi}_i(t)) + N_{ij}(t,d)\, \hat{\phi}_j(t) - N_{ii}(t,d)\, \hat{\phi}_i(t) \right\}$,
where
$\Omega_i(t,d) = \begin{bmatrix} \alpha_i \beta_i x(t) + \alpha_i^2 x(t)\, x(t-d) & \alpha_i \beta_i + \alpha_i^2 x(t-d) \\ (1+\beta_i^2)\, x(t) + \alpha_i \beta_i x(t)\, x(t-d) & 1 + \beta_i^2 + \alpha_i \beta_i x(t-d) \end{bmatrix}$,
$\Psi_i(t,d) = \eta_i(t-d) \begin{bmatrix} \alpha_i x(t) & \alpha_i \\ \beta_i x(t) & \beta_i \end{bmatrix}$,
$N_{ij}(t,d) = \frac{\eta_j(t)}{\alpha_j} \begin{bmatrix} \alpha_i y_i(t-d) & 0 \\ 1 + \beta_i y_i(t-d) & 0 \end{bmatrix} + \frac{\eta_j(t)\, \eta_i(t-d)}{\alpha_j} \begin{bmatrix} \alpha_i & 0 \\ \beta_i & 0 \end{bmatrix}$
and
$N_{ii}(t,d) = \frac{\eta_i(t)}{\alpha_i} \begin{bmatrix} \alpha_i y_i(t-d) & 0 \\ 1 + \beta_i y_i(t-d) & 0 \end{bmatrix} + \frac{\eta_i(t)\, \eta_i(t-d)}{\alpha_i} \begin{bmatrix} \alpha_i & 0 \\ \beta_i & 0 \end{bmatrix}$.
In the same way as in (23), we have
$\hat{\phi}(t+1) = \left( I + \delta(t) \left\{ [\Omega(t,d) + \Psi(t,d)]\, (\Gamma \otimes I_2) + \tilde{N}(t,d) \right\} \right) \hat{\phi}(t)$,
where $\Omega(t,d) = \mathrm{diag}\{\Omega_1(t,d), \ldots, \Omega_n(t,d)\}$, $\Psi(t,d) = \mathrm{diag}\{\Psi_1(t,d), \ldots, \Psi_n(t,d)\}$, and $\tilde{N}(t,d) = [\tilde{N}_{ij}(t,d)]$, with $\tilde{N}_{ii}(t,d) = -\sum_{k, k \neq i} \gamma_{ik} N_{ii}(t,d)$ and $\tilde{N}_{ij}(t,d) = \gamma_{ij} N_{ij}(t,d)$ for $i \neq j$, $i, j = 1, \ldots, n$.
To formulate a convergence theorem for (28), the following modification of (A4) is needed:
(A4’) $m(d) > \bar{x}^2$ for some $d = d_0 \geq 1$.
This assumption requires that the correlation m ( d 0 ) be large enough. Similarly to the case of (A4), it can be concluded that (A4’) implies that $-\bar{\Omega}_i(d_0)$, with $\bar{\Omega}_i(d) = E\{\Omega_i(t,d)\}$, is Hurwitz for all i. Similarly as in the above cases, let us introduce the similarity transformation
$T = [\, i_1 \;\; i_2 \;\; T_{2n \times (2n-2)} \,]$,
where $T_{2n \times (2n-2)}$ is a $2n \times (2n-2)$ matrix such that $\mathrm{span}\{T_{2n \times (2n-2)}\} = \mathrm{span}\{\bar{B}(d)\}$. Then, $T^{-1} = \begin{bmatrix} \rho_1 \\ \rho_2 \\ S_{(2n-2) \times 2n} \end{bmatrix}$, where ρ 1 and ρ 2 are the left eigenvectors of $\bar{B}(d) = E\{\Omega(t,d)\, (\Gamma(t) \otimes I_2)\} = \bar{\Omega}(d)\, (\bar{\Gamma} \otimes I_2)$ corresponding to the zero eigenvalue. The following theorem deals with the convergence of the instrumental variable algorithm (26). The convergence point, again, depends on the noise realization.
Theorem 5
([16]). Assume that the assumptions (A1’), (A2’), (A3), (A4’), (A5)–(A8) hold. Then ϕ ^ ( t ) , given by (28) with d = d 0 , converges to i 1 w 1 + i 2 w 2 in the mean square sense and w.p.1, where w 1 and w 2 are scalar random variables satisfying E { w 1 } = ρ 1 ϕ ^ ( 0 ) and E { w 2 } = ρ 2 ϕ ^ ( 0 ) .
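For completeness, a Python sketch of the instrumental-variable update (26) is given below (communication dropouts and communication noise are omitted for brevity); all names are illustrative assumptions of ours, not an implementation from ref. [16].

```python
import numpy as np

def iv_update(theta, y_now, y_delayed, neighbors, gamma, delta_t):
    """One iteration of the instrumental-variable recursion (26): the regressor is
    built from the delayed reading y_i^eta(t-d), which is correlated with x(t) but
    uncorrelated with the measurement noise at time t, so no noise variances are needed."""
    z = theta[:, 0] * y_now + theta[:, 1]              # corrected outputs at time t
    theta_next = theta.copy()
    for i in range(len(y_now)):
        instrument = np.array([y_delayed[i], 1.0])     # [y_i^eta(t-d), 1]^T
        update = sum(gamma[i, j] * (z[j] - z[i]) * instrument
                     for j in neighbors[i])
        theta_next[i] += delta_t * update
    return theta_next
```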

5.3. Asynchronous Broadcast Gossip Communication

So far we have shown how to deal with most of the practical challenges which emerge when dealing with real life SNs, such as communication dropouts, communication additive noise and measurement noise. However, in all of the above discussed algorithms we have implicitly assumed that all the nodes in the network share a common clock, based on which the recursions in (5), (25) or (26) can be implemented synchronously. Indeed, when introducing the basic algorithm we have assumed that the signal x ( t ) is being measured in discrete-time instances t by all the nodes. These instances are also used as time indexes of synchronous recursions of the above algorithms. Yet, there are many practical cases of SNs for which it is impossible or impractical to function synchronously. A typical example is the case when the nodes follow certain sleeping policies in order to minimize power consumption (e.g., [3]). For example, the nodes in SN shown in Figure 1, measuring air pollution or atmospheric conditions, may be programmed to make measurements less often during periods in which there is less traffic in the city. These types of situations are rigorously treated in the rest of this subsection.
Instead of the problem setup introduced in Section 3, assume now that the sensors measure a continuous-time signal x ( t ) at discrete time points $t_k \in \mathbb{R}_+$, $k = 1, 2, \ldots$, with $t_{k+1} > t_k$, producing the sensor outputs
$y_i(t_k) = \alpha_i x(t_k) + \beta_i + \eta_i(t_k)$,
where the α i and β i are the same unknown parameters as in the previous subsections, and we also assume that the measurement noise η i ( t k ) , i = 1 , , n , is present in the sensor readings.
Furthermore, since the goal is to remove dependence on a common global clock, it is now assumed that every node j ∈ N has its own local clock. For the sake of compact notation and simpler derivations, a single clock, called the global virtual clock, is introduced, which ticks when any of the local clocks ticks. Hence, t k in (29) can be considered as the time at which the k-th tick of the virtual clock happened. To have a well defined situation, it is formally assumed that the ticks of the local clocks are independent, and that the intervals between any two consecutive ticks are finite w.p.1. It is also assumed, for the sake of simpler derivations, that the unconditional probability that the j-th clock ticks at an instant t k is q j > 0 , independently of k. It is easy to verify that these conditions are satisfied for a typical model used in SNs, where it is assumed that the local clocks tick according to independent Poisson processes with rates μ j (as in, e.g., [62,63]). This case will be adopted throughout this subsection. It directly follows that, in this case, the virtual global clock ticks according to a Poisson process with rate $\sum_{j=1}^{n} \mu_j$.
According to the above assumptions, let us denote with t l j the ticks of the local clock j, l = 1 , 2 , . The communication protocol can then be defined in the following way. At each local clock tick, a node j makes the local sensor measurement, calculates the corrected sensor output z j ( t l j ) (based on the current estimates of calibration parameters a j and b j ), and broadcasts it to its out-neighbors i N j out . We assume also that communication dropouts can happen, i.e., each node i N j out receives the transmitted message with probability p i j > 0 . For the sake of clarity of presentation, we do not treat additive communication noise in this subsection. It is also assumed that the communication delay is negligible, so that, practically at the same time instant all the nodes which have received the broadcast, perform the local sensor reading, calculate their corrected outputs z i ( t l j ) , and update the local estimates of their calibration parameters a i and b i . This procedure is repeated for any local clock tick. The index of the node whose clock has ticked at instant t k is denoted by j ( k ) , and let J ( k ) be the subset of the out-neighbors i N j ( k ) out which have received the broadcast message. Also, let x ( k ) = x ( t k ) = x ( t l j ( k ) ) , y i ( k ) = y i ( t k ) = y i ( t l j ( k ) ) , y j ( k ) = y j ( t k ) = y j ( k ) ( t l j ( k ) ) , z i ( k ) = z i ( t k ) = z i ( t l j ( k ) ) , z j ( k ) = z j ( t k ) = z j ( k ) ( t l j ( k ) ) , η i ( k ) = η i ( t k ) = η i ( t l j ( k ) ) and η j ( k ) = η j ( t k ) = η j ( k ) ( t l j ( k ) ) for some l.
The measurement noise is treated as in the previous subsection, by using the delayed measurement y i ( d i ( k ) ) as the instrumental variable
ζ i ( k ) = y i ( d i ( k ) ) ,
where d i ( k ) is the global iteration number that corresponds to the closest past measurement of the node i. By using the same local criteria as in (3) and gradients as in (4), the following new recursion for updating the calibration parameters at node i is formulated:
$\hat{\theta}_i(k) = \hat{\theta}_i(k-1) + \delta_i(k)\, \gamma_{i,j(k)}\, \epsilon_{i,j(k)}(k) \begin{bmatrix} y_i(d_i(k)) \\ 1 \end{bmatrix}$,
where:
  • θ ^ i ( k ) = [ a ^ i ( k ) b ^ i ( k ) ] T ,
  • δ i ( k ) is the step size given by $\delta_i(k) = \nu_i(k)^{-c}$, where $\nu_i(k) = \sum_{m=1}^{k} I\{i \in J(m)\}$ is the number of parameter updates of node i up to iteration k, with $1/2 < c \leq 1$ ( I { · } denotes the indicator function),
  • $\epsilon_{i,j(k)}(k) = \hat{z}_{j(k)}(k) - \hat{z}_i(k)$, where
    $\hat{z}_{j(k)}(k) = \hat{a}_{j(k)}(k-1)\, y_{j(k)}(k) + \hat{b}_{j(k)}(k-1)$,
    $\hat{z}_i(k) = \hat{a}_i(k-1)\, y_i(k) + \hat{b}_i(k-1)$
    are the corrected outputs of node j ( k ) and node i.
The initial conditions are adopted to be θ ^ i ( 0 ) = [ 1 0 ] T . Note that, according to the problem setup, at a given iteration k only the nodes i J ( k ) perform the above parameters update; for the rest of the nodes it holds that θ ^ i ( k ) = θ ^ i ( k 1 ) .
Computationally, the algorithm is as simple as the basic one, requiring only a few additions and multiplications in one iteration. Information needed at node i are: the local sensor measurement, the local instrumental variable, and the current output sent by an in-neighbor j. Knowledge of the global iteration index k (or d i ( k ) ) is not needed.
From the above definition of the step size δ i ( k ) it can be concluded that it depends only on the number of local clock ticks, which makes the algorithm completely decentralized.
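The following Python sketch illustrates one broadcast gossip event under the setup above: node j ( k ) wakes up, broadcasts its corrected output, and every out-neighbor that receives it performs the update (31). The wake-up mechanism, names and parameter values are illustrative assumptions of ours, not an implementation from ref. [14].

```python
import numpy as np

def gossip_event(theta, counters, j, x_now, alpha, beta, sigma_eta, y_prev,
                 out_neighbors, gamma, p_recv, c=0.8, rng=None):
    """One asynchronous iteration (31): node j broadcasts its corrected output;
    each receiving out-neighbor i updates theta[i] using its delayed reading
    y_prev[i] as the instrumental variable."""
    rng = rng or np.random.default_rng()
    n = len(alpha)
    y = alpha * x_now + beta + sigma_eta * rng.standard_normal(n)   # noisy readings (29)
    z_broadcast = theta[j, 0] * y[j] + theta[j, 1]                  # value sent by node j
    for i in out_neighbors[j]:
        if rng.random() > p_recv[i, j]:                             # communication dropout
            continue
        counters[i] += 1                                            # nu_i(k): local update count
        delta_i = counters[i] ** (-c)                               # step size, 1/2 < c <= 1
        z_i = theta[i, 0] * y[i] + theta[i, 1]
        instrument = np.array([y_prev[i], 1.0])                     # [y_i(d_i(k)), 1]^T
        theta[i] += delta_i * gamma[i, j] * (z_broadcast - z_i) * instrument
        y_prev[i] = y[i]                                            # stored for the next instrument
    return theta, counters, y_prev
```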
It should also be noticed that the instrumental variables in (31) can be selected in several ways. For example, instead of choosing (30), it can be practical to choose ζ i ( k ) = y i ( t ¯ l j , i ) , where t ¯ l j , i is the time instant of a supplementary measurement of node i, just after the last step of the recursion (31) has been locally performed. This scheme is not assumed in the sequel, because of much more complicated notation; all the results can be easily transferred to this case.
Similarly as in the synchronous case, we introduce:
$\hat{\phi}_i(k) = \begin{bmatrix} \hat{g}_i(k) \\ \hat{f}_i(k) \end{bmatrix} = \begin{bmatrix} \alpha_i & 0 \\ \beta_i & 1 \end{bmatrix} \hat{\theta}_i(k)$,
and
$\epsilon_{i,j(k)}(k) = [\, x(k) \;\; 1 \,]\, (\hat{\phi}_{j(k)}(k) - \hat{\phi}_i(k)) + \hat{a}_{j(k)}(k)\, \eta_{j(k)}(k) - \hat{a}_i(k)\, \eta_i(k)$.
Consequently, we have
$\hat{\phi}_i(k) = \hat{\phi}_i(k-1) + \delta_i(k)\, \gamma_{i,j(k)} \left\{ (\Omega_i(k) + \Psi_i(k))\, (\hat{\phi}_{j(k)}(k-1) - \hat{\phi}_i(k-1)) + N_{i,j(k)}(k)\, \hat{\phi}_{j(k)}(k-1) - N_{ii}(k)\, \hat{\phi}_i(k-1) \right\}$,
where
$\Omega_i(k) = \begin{bmatrix} \alpha_i \beta_i x(k) + \alpha_i^2 x(k)\, x(d_i(k)) & \alpha_i \beta_i + \alpha_i^2 x(d_i(k)) \\ (1+\beta_i^2)\, x(k) + \alpha_i \beta_i x(k)\, x(d_i(k)) & 1 + \beta_i^2 + \alpha_i \beta_i x(d_i(k)) \end{bmatrix}$,
$\Psi_i(k) = \eta_i(d_i(k)) \begin{bmatrix} \alpha_i x(k) & \alpha_i \\ \beta_i x(k) & \beta_i \end{bmatrix}$,
$N_{i,j(k)}(k) = \frac{\eta_{j(k)}(k)}{\alpha_{j(k)}} \begin{bmatrix} \alpha_i y_i^0(d_i(k)) & 0 \\ 1 + \beta_i y_i^0(d_i(k)) & 0 \end{bmatrix} + \frac{\eta_{j(k)}(k)\, \eta_i(d_i(k))}{\alpha_{j(k)}} \begin{bmatrix} \alpha_i & 0 \\ \beta_i & 0 \end{bmatrix}$
and
$N_{ii}(k) = \frac{\eta_i(k)}{\alpha_i} \begin{bmatrix} \alpha_i y_i^0(d_i(k)) & 0 \\ 1 + \beta_i y_i^0(d_i(k)) & 0 \end{bmatrix} + \frac{\eta_i(k)\, \eta_i(d_i(k))}{\alpha_i} \begin{bmatrix} \alpha_i & 0 \\ \beta_i & 0 \end{bmatrix}$,
where $y_i^0(k) = \alpha_i x(k) + \beta_i$, with the initial conditions $\hat{\phi}_i(0) = [\alpha_i \;\; \beta_i]^T$, $i = 1, \ldots, n$.
Recursions (36) for i = 1 , , n , can be written compactly as
$\hat{\phi}(k) = \left\{ I + [\Omega(k) + \Psi(k)]\, (\Delta(k)\, \Gamma(k) \otimes I_2) + (\Delta(k) \otimes I_2)\, \tilde{N}(k) \right\} \hat{\phi}(k-1)$,
where:
  • ϕ ^ ( k ) = [ ϕ ^ 1 ( k ) T ϕ ^ n ( k ) T ] T ,
  • Δ ( k ) = diag { δ 1 ( k ) , , δ n ( k ) } ,
  • Ω ( k ) = diag { Ω 1 ( k ) , , Ω n ( k ) } ,
  • $\Gamma(k) = [\Gamma(k)_{lm}]$, with $\Gamma(k)_{ll} = -\gamma_{l,j(k)}$ and $\Gamma(k)_{l,j(k)} = \gamma_{l,j(k)}$ for all $l \in J(k)$, and $\Gamma(k)_{lm} = 0$ otherwise,
  • Ψ ( k ) = diag { Ψ 1 ( k ) , , Ψ n ( k ) } ,
  • $\tilde{N}(k) = [\tilde{N}_{lm}(k)]$, where $\tilde{N}_{ll}(k) = -\gamma_{l,j(k)} N_{ll}(k)$ and $\tilde{N}_{l,j(k)}(k) = \gamma_{l,j(k)} N_{l,j(k)}(k)$ for all $l \in J(k)$, and $\tilde{N}_{lm}(k) = 0$ otherwise.
The initial condition is ϕ ^ ( 0 ) = [ ϕ ^ 1 ( 0 ) T ϕ ^ n ( 0 ) T ] T = [ [ α 1 β 1 ] T [ α n β n ] T ] T .
Since we have formulated a slightly different problem setup than in Section 3, we introduce a new set of assumptions, and denote them using letter B:
(B1) { x ( k ) } is a stationary random sequence, bounded w.p.1, and satisfying the ϕ-mixing condition.
(B2) Let { t i , l } , l = 1 , 2 , … represent the time instants at which node i performs measurements. Then, $\min_i \bar{r}_i > m^2$, where $m = E\{x(k)\}$ and $\bar{r}_i = E\{x(t_{i,l})\, x(t_{i,l-1})\}$, $i = 1, \ldots, n$.
(B3) Graph G has a spanning tree.
(B4) { η i ( k ) } , i = 1 , n , are zero-mean sequences of independent and bounded w.p.1 random variables. { η i ( k ) } is independent of the process { x ( t ) } , with E { η i ( k ) 2 } = ( σ i η ) 2 for all k.
Assumptions (B3) and (B4) are essentially the same as (A3) and (A8).
The ϕ-mixing condition (B1) represents one of the strong mixing conditions, usually satisfied for sensory signals [64,65,66].
Assumption (B2) represents an extension of assumption (A4), adapted to the presence of the instrumental variable y i ( d i ( k ) ) in (31). It guarantees persistence of excitation in the sense that the variance of x ( k ) must be greater than zero (for all k, because of stationarity), so that constant signals are not allowed [13,55]. However, it also ensures sufficient correlation between the instrumental variable and the current measurement, so that, e.g., white noise signals are also not allowed. It can be easily derived [14] that (B2) is satisfied if the autocovariance function of x ( t ) is positive in a sufficiently large interval around zero. Also, if the rates μ j are adjustable, we can choose μ min = min j ∈ N μ j large enough such that (B2) is always satisfied. Therefore, (B2) is, in general, not restrictive for processes having a dominant low-frequency spectrum, which is typical in practical applications of SNs.
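Assumption (B2) can also be checked empirically. The short sketch below does so for an illustrative slowly varying AR(1) signal sampled at Poisson clock ticks; the model and all parameter values are our own example choices, not taken from ref. [14].

```python
import numpy as np

rng = np.random.default_rng(1)
mu_rate, a = 5.0, 0.99                          # illustrative clock rate and AR(1) coefficient
ticks = np.cumsum(rng.exponential(1.0 / mu_rate, size=2000))   # Poisson wake-up times of one node

# slowly varying signal x(t): AR(1) with mean about 1, simulated on a fine time grid
grid = np.arange(0.0, ticks[-1] + 1.0, 0.01)
x = np.empty(len(grid))
x[0] = 1.0
for k in range(1, len(grid)):
    x[k] = a * x[k - 1] + (1 - a) * 1.0 + 0.1 * rng.standard_normal()
samples = x[np.searchsorted(grid, ticks) - 1]   # readings taken at the clock ticks

m = samples.mean()
r_bar = np.mean(samples[1:] * samples[:-1])
print(r_bar, m ** 2, r_bar > m ** 2)            # (B2) requires r_bar > m**2
```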
Based on the above modified problem definition, the following result was proved in ref. [14], stating that both corrected gains and corrected offsets will converge to consensus points (which depend on the realizations of the stochastic processes) for all the nodes.
Theorem 6
([14]). Let Assumptions (B1)–(B4) be satisfied. Then ϕ ^ ( k ) given by (37) converges to ϕ ^ = χ 1 i 1 + χ 2 i 2 in the mean square sense and w.p.1, where χ 1 and χ 2 are random variables with bounded second moments.

6. Discussion

6.1. Rate of Convergence

In the above subsections we did not discuss how quickly the presented algorithms converge to the specified points. Since the basic Equation (5) has a constant step size, it can be concluded that the asymptotic convergence rate in the noiseless case is exponential. In the cases of measurement and additive communication noise, the convergence rate of the algorithms can be obtained following the general methodology applied to the analysis of standard stochastic approximation algorithms. The following result gives an upper bound on the mean-square error with respect to the consensus point:
Theorem 7
([17]). Under the assumptions of any of Theorems 3, 4 or 5, together with $\lim_{t \to \infty} (\delta(t+1)^{-1} - \delta(t)^{-1}) = d \geq 0$, there exists a positive number σ* < 1 such that, for all 0 < σ < σ*, asymptotic consensus is achieved by the presented algorithms with the convergence rate $o(\delta(t)^{\sigma})$.
It might be problematic to obtain the precise value of σ* in concrete applications. However, it can be shown that it directly depends on the sensor and network properties (encoded by the matrix B ( t ) or B ( t , d ) ) and on the connectivity of the underlying communication graph [67]. On the one hand, if the number of nodes is increased without increasing the network connectivity, the rate of convergence will decrease; on the other hand, if the graph connectivity is increased, the convergence rate will also increase. For example, if the graph is fully connected, the convergence rate will be high, at the expense of a very large number of communication links. In practice, a compromise between the rate of convergence and the network complexity needs to be found.
Another compromise to be found is between the algorithm’s noise immunity and its convergence rate. Indeed, according to [68], assuming that δ ( t ) is given as $\delta(t) = m_1/(m_2 + t^{\mu})$, $m_1, m_2 > 0$, $1/2 < \mu \leq 1$, values of μ closer to 1/2 give a larger rate of convergence but higher sensitivity to noise, while values of μ closer to 1 give the opposite effect.
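For example (with constants chosen arbitrarily for illustration):

```python
def delta(t, m1=1.0, m2=10.0, mu=0.6):
    """Step-size schedule from Section 6.1: mu close to 1/2 gives faster convergence
    but higher noise sensitivity; mu close to 1 gives the opposite behavior."""
    return m1 / (m2 + t ** mu)
```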

6.2. Stationarity of the Measured Signal

In the previous sections, we commented on all the introduced assumptions, explaining their practical applicability. Let us make some additional comments on the stationarity assumption for the random process { x ( t ) } , introduced in (A2), (A2’) and (B1). From the point of view of applications, it cannot be considered restrictive, since it encompasses a large variety of quickly and slowly varying real signals. This assumption is not essential for proving convergence of the presented algorithms: it has been introduced primarily for the sake of focusing on the essential structural aspects of the algorithms and avoiding complex notation [13,14,17]. Notice, according to Lemmas 2 and 3, that the similarity transformation T can be applied even in the case of a time-varying matrix B ¯ ( t ) , owing to its specific structure; namely, we have $T^{-1} \bar{B}(t)\, T = \begin{bmatrix} 0_{2 \times 2} & 0_{2 \times (2n-2)} \\ 0_{(2n-2) \times 2} & \bar{B}^*(t) \end{bmatrix}$, where $\bar{B}^*(t)$ is Hurwitz and T is obtained from $\bar{B}(t^*)$ for any selected $t = t^*$. Moreover, notice that the conclusions of Theorem 1 hold, in general, provided the following unrestrictive condition holds: lim t τ ( I B ¯ ( t τ ) ) = 0 . Also, it is possible to conclude directly that the results of the above theorems hold for sufficiently slow changes of B ¯ ( t ) . Moreover, it is not difficult to prove that the above convergence results hold exactly when the signal is asymptotically stationary.

6.3. Network Weights Design

As already discussed in the previous subsections, the implicit goal of the presented calibration scheme is to exploit the sensors with a priori good calibration properties by making them dominate the final consensus value to which all the nodes converge. This can be done by adjusting the design weights $\gamma_{ij}$ in two ways: (1) if the majority of sensors are “good”, we can set all $\gamma_{ij}$ in the neighborhood of any node $i$ to the same value; or (2) if there is a smaller subset $N_f \subset N$ of a priori “good” sensors in the network, we should tune the values of $\gamma_{ij}$ appropriately. For the latter scenario, this subsection gives a more detailed analysis of the weight-adjustment problem in the case of asynchronous communications treated in Section 5.3.
According to the theoretical results presented in detail in ref. [14], the dominant component of the random variables [ χ 1 χ 2 ] in Theorem 6 is given by a weighted sum of the unknown sensor parameters α i and β i . The positive weights are determined by the left eigenvectors w 1 and w 2 of B ¯ corresponding to the zero eigenvalue. In turn, these weights are functions of the design parameters γ i j , the wake-up probabilities q j , and the dropout probabilities p j i , i , j = 1 , , n . Therefore, it is clear that the initial characteristics of a selected sensor i will have larger influence on the asymptotic consensus value if the appropriate elements of w 1 and w 2 are increased. This can be achieved in two ways:
  • By reducing the values of all the elements in the $i$-th row of $\bar{\Gamma}$, or
  • By increasing the values $\gamma_{ji}$, $j \ne i$, in the $i$-th column (keeping in mind that $\bar{\Gamma}$ must remain row stochastic).
The probabilities $q_j$ can in certain situations also be adjusted, since they depend on the rate of the local clock of node $j$: by increasing the clock rate of node $j$, the influence of that node on the asymptotic calibration parameter values achieved at consensus is also increased. Adjusting the dropout probabilities $1 - p_{ji}$ might also be possible in certain situations: by decreasing the probability that node $i$ receives messages from its neighbors, we increase its influence on the asymptotic consensus. Hence, there are several design variables which can be adjusted so that a desired convergence point is achieved.
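As a rough numerical illustration of this mechanism, the following sketch uses a simplified synchronous stand-in for the asynchronous expected-update matrix of [14]: the left Perron vector of a row-stochastic weight matrix quantifies each node's influence on the consensus value, and reducing the weights in the row of node 0 (so that it “listens less” to its neighbors, as in the first bullet above) visibly increases its entry.
```python
import numpy as np

def influence_weights(G):
    # Left Perron vector w of a row-stochastic matrix G (w^T G = w^T, sum(w) = 1):
    # larger w_i means node i has more influence on the consensus value.
    vals, vecs = np.linalg.eig(G.T)
    w = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return w / w.sum()

def row_stochastic(A, self_weight):
    # Off-diagonal weights scaled so that each row sums to one with the given self weight
    G = A / A.sum(axis=1, keepdims=True) * (1.0 - self_weight[:, None])
    return G + np.diag(self_weight)

n = 5
rng = np.random.default_rng(1)
A = rng.random((n, n))
np.fill_diagonal(A, 0.0)

uniform = row_stochastic(A, np.full(n, 0.5))
# Node 0 "listens less": its row weights towards the neighbours are reduced,
# which corresponds to the first bullet above and raises its influence.
stubborn = row_stochastic(A, np.array([0.95, 0.5, 0.5, 0.5, 0.5]))

print("uniform self-weights :", influence_weights(uniform).round(3))
print("node 0 listens less  :", influence_weights(stubborn).round(3))
```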

6.4. Macro Calibration for Networks with Reference Nodes

As discussed in the previous subsection, the selection of the weights in the matrix $\bar{\Gamma}$ is important for attaining the calibration goal of emphasizing a priori selected “good” sensors (“leaders”). Besides the described methods, this can ultimately be done by leaving the calibration parameters of the nodes from a set $N_f \subset N$ unchanged (reference nodes), and applying the recursions (31) (or (5), or (26)) only to the remaining nodes $i \in N \setminus N_f$. An example of a SN with such a topology, corresponding to the smart-city example in Figure 1, is shown in Figure 2.
In practice, this situation emerges, for example, when a SN needs to be expanded, i.e., when several uncalibrated sensors need to be added to an already calibrated SN. In this subsection, the convergence results for this case are presented assuming the asynchronous calibration algorithm [14].
First, we treat the special case in which $|N_f| = 1$, i.e., there is only one reference sensor, and we want to calibrate the rest of the SN so that the calibrated outputs converge to the output of the reference sensor. In this case, all of the above results still hold, since the resulting communication graph again has a spanning tree (with the reference node as its center node), which implies that (B3) (and (A3)) holds. Therefore, by applying the above convergence theorems, one concludes that the corrected gains and offsets $\hat{\phi}_i(k)$, $i = 1, \ldots, n$, converge to the same value, dictated by the “leader”.
In the general case, assume, without loss of generality, that $N_f = \{1, 2, \ldots, n_f\}$, $n_f = |N_f| > 0$, is the set of reference sensors with fixed parameters $\phi_i^f = [g_i^f \; f_i^f]^T$, $i \in N_f$, and define $\bar{\phi}^f = [\phi_1^{fT} \cdots \phi_{n_f}^{fT}]^T$. The calibration algorithms above are applied in the same way, except that the reference nodes do not change their calibration parameters: $\hat{\theta}_i(k) = \hat{\theta}_i(k-1)$ for all $i \in N_f$. Let $N \setminus N_f = \{n_f+1, \ldots, n\}$ and let $\hat{\phi}^v(k) = [\hat{\phi}_{n_f+1}(k)^T \cdots \hat{\phi}_n(k)^T]^T$ be the vector of all the calibration parameters to be tuned. In this case, the above theorems no longer hold, since, when $n_f > 1$, the communication graph does not necessarily satisfy (B3) (the graph no longer has a center node, because two or more reference nodes are not mutually reachable). Hence, a separate convergence theorem treating this case is needed:
Theorem 8
([14]). Let Assumptions (B1)–(B4) be satisfied and let all the nodes from $N \setminus N_f$ be reachable from all the nodes in $N_f$. Then the algorithm (31), in which $\gamma_{ij} = 0$ for all $i \in N_f$, provides convergence of $\hat{\phi}^v(k)$ in the mean-square sense and w.p.1 to the limit defined by
$$ \hat{\phi}^v = \big(\bar{\Gamma}^v \otimes I_2\big)^{-1}\big(\bar{\Gamma}^{f,v} \otimes I_2\big)\,\bar{\phi}^f, \qquad (38) $$
where the matrices $\bar{\Gamma}^v$ and $\bar{\Gamma}^{f,v}$ are the $(n-n_f)\times(n-n_f)$ and $(n-n_f)\times n_f$ submatrices of the matrix $Pc\bar{\Gamma}$ with indices $i, j = n_f+1, \ldots, n$ and $i = n_f+1, \ldots, n$, $j = 1, \ldots, n_f$, respectively; $P = \mathrm{diag}\{p_1, \ldots, p_n\}$ and $c$ is defined in (31).
It follows easily that if the calibration parameters of all the reference nodes are the same, equal to some $\phi^f$, then the calibration parameters of the remaining nodes also converge to $\phi^f$. If this is not the case, it can be seen from (38) that the calibration parameters of different nodes converge to different values, typically dictated by the reference sensors closest to a given node.
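The following sketch illustrates this behavior qualitatively with a simplified synchronous averaging iteration in which the reference nodes never update; the weight matrix is illustrative and the recursion is a stand-in for (31), not the algorithm itself. When the two reference characteristics differ, the remaining nodes settle at distinct values lying between them (the situation shown later in Figure 8); if the references were equal, all nodes would reach that common value.
```python
import numpy as np

rng = np.random.default_rng(2)
n, n_f = 8, 2                          # two reference nodes with fixed parameters
phi_ref = np.array([[1.0, 0.0],        # reference 1: corrected gain 1, offset 0
                    [1.2, 0.3]])       # reference 2: a different characteristic

# Random row-stochastic neighbour weights (self-loops included), illustrative only
A = (rng.random((n, n)) < 0.7).astype(float) + np.eye(n)
G = A / A.sum(axis=1, keepdims=True)

phi = np.vstack([phi_ref, rng.normal([1.0, 0.0], 0.3, size=(n - n_f, 2))])
for _ in range(2000):
    phi_new = G @ phi                  # consensus-style averaging step
    phi_new[:n_f] = phi_ref            # reference nodes keep their parameters
    phi = phi_new

print(phi.round(3))   # non-reference rows end up between the two reference rows
```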

6.5. Autonomous Gain Correction and Relationship with Time Synchronization

Having presented the most important aspects of the blind calibration methodology in both noiseless and noisy environments, we now comment on its relationship with algorithms for time synchronization in sensor networks, a problem that has attracted considerable attention (e.g., [40,41,42,43,44,45,46,47,69,70,71] and the references therein). Indeed, returning to the main measurement model, one easily realizes that the case of time synchronization has the form (1), with the absolute time $t$ replacing the signal value $x(t)$. The estimation schemes used in the analogous time synchronization algorithms are related to the estimation of the parameters of the calibration functions (2) based on local time measurements; however, they consist of one separate recursion for the relative drift estimation and another for the estimation of offsets, relying on the obtained relative drifts. In ref. [72] this scheme was reformulated in the light of the calibration problem and the methodology described above. One starts from the difference model $\Delta y_i(t) = y_i(t+1) - y_i(t) = \alpha_i \Delta x(t)$, where $\Delta x(t) = x(t+1) - x(t)$, and constructs a gradient recursion for $\hat{a}_i$ following the above methodology, having in mind that $\Delta y_i(t)$ does not depend on $\beta_i$. The estimation of $b_i$, on the other hand, has to start from (1); it has the form of the recursion for $\hat{b}_i$ in (5), but uses the $\hat{a}_i$ generated by the first recursion. Such a combined gradient algorithm based on $\Delta y_i(t)$ resembles typical time synchronization algorithms. In general, it is important to observe that replacing $x(t)$ by $t$ in the basic relation (1) suffers from a fundamental problem: the unboundedness of the linear function of $t$ contradicts the requirement of bounded second-order moments of $x(t)$ (typical for stochastic approximation algorithms), so that convergence of the obtained recursions cannot be guaranteed. This indicates that formal transfers of methodologies from one domain to the other should be done with extreme caution.
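A minimal sketch of such a combined scheme is given below. It is written from the description above rather than copied from [72]; the ring neighborhood, the signal model and the step-size sequence are illustrative assumptions. The gain recursion is driven only by the measurement increments, and therefore does not involve the offsets, while the offset recursion operates on the full outputs and reuses the current gain estimates.
```python
import numpy as np

rng = np.random.default_rng(3)
n, T = 6, 50000
alpha = rng.uniform(0.7, 1.3, n)            # unknown sensor gains
beta = rng.uniform(-0.3, 0.3, n)            # unknown sensor offsets
nbr = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}   # illustrative ring graph

a_hat, b_hat = np.ones(n), np.zeros(n)      # calibration gains and offsets
x_prev = 0.0
y_prev = alpha * x_prev + beta
for t in range(1, T):
    x = 0.9 * x_prev + rng.normal()         # common, temporally correlated signal
    y = alpha * x + beta                    # raw (noiseless) readings
    dy = y - y_prev                         # increments: independent of the offsets
    delta = 0.01 / t**0.6
    for i in range(n):
        for j in nbr[i]:
            # gain recursion driven by increments ("autonomous gain correction")
            a_hat[i] += delta * (a_hat[j] * dy[j] - a_hat[i] * dy[i]) * dy[i]
            # offset recursion on the full outputs, reusing the gain estimates
            e = (a_hat[j] * y[j] + b_hat[j]) - (a_hat[i] * y[i] + b_hat[i])
            b_hat[i] += delta * e
    x_prev, y_prev = x, y

print("corrected gains  :", (a_hat * alpha).round(3))          # approximately equal
print("corrected offsets:", (a_hat * beta + b_hat).round(3))   # approximately equal
```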

7. Simulation Results

In this section we present the results of extensive simulations, illustrating that the algorithms are applicable to real-world problems involving sensor networks with decentralized architectures. The results are presented in several figures which demonstrate the most important properties theoretically addressed in the previous sections. In all of the simulations, a SN with ten nodes and a randomly generated communication graph satisfying Assumption (A3) has been used. To show that the algorithms are applicable to a large variety of sensor characteristics, the sensor parameters $\alpha_i$ and $\beta_i$ have been generated randomly from uniform distributions with means one and zero, respectively, and with standard deviation 0.3.
Figure 3 shows the corrected gains $\hat{g}_i(t)$ and offsets $\hat{f}_i(t)$ obtained by the presented algorithm (5) in the noiseless case, for a preselected step size $\delta = 0.01$, equal for all the nodes. It is clear from the figure that convergence to consensus, and hence implicit asymptotic calibration, is achieved, with the asymptotic values close to the desired ones (one for the corrected gains, and zero for the corrected offsets).
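The following sketch reproduces this experiment qualitatively for a recursion of the type described for (5); the neighborhood structure, the Gaussian signal and the exact update order are illustrative assumptions, so it should be read as an approximation of the simulated setup rather than the exact code behind Figure 3.
```python
import numpy as np

rng = np.random.default_rng(4)
n, T, delta = 10, 5000, 0.01
hw = 0.3 * np.sqrt(3.0)                      # half-width of a uniform law with std 0.3
alpha = rng.uniform(1.0 - hw, 1.0 + hw, n)   # true gains  (mean 1, std 0.3)
beta = rng.uniform(-hw, hw, n)               # true offsets (mean 0, std 0.3)

# Ring plus one random shortcut per node: a simple connected neighbour structure
nbr = {i: {(i - 1) % n, (i + 1) % n, int(rng.integers(n))} - {i} for i in range(n)}

a_hat, b_hat = np.ones(n), np.zeros(n)       # calibration parameters
for t in range(T):
    x = rng.normal()                         # common measured signal (noiseless case)
    y = alpha * x + beta                     # raw readings
    z = a_hat * y + b_hat                    # calibrated outputs
    da, db = np.zeros(n), np.zeros(n)
    for i in range(n):
        for j in nbr[i]:
            e = z[j] - z[i]                  # local output disagreement
            da[i] += delta * e * y[i]        # gradient step for the gain parameter
            db[i] += delta * e               # gradient step for the offset parameter
    a_hat += da
    b_hat += db

print("corrected gains  :", (a_hat * alpha).round(3))          # near a common value close to 1
print("corrected offsets:", (a_hat * beta + b_hat).round(3))   # near a common value close to 0
```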
Figure 4 shows the simulation results for the situation in which the first node is set to be a reference node (“leader”) with perfect parameters $\alpha_1 = 1$ and $\beta_1 = 0$. As expected, all of the remaining nodes converge to this ideal characteristic.
Figure 5 depicts the corrected gains $\hat{g}_i(t)$ and offsets $\hat{f}_i(t)$ for the case in which all the theoretically discussed unreliabilities are included: communication dropouts with probability p = 0.2 for all the links in the network, normally distributed additive communication noise with variance 0.1, and normally distributed measurement noises with variances different for all the nodes, uniformly generated in the range (0, 0.1). In order to achieve convergence in the presence of noise, the time-varying diminishing step-size sequence $\delta(t) = 0.01/t^{0.6}$, equal for all the nodes, is used. The algorithm applied in this case is the one based on instrumental variables (26). The measured signal $x(t)$ is assumed to be generated by a second-order linear system driven by white noise, resulting in a correlated sequence with zero mean and standard deviation one. It is evident from the figure that convergence is achieved despite the presence of all the introduced unreliabilities.
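The sketch below shows one way such a signal can be generated (a stable second-order autoregressive model driven by white noise and scaled to unit standard deviation; the coefficients are illustrative) and indicates numerically why a delayed reading can act as an instrumental variable: it is essentially uncorrelated with the current measurement noise while remaining correlated with the signal. The exact instrument used in (26) may differ.
```python
import numpy as np

rng = np.random.default_rng(5)
T = 20000

# Second-order AR model driven by white noise, scaled to unit standard deviation
a1, a2 = 1.5, -0.7
x = np.zeros(T)
for t in range(2, T):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + rng.normal()
x /= x.std()

# One node's noisy readings (illustrative gain, offset and noise level)
alpha_i, beta_i, noise_std, d = 1.1, 0.2, 0.3, 2
v = rng.normal(0.0, noise_std, T)
y = alpha_i * x + beta_i + v

print("corr(y(t),   v(t)) =", np.corrcoef(y[d:], v[d:])[0, 1].round(3))   # biased regressor
print("corr(y(t-d), v(t)) =", np.corrcoef(y[:-d], v[d:])[0, 1].round(3))  # ~0: valid instrument
print("corr(y(t-d), x(t)) =", np.corrcoef(y[:-d], x[d:])[0, 1].round(3))  # still informative
```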
Next, we simulate the asynchronous algorithm (31), which includes instrumental variables. It is assumed that the local clocks of all the nodes are driven by Poisson processes with equal rates.
Figure 6 depicts the corrected gains $\hat{g}_i(k)$ and offsets $\hat{f}_i(k)$ generated by (31), assuming the step sizes $\delta(k) = 0.01/k^{0.6}$ and the presence of communication dropouts with probability $p_{ij} = 0.2$ on each link. The measurement noises and the signal $x(k)$ are the same as in Figure 5.
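The asynchronous mechanism itself can be sketched as follows (the calibration update is omitted and, for brevity, every node is assumed to hear every broadcast unless it is dropped): the superposition of independent Poisson clocks yields exponentially distributed inter-event times, the node that wakes up is drawn with probability proportional to its clock rate, and each potential receiver independently loses the broadcast with probability 0.2.
```python
import numpy as np

rng = np.random.default_rng(6)
n = 10
rates = np.full(n, 1.0)            # Poisson clock rates (equal here; raising a node's
                                   # rate raises its influence, cf. Section 6.3)
p_drop = 0.2                       # link dropout probability

t, events = 0.0, []
for _ in range(5):
    # Superposition of Poisson clocks: exponential inter-event time,
    # the waking node is drawn with probability proportional to its rate.
    t += rng.exponential(1.0 / rates.sum())
    j = rng.choice(n, p=rates / rates.sum())
    heard_by = [i for i in range(n) if i != j and rng.random() > p_drop]
    events.append((round(t, 3), j, heard_by))

for ev in events:
    print("t=%.3f  node %d broadcasts, received by %s" % ev)
```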
Figure 7 demonstrates the necessity of introducing the instrumental variables when noise is present. The basic algorithm (5), without instrumental variables, has been simulated, and convergence is not achieved in this case: all the corrected gains $\hat{g}_i(k)$ slowly converge to zero, which is highly undesirable.
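The mechanism behind this degeneration can be reproduced with a stylized two-node version of the plain gradient update (an illustration written for this discussion, not the simulation behind Figure 7): the regressor $y_i$ contains the node's own measurement noise, which introduces a bias proportional to the noise variance that pulls the gain estimates, and hence the corrected gains, towards zero.
```python
import numpy as np

rng = np.random.default_rng(7)
T, noise_std = 200000, 0.7
alpha, beta = np.array([0.9, 1.1]), np.array([-0.2, 0.3])
a_hat, b_hat = np.ones(2), np.zeros(2)

for t in range(1, T):
    x = rng.normal()                                        # common signal
    y = alpha * x + beta + rng.normal(0.0, noise_std, 2)    # noisy readings
    z = a_hat * y + b_hat
    delta = 0.01 / t**0.6
    for i, j in ((0, 1), (1, 0)):
        e = z[j] - z[i]
        a_hat[i] += delta * e * y[i]   # the noisy regressor y_i biases this step
        b_hat[i] += delta * e

print("corrected gains after %d steps:" % T, (a_hat * alpha).round(3))
# The values keep decreasing as T grows; an instrumental variable removes this bias.
```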
Figure 8 illustrates the case discussed in Section 6.4, in which there are two reference sensors in the given SN. As predicted by Theorem 8, consensus is not achieved, and the calibration parameters of different nodes converge to different values, determined by Equation (38).

8. Conclusions

In this paper, we have consolidated the existing results on distributed recursive blind macro-calibration based on consensus, presented the algorithms in a unified way, and provided additional analysis of several important theoretical and practical issues. The studied algorithms are completely decentralized and require no fusion center. It was shown that the algorithms perform successfully on lossy sensor networks characterized by unreliable communications restricted to local neighborhoods. Convergence properties have been presented both under noiseless conditions and under conditions typical for noisy and unreliable environments. The practically important case of asynchronous communication based on a broadcast gossip scheme has also been treated. Extensive discussions have been provided, explaining in detail several important practical issues: convergence rate, design of tunable network weights, and calibration of sensor networks with multiple reference nodes. Extensive simulation results illustrating the behavior of the algorithms have been presented.

Future Work

The presented results can be extended in several directions.
The first direction, which arises naturally, is to extend the presented algorithms, or develop new ones, for the case in which the nodes measure spatially varying but correlated signals. Treating this situation would drastically increase the practical applicability of the described methodology. The performance of such a scheme would depend strongly on the a priori knowledge about the interrelatedness of the measurements of different nodes.
Another direction for future work is to extend the results to the case in which individual nodes measure vector-valued quantities (instead of the scalars treated in this paper). This case arises in the practically frequent situations in which there are multiple diverse sensors on each node, and each sensor possibly measures a different (overlapping) subset of all the available sensed values. One possible idea for treating these situations is to use the extensive existing literature on overlapping decentralized estimation (e.g., [20,73,74,75,76]). A typical example where this situation arises is a network of cameras with different view angles and scene coverage.
A third direction emerges if one interprets the above results not in the context of calibration, but as pure synchronization (consensus) results for certain special types of linear time-varying systems. Indeed, the derivations and results of the presented convergence theorems open up the possibility of extending them to the synchronization of general higher-order linear parameter-varying systems, a topic of high theoretical importance (e.g., [77] and the references therein).
Finally, sensor networks often operate in hostile environments where battery usage and power consumption of the sensor nodes are of crucial importance. In this context, it would be of high importance to analyze, for a given sensor network mission, how to formulate an optimization problem and achieve an optimal compromise between sensor network calibration and the nominal operation dictated by the mission.

Author Contributions

Conceptualization, M.S.S., S.S.S., K.H.J., M.B. and L.M.C.-M.; Methodology, M.S.S., S.S.S., K.H.J., M.B. and L.M.C.-M.; Simulations, M.S.S. and S.S.S.; Validation, M.S.S., S.S.S., K.H.J., M.B. and L.M.C.-M.; Formal Analysis, M.S.S., S.S.S., K.H.J.; Resources, M.S.S., S.S.S., K.H.J., M.B. and L.M.C.-M.; Writing—Original Draft Preparation, M.S.S., S.S.S., K.H.J., M.B. and L.M.C.-M.; Writing—Review & Editing, M.S.S., S.S.S., K.H.J., M.B. and L.M.C.-M.; Visualization, M.S.S.; Funding Acquisition, M.S.S., S.S.S., K.H.J., M.B. and L.M.C.-M.

Funding

This work was partially supported by Fundação para a Ciência e a Tecnologia under Grant CEECIND/02902/2017, Project UID/EEA/00066/2013, Project foRESTER PCIF/SSI/0102/2017, and Program Investigador FCT under Grant IF/00325/2015. The work by K. H. Johansson was supported in part by the Knut and Alice Wallenberg Foundation, the Swedish Research Council, and the Swedish Foundation for Strategic Research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kim, K.D.; Kumar, P.R. Cyber–physical systems: A perspective at the centennial. Proc. IEEE 2012, 100, 1287–1308. [Google Scholar]
  2. Holler, J.; Tsiatsis, V.; Mulligan, C.; Avesand, S.; Karnouskos, S.; Boyle, D. From Machine-to-Machine to the Internet of Things: Introduction to a New Age of Intelligence; Academic Press: Cambridge, MA, USA, 2014. [Google Scholar]
  3. Akyildiz, I.F.; Vuran, M.C. Wireless Sensor Networks; John Wiley & Sons: Hoboken, NJ, USA, 2010. [Google Scholar]
  4. Gharavi, H.; Kumar, S.P. Special issue on sensor networks and applications. Proc. IEEE 2003, 91, 1151–1256. [Google Scholar] [CrossRef]
  5. Akyildiz, I.F.; Su, W.; Sankarasubramaniam, Y.; Cayirci, E. Wireless sensor networks: A survey. Comput. Netw. 2002, 38, 393–422. [Google Scholar] [CrossRef]
  6. Speranzon, A.; Fischione, C.; Johansson, K.H. Distributed and collaborative estimation over wireless sensor networks. In Proceedings of the 45th IEEE Conference on Decision and Control, San Diego, CA, USA, 13–15 December 2006; pp. 1025–1030. [Google Scholar]
  7. Tomic, S.; Beko, M.; Dinis, R. Distributed RSS-AoA Based Localization with Unknown Transmit Powers. IEEE Wirel. Commun. Lett. 2016, 5, 392–395. [Google Scholar] [CrossRef]
  8. Tomic, S.; Beko, M.; Dinis, R.; Montezuma, P. Distributed algorithm for target localization in wireless sensor networks using RSS and AoA measurements. Pervasive Mob. Comput. 2017, 37, 63–77. [Google Scholar] [CrossRef]
  9. Tomic, S.; Beko, M.; Dinis, R. Distributed RSS-Based Localization in Wireless Sensor Networks Based on Second-Order Cone Programming. Sensors 2014, 14, 18410–18432. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  10. Whitehouse, K.; Culler, D. Calibration as parameter estimation in sensor networks. In Proceedings of the 1st ACM International Workshop on Wireless sensor networks and applications, Atlanta, GA, USA, 28 September 2002; pp. 59–67. [Google Scholar]
  11. Whitehouse, K.; Culler, D. Macro-calibration in sensor/actuator networks. Mob. Netw. Appl. 2003, 8, 463–472. [Google Scholar] [CrossRef]
  12. Balzano, L.; Nowak, R. Blind calibration of sensor networks. In Proceedings of the 6th International Conference on Information Processing in Sensor Networks, Cambridge, MA, USA, 25–27 April 2007; pp. 79–88. [Google Scholar]
  13. Stanković, M.S.; Stanković, S.S.; Johansson, K.H. Distributed Blind Calibration in Lossy Sensor Networks via Output Synchronization. IEEE Trans. Autom. Control 2015, 60, 3257–3262. [Google Scholar] [CrossRef]
  14. Stanković, M.S.; Stanković, S.S.; Johansson, K.H. Asynchronous Distributed Blind Calibration of Sensor Networks under Noisy Measurements. IEEE Trans. Control Netw. Syst. 2018, 5, 571–582. [Google Scholar] [CrossRef]
  15. Stanković, M.S.; Stanković, S.S.; Johansson, K.H. Distributed Macro Calibration in Sensor Networks. In Proceedings of the 20th Mediterranean Conference on Control & Automation (MED), Barcelona, Spain, 3–6 July 2012; pp. 1049–1054. [Google Scholar]
  16. Stanković, M.S.; Stanković, S.S.; Johansson, K.H. Distributed Calibration for Sensor Networks under Communication Errors and Measurement Noise. In Proceedings of the IEEE 51st IEEE Conference on Decision and Control (CDC), Maui, HI, USA, 10–13 December 2012; pp. 1380–1385. [Google Scholar]
  17. Stanković, M.S.; Stanković, S.S.; Johansson, K.H. A consensus-based distributed calibration algorithm for sensor networks. Serb. J. Electr. Eng. 2016, 13, 111–132. [Google Scholar] [CrossRef]
  18. Olfati-Saber, R.; Fax, A.; Murray, R. Consensus and cooperation in networked multi-agent systems. Proc. IEEE 2007, 95, 215–233. [Google Scholar] [CrossRef]
  19. Ohta, Y.; Siljak, D. Overlapping block diagonal dominance and existence of Lyapunov functions. J. Math. Anal. Appl. 1985, 112, 396–410. [Google Scholar] [CrossRef]
  20. Šiljak, D.D. Decentralized Control of Complex Systems; Academic Press: New York, NY, USA, 1991. [Google Scholar]
  21. Pierce, I.F. Matrices with dominating diagonal blocks. J. Econ. Theory 1974, 9, 159–170. [Google Scholar] [CrossRef]
  22. Levin, A.; Weiss, Y.; Durand, F.; Freeman, W. Understanding and evaluating blind deconvolution algorithms. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1964–1971. [Google Scholar]
  23. Yu, C.; Xie, L. On Recursive Blind Equalization in Sensor Networks. IEEE Trans. Signal Process. 2015, 63, 662–672. [Google Scholar] [CrossRef]
  24. Nandi, A. Blind Estimation Using Higher-Order Statistics; Kluwer Academic Publishers: Boston, MA, USA, 1999. [Google Scholar]
  25. Balzano, L.; Nowak, R. Blind Calibration; Technical Report TR-UCLA-NESL-200702-01; Networked and Embedded Systems Laboratory, UCLA: Los Angeles, CA, USA, 2007. [Google Scholar]
  26. Lipor, J.; Balzano, L. Robust blind calibration via total least squares. In Proceedings of the 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, 4–9 May 2014; pp. 4244–4248. [Google Scholar]
  27. Bilen, C.; Puy, G.; Gribonval, R.; Daudet, L. Convex Optimization Approaches for Blind Sensor Calibration Using Sparsity. IEEE Trans. Signal Process. 2014, 62, 4847–4856. [Google Scholar] [CrossRef] [Green Version]
  28. Bychkovskiy, V.; Megerian, S.; Estrin, D.; Potkonjak, M. A Collaborative Approach to In-Place Sensor Calibration. In Proceedings of the International Conference on Information Processing in Sensor Networks, Palo Alto, CA, USA, 22–23 April 2003; pp. 301–316. [Google Scholar]
  29. Wang, C.; Ramanathan, P.; Saluja, K.K. Moments Based Blind Calibration in Mobile Sensor Networks. In Proceedings of the 2008 IEEE International Conference on Communications, Beijing, China, 19–23 May 2008; pp. 896–900. [Google Scholar]
  30. Takruri, M.; Challa, S.; Yunis, R. Data Fusion Techniques for Auto Calibration in Wireless Sensor Networks. In Proceedings of the International Conference on Information Fusion, Seattle, WA, USA, 6–9 July 2009; pp. 132–139. [Google Scholar]
  31. Wang, Y.; Yang, A.; Li, Z.; Wang, P.; Yang, H. Blind drift calibration of sensor networks using signal space projection and Kalman filter. In Proceedings of the 2015 IEEE Tenth International Conference on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), Singapore, 7–9 April 2015; pp. 1–6. [Google Scholar]
  32. Wang, Y.; Yang, A.; Li, Z.; Chen, X.; Wang, P.; Yang, H. Blind Drift Calibration of Sensor Networks Using Sparse Bayesian Learning. IEEE Sens. J. 2016, 16, 6249–6260. [Google Scholar] [CrossRef]
  33. Wang, Y.; Yang, A.; Chen, X.; Wang, P.; Wang, Y.; Yang, H. A Deep Learning Approach for Blind Drift Calibration of Sensor Networks. IEEE Sens. J. 2017, 17, 4158–4171. [Google Scholar] [CrossRef] [Green Version]
  34. Yang, J.; Tay, W.P.; Zhong, X. A dynamic Bayesian nonparametric model for blind calibration of sensor networks. In Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA, 5–9 March 2017; pp. 4207–4211. [Google Scholar]
  35. Lee, B.; Son, S.; Kang, K. A Blind Calibration Scheme Exploiting Mutual Calibration Relationships for a Dense Mobile Sensor Network. IEEE Sens. J. 2014, 14, 1518–1526. [Google Scholar] [CrossRef]
  36. Dorffer, C.; Puigt, M.; Delmaire, G.; Roussel, G. Blind Calibration of Mobile Sensors Using Informed Nonnegative Matrix Factorization. In Latent Variable Analysis and Signal Separation; Vincent, E., Yeredor, A., Koldovský, Z., Tichavský, P., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 497–505. [Google Scholar]
  37. Dorffer, C.; Puigt, M.; Delmaire, G.; Roussel, G. Blind mobile sensor calibration using an informed nonnegative matrix factorization with a relaxed rendezvous model. In Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 20–25 March 2016; pp. 2941–2945. [Google Scholar]
  38. Schulke, C.; Caltagirone, F.; Zdeborova, L. Blind sensor calibration using approximate message passing. J. Stat. Mech. Theory Exp. 2015, 2015. [Google Scholar] [CrossRef] [Green Version]
  39. Kumar, D.; Rajasegarar, S.; Palaniswami, M. Geospatial Estimation-Based Auto Drift Correction in Wireless Sensor Networks. ACM Trans. Sens. Netw. 2015, 11, 50. [Google Scholar] [CrossRef]
  40. Stanković, M.S.; Stanković, S.S.; Johansson, K.H. Distributed drift estimation for time synchronization in lossy networks. In Proceedings of the 24th Mediterranean Conference on Control and Automation (MED), Athens, Greece, 21–24 June 2016; pp. 779–784. [Google Scholar]
  41. Stanković, M.S.; Stanković, S.S.; Johansson, K.H. Distributed time synchronization for networks with random delays and measurement noise. Automatica 2018, 93, 126–137. [Google Scholar] [CrossRef] [Green Version]
  42. Giridhar, D.; Kumar, P.R. Distributed clock synchronization over wireless networks: Algorithms and analysis. In Proceedings of the IEEE Conference on Decision and Control, San Diego, CA, USA, 13–15 December 2006; pp. 263–270. [Google Scholar]
  43. Sommer, P.; Wattenhofer, R. Gradient clock synchronization in wireless sensor networks. In Proceedings of the 2009 International Conference on Information Processing in Sensor Networks, San Francisco, CA, USA, 13–16 April 2009; pp. 37–48. [Google Scholar]
  44. Schenato, L.; Fiorentin, F. Average TimeSynch: A consensus-based protocol for time synchronization in wireless sensor networks. Automatica 2011, 47, 1878–1886. [Google Scholar] [CrossRef]
  45. Carli, R.; Chiuso, A.; Schenato, L.; Zampieri, S. Optimal Synchronization for Networks of Noisy Double Integrators. IEEE Trans. Autom. Control 2008, 56, 1146–1152. [Google Scholar] [CrossRef]
  46. Liao, C.; Barooah, P. Distributed clock skew and offset estimation from relative measurements in mobile networks with Markovian switching topologies. Automatica 2013, 49, 3015–3022. [Google Scholar] [CrossRef]
  47. Stanković, M.S.; Stanković, S.S.; Johansson, K.H. Distributed time synchronization in lossy wireless sensor networks. In Proceedings of the 3rd IFAC Workshop on Distributed Estimation and Control in Networked Systems, Santa Barbara, CA, USA, 13–14 September 2012; pp. 25–30. [Google Scholar]
  48. Ravazzi, C.; Frasca, P.; Tempo, R.; Ishii, H. Ergodic randomized algorithms and dynamics over networks. IEEE Trans. Control Netw. Syst. 2015, 2, 78–87. [Google Scholar] [CrossRef]
  49. Carron, A.; Todescato, M.; Carli, R.; Schenato, L. An asynchronous consensus-based algorithm for estimation from noisy relative measurements. IEEE Trans. Control Netw. Syst. 2014, 1, 283–295. [Google Scholar] [CrossRef]
  50. Bolognani, S.; Favero, S.D.; Schenato, L.; Varagnolo, D. Consensus-based distributed sensor calibration and least-square parameter identification in WSNs. Int. J. Robust Nonlinear Control 2010, 20, 176–193. [Google Scholar] [CrossRef] [Green Version]
  51. Miluzzo, E.; Lane, N.D.; Campbell, A.T.; Olfati-Saber, R. CaliBree: A Self-calibration System for Mobile Sensor Networks. In Proceedings of the 4th IEEE International Conference on Distributed Computing in Sensor Systems, Santorini Island, Greece, 11–14 June 2008; pp. 314–331. [Google Scholar]
  52. Ramakrishnan, N.; Ertin, E.; Moses, R.L. Gossip-Based Algorithm for Joint Signature Estimation and Node Calibration in Sensor Networks. IEEE J. Sel. Top. Signal Process. 2011, 5, 665–673. [Google Scholar] [CrossRef]
  53. Buadhachain, S.O.; Provan, G. A model-based control method for decentralized calibration of wireless sensor networks. In Proceedings of the 2013 American Control Conference, Washington, DC, USA, 17–19 June 2013; pp. 6571–6576. [Google Scholar]
  54. Ren, W.; Beard, R. Consensus seeking in multi-agent systems under dynamically changing interaction topologies. IEEE Trans. Autom. Control 2005, 50, 655–661. [Google Scholar] [CrossRef]
  55. Ljung, L.; Söderström, T. Theory and Practice of Recursive Identification; MIT Press: Cambridge, MA, USA, 1983. [Google Scholar]
  56. Xiao, Y.; Xiong, Z.; Niyato, D.; Han, Z. Distortion minimization via adaptive digital and analog transmission for energy harvesting-based wireless sensor networks. In Proceedings of the IEEE Global Conference on Signal and Information Processing (GlobalSIP), Orlando, FL, USA, 14–16 December 2015; pp. 518–521. [Google Scholar]
  57. Chen, H.F. Stochastic Approximation and Its Applications; Kluwer Academic: Dordrecht, The Netherlands, 2002. [Google Scholar]
  58. Li, T.; Zhang, J.F. Consensus conditions of multi agent systems with time varying topologies. IEEE Trans. Autom. Control 2010, 55, 2043–2056. [Google Scholar] [CrossRef]
  59. Huang, M.; Manton, J.H. Stochastic consensus seeking with noisy and directed inter-agent communications: Fixed and randomly varying topologies. IEEE Trans. Autom. Control 2010, 55, 235–241. [Google Scholar] [CrossRef]
  60. Ljung, L. System Identification—Theory for the User; Prentice Hall International: Englewood Cliffs, NJ, USA, 1989. [Google Scholar]
  61. Söderström, T.; Stoica, P. System Identification; Prentice Hall International: Hemel Hempstead, UK, 1989. [Google Scholar]
  62. Aysal, T.C.; Yildiz, M.E.; Sarwate, A.D.; Scaglione, A. Broadcast gossip algorithms for consensus. IEEE Trans. Signal Process. 2009, 57, 2748–2761. [Google Scholar] [CrossRef]
  63. Nedić, A. Asynchronous broadcast-based convex optimization over a network. IEEE Trans. Autom. Control 2011, 56, 1337–1351. [Google Scholar] [CrossRef]
  64. Bradley, R.C. Basic Properties of Strong Mixing Conditions. A Survey and Some Open Questions. Probab. Surv. 2005, 2, 107–144. [Google Scholar] [CrossRef]
  65. Ibragimov, I. Some limit theorems for stochastic processes stationary in the strict sense. Dokl. Akad. Nauk SSSR 1959, 125, 711–714. [Google Scholar]
  66. Rosenblatt, M. A central limit theorem and a strong mixing condition. Proc. Natl. Acad. Sci. USA 1956, 42, 43–47. [Google Scholar] [CrossRef] [PubMed]
  67. Godsil, C.; Royle, G. Algebraic Graph Theory; Springer: New York, NY, USA, 2001. [Google Scholar]
  68. Borkar, V.; Meyn, S.P. The ODE method for convergence of stochastic approximation and reinforcement learning. SIAM J. Control Optim. 2000, 38, 447–469. [Google Scholar] [CrossRef]
  69. Stanković, M.S.; Stanković, S.S.; Johansson, K.H. Distributed Offset Correction for Time Synchronization in Networks with Random Delays. In Proceedings of the European Control Conference, Limassol, Cyprus, 12–15 June 2018. [Google Scholar]
  70. Tian, Y.P.; Zong, S.; Cao, Q. Structural modeling and convergence analysis of consensus-based time synchronization algorithms over networks: Non-topological conditions. Automatica 2016, 65, 64–75. [Google Scholar] [CrossRef]
  71. Chen, J.; Yu, Q.; Zhang, Y.; Chen, H.H.; Sun, Y. Feedback-Based Clock Synchronization in Wireless Sensor Networks: A Control Theoretic Approach. IEEE Trans. Veh. Technol. 2010, 59, 2963–2973. [Google Scholar] [CrossRef]
  72. Stanković, M. Distributed Asynchronous Consensus-based Algorithm for Blind Calibration of Sensor Networks with Autonomous Gain Correction. IET Control Theory Appl. 2018, 12, 2287–2293. [Google Scholar] [CrossRef]
  73. Stanković, S.S.; Stanković, M.S.; Stipanović, D.M. Consensus based overlapping decentralized estimator. IEEE Trans. Autom. Control 2009, 54, 410–415. [Google Scholar] [CrossRef]
  74. Stanković, S.S.; Stanković, M.S.; Stipanović, D.M. Consensus Based Overlapping Decentralized Estimation with Missing Observations and Communication Faults. Automatica 2009, 45, 1397–1406. [Google Scholar] [CrossRef]
  75. Stanković, M.S.; Stanković, S.S.; Stipanović, D.M. Consensus-based decentralized real-time identification of large-scale systems. Automatica 2015, 60, 219–226. [Google Scholar] [CrossRef]
  76. Stanković, S.S.; Šiljak, D.D. Model abstraction and inclusion principle: A comparison. IEEE Trans. Autom. Control 2001, 8, 816–832. [Google Scholar] [CrossRef]
  77. Seyboth, G.; Schmidt, G.; Allgöwer, F. Output synchronization of linear parameter-varying systems via dynamic couplings. In Proceedings of the IEEE 51st IEEE Conference on Decision and Control (CDC), Maui, HI, USA, 10–13 December 2012; pp. 5128–5133. [Google Scholar]
Figure 1. An example sensor network used in smart-city applications with decentralized communication topology. The inter-node communication is performed according to the depicted directed graph. The introduced distributed calibration algorithm achieves asymptotic calibration of all the sensor nodes in the network without using any type of fusion center.
Figure 2. An example sensor network used in smart-city applications with multiple (four) reference nodes. The reference nodes (RNs) have fixed calibration parameters: only the rest of the nodes implement the given distributed sensor calibration recursions.
Figure 3. Noiseless synchronous algorithm without references: convergence to consensus is achieved for corrected gains and corrected offsets.
Figure 4. Noiseless synchronous algorithm with one reference sensor: convergence to the reference is achieved.
Figure 5. The modified algorithm (25): convergence to consensus is achieved for corrected gains and corrected offsets despite measurement noise presence.
Figure 6. The asynchronous algorithm based on instrumental variables without reference sensors: convergence to consensus is achieved for corrected gains and corrected offsets.
Figure 7. Stochastic gradient algorithm: convergence to consensus is not achieved.
Figure 8. The asynchronous algorithm with two reference sensors with different characteristics: both the corrected gains and the corrected offsets converge to different values determined by (38).
