Article

The Algorithms of Distributed Learning and Distributed Estimation about Intelligent Wireless Sensor Network

College of Information Engineering, Shanghai Maritime University, 1550 Haigang Ave, Shanghai 201306, China
Sensors 2020, 20(5), 1302; https://doi.org/10.3390/s20051302
Submission received: 22 November 2019 / Revised: 15 February 2020 / Accepted: 20 February 2020 / Published: 27 February 2020
(This article belongs to the Section Sensor Networks)

Abstract

The intelligent wireless sensor network is a distributed network system with a high degree of "network awareness". Each intelligent node (agent) is connected to others through the topology of its neighborhood; it can not only perceive the surrounding environment, but also adjust its own behavior according to its local perception information so as to construct distributed learning algorithms. Therefore, three basic intelligent network topologies, centralized, non-cooperative, and cooperative, are intensively investigated in this paper. The main contributions of the paper include two aspects. First, based on algebraic graph theory, three basic theoretical frameworks for distributed learning and distributed parameter estimation under the cooperative strategy are surveyed: the incremental strategy, the consensus strategy, and the diffusion strategy. Second, based on classical adaptive learning algorithms and online updating laws, the implementation process of distributed estimation algorithms and the latest research progress on the above three distributed strategies are investigated.

1. Introduction

With the development of automation and wireless communication technology, data acquisition, processing, transmission, control, and storage have become not only convenient and fast, but also safe and reliable, even in complicated networks [1,2]. Combining distributed computing, computer science, automatic control theory, wireless sensors, and microelectronics manufacturing, the intelligent network is a kind of large-scale distributed network system that integrates data awareness, intelligent learning, dynamic optimization, and wireless data communication [3,4]. There are many kinds of intelligent networks, which mainly include wireless sensor networks, cognitive sensing networks, cognitive radio networks, cognitive radar, smart grids, multi-robot networks, and so on [5]. Intelligent networks are an emerging interdisciplinary field, which not only involves machine learning, artificial intelligence, sensor networks, cognitive radio, pattern recognition, and optimization theory, but also has very high application prospects and research value [6,7].
In an intelligent wireless sensor network (IWSN), intelligent nodes (agents) are connected through their neighborhoods within the network topology. Therefore, real-time online information processing, data analysis, and dynamic optimization are accomplished by the agents [8,9]. An intelligent network is a highly intelligent distributed network system in which each agent is capable of self-recognition, self-judgment, and self-adjustment through the network topology [10]. At the individual level, an intelligent network requires that each agent can not only perceive its environmental information, but also communicate through its connections to other agents to realize ongoing reinforcement learning [11]. In terms of the organization of system characteristics, each agent can adjust its behavior according to its local "awareness" information [12].
Intelligent nodes, also referred to as autonomous agents, mainly consist of sensors, processors, and actuators. With the aid of algebraic graph theory, these nodes not only communicate over the network topology, but also exchange information within their neighborhoods and accomplish global network tasks by online learning [13]. Thus, IWSN nodes operating in a distributed manner have "network awareness", from which complicated intelligent network systems are constructed. This "network awareness" has four main characteristics [3]: (1) data awareness, (2) spatial awareness, (3) group awareness, and (4) context awareness. Therefore, combining machine learning with computational intelligence algorithms, the IWSN integrates the technologies of parameter estimation, adaptive filtering, machine learning, online sparse kernel algorithms, system identification, cooperative control of multi-agent systems, distributed optimization, differential games, deep reinforcement learning, and intelligent data analysis.
The IWSN has the characteristics of scalability and expandability [14]. Machine learning and artificial intelligence algorithms have become the new frontier for analyzing intelligent networks [15]. Therefore, revolutionary tools for big data processing are critical to expanding the territory of this intelligent system paradigm [16]. The intelligent network is also robust with respect to communication nodes and communication links [17].
There is a large amount of data in an IWSN, which has the following characteristics: large data volume, various data types, and low value density. Furthermore, the collection, analysis, storage, and fast processing of real-time data can be conducted by agents equipped with intelligent sensors in the intelligent network [18]. In a big data environment, agents constantly share and diffuse information through the network topology, which allows them not only to monitor uncertain real-time data, but also to adjust the network topology dynamically. Therefore, for large volumes of heterogeneous data, distributed estimation and distributed learning are important research directions, in which machine learning can efficiently mine the laws hidden in big data to design distributed dynamic optimization algorithms [19].
With the extensive application of various algorithms in IWSNs, the amount and complexity of data have increased correspondingly [20]. Modern computing equipment is a very complex system that can store more complex data and deal with larger amounts of data, so the computational complexity also increases simultaneously [21]. In addition, the various programs running in an IWSN can process many complex data streams to learn prediction models and to extract the inherent model from noisy data. In order to ensure data resolution and avoid data loss, it is necessary to analyze the data flow in an IWSN in real time [22]. Therefore, massive, multidimensional, and high-speed data streams make it impractical to use existing learning and filtering algorithms to analyze real-time data [23].
In terms of big data analysis, behavioral pattern recognition, and information evolution, the reason for the rapid development of machine learning is its ability to describe potential relationships among large-scale data, which can effectively solve all kinds of complex problems [24,25,26]. Furthermore, based on the potential mapping relationship between expected output data and input data, commonly used machine learning algorithms can be divided into three types: unsupervised learning, supervised learning, and reinforcement learning [27]. With the advent of the era of cloud computing and big data, training inefficiency can be alleviated by the large increase in computing power, and the risk of over-fitting can be reduced by the large increase in training data [28]. Therefore, representative large-scale learning algorithms such as broad learning [29] and deep learning [30] have been attracting more and more attention. Moreover, as a type of reinforcement learning algorithm, adaptive dynamic programming has achieved gratifying results in the design of optimal control systems [31,32,33] and other control-related issues [34,35,36,37,38].
The IWSN has highly cognitive functions [39,40]. Using the distributed collaboration of agents, each intelligent node is able to communicate with the other intelligent nodes in its neighborhood, gain access to environmental information, and process and manage a large amount of real-time data in the intelligent network [41,42]. Thus, the relevant distributed estimation algorithms can be designed, and the sparse processing and dynamic optimization of the IWSN can be realized [43]. Furthermore, the quickness, real-time performance, accuracy, and reliability of data transmission can be ensured simultaneously while realizing network connectivity [44,45].
In conclusion, by integrating signal processing, wireless sensor networks, machine learning, data-sparsification algorithms, dynamic optimization, and control theory, investigating and developing distributed learning and distributed estimation over networks has become an important problem in practical engineering applications [46,47]. Furthermore, in the real environment of an IWSN, distributed estimation algorithms should be able to deal with several uncertainty phenomena, which mainly include dynamic changes of the topological structure, quantization errors, communication link failures, packet losses, serious distortion of communication channels, and inter-symbol interference (ISI) [48,49].
Therefore, it is urgent to analyze the latest research progress on cooperation, non-cooperation, real-time adaptation, online learning, self-healing, and self-organization in intelligent networks. On this basis, the scientific results completed in recent years are surveyed and future research directions are discussed.
The theoretical framework of "distributed learning and distributed estimation" includes the following points. (1) Three basic intelligent network topologies for distributed learning and parameter estimation are intensively investigated: the centralized topology, the non-cooperative topology, and the cooperative topology. (2) Based on algebraic graph theory, three basic cooperative strategies are presented: the incremental strategy, the consensus strategy, and the diffusion strategy. (3) Based on classical adaptive learning algorithms and online updating equations, the algorithm implementation process and the latest research progress of the above three distributed strategies are intensively studied.
The structure of the paper is as follows. Section 1 introduces the intelligent network. In Section 2, the main distributed topologies of intelligent networks are discussed. Section 3 introduces the three distributed strategies and their online updating rules. Section 4 describes the implementation of distributed learning algorithms in detail. Finally, the conclusion and future research perspectives are given in Section 5.

2. Basic Topologies of IWSN

Because of power supply, computational complexity, communication bandwidth, and other limited resources, the IWSN has been restricted in practical applications, and this has affected its future development [50]. Thus, in order to solve these problems, an IWSN should be designed according to the following principles [51].
(1) Development of intelligent sensor nodes with awareness, so that each agent can conduct self-identification and self-judgment.
(2) Development of distributed adaptive dynamic optimization algorithms that are capable of online learning and distributed estimation over the intelligent network.
(3) Development of ad hoc networks with awareness, so that the connectivity of the intelligent network can be guaranteed under a dynamic topology.
The IWSN inevitably faces various dynamic problems in practical applications. Under the conditions of limited resources and limited time, an IWSN needs to solve a series of problems in complex real-time dynamic environments, including coordination, conflict resolution, network resource allocation, and task scheduling [52]. By building an intelligent network system based on intelligent sensors, each agent can update its environmental information constantly. However, disturbances of the environmental information can cause dynamic changes in the multi-agent behavior and even lead to instability [53]. Thus, under the influence of various factors, the intelligent network will undergo structural changes that cause dislocation of the network or information loss [54].
Based on the communication topology, there are three basic structures of an IWSN: the centralized topology, the non-cooperative topology, and the cooperative topology, which are shown in Figure 1. Figure 1a shows the centralized topology; Figure 1b,c show the non-cooperative topology and the cooperative topology, respectively.

2.1. Centralized Topology of IWSN

In the centralized topology, the intelligent network has a data fusion center, and each intelligent node sends its data to the fusion center. The characteristic of data fusion is centralized data processing, in which the various data transmitted by the intelligent nodes are collected and processed [55]. The fusion center then performs the computations and sends the processed data back to each agent. The centralized topology is shown in Figure 1a.
While the centralized structure has a powerful information processing center and effective transmission of data over the topology, the centralized topology has its limitations [56]. First of all, in a real-time communication system, agents collect a large amount of data continually and exchange data with the data fusion center. Because these communications are all wireless, they require sophisticated dynamic source routing, so the manufacturing cost is very high. Secondly, because of privacy and secrecy requirements, an agent in a highly intelligent wireless sensor system will not share all of its own data with the data fusion center. More importantly, the centralized topology has a critical flaw: when the data fusion center is faulty, the data cannot be transmitted timely and effectively, which directly causes the whole network system to break down.

2.2. Cooperative Topology and Non-Cooperative Topology

In practical engineering, the IWSN is generally designed with a distributed topology structure. In a distributed topology, the agents are linked with each other through a certain topological structure, which ensures effective information sharing and information transport between agents and their neighbors. In general, distributed topologies can be divided into two categories: the cooperative topology and the non-cooperative topology [50].
In the non-cooperative topology, the agents are all independent of each other and pursue their own objectives. Each agent processes its own data and determines its behavior by itself [57,58].
However, most of today's existing distributed networks, such as the internet, smart grids, traffic networks, wireless sensor networks (WSNs), and biological information networks, operate in a cooperative way [59]. For a real network system, on the one hand, the intelligent network adopts a cooperative strategy to improve system optimality, to enhance network robustness, and to strengthen the fault self-recovery ability; the cooperative strategy is also more humanized and personalized in terms of privacy and confidentiality. On the other hand, at decentralized locations, each agent can more easily obtain a large amount of online data, which increases the distributed information processing capacity of the network. Furthermore, using the distributed topology, agents can process data for data analysis and data mining, which not only improves the learning ability of the network, but also provides a very effective method for distributed estimation in the IWSN [60].
The IWSN with distributed cooperative strategies is shown in Figure 2, where Figure 2a shows the incremental strategy and Figure 2b shows the diffusion strategy.

3. Cooperative Distributed Estimation Strategy

Algebraic graph theory is an important branch of graph theory, which mainly uses algebraic methods and results to study problems formulated in graph theory [61]. Algebraic graph theory is therefore the theoretical basis for the analysis of IWSNs: each intelligent node (agent) is a vertex of the graph, and an edge of the graph represents the communication link between two agents [62]. In order to study the topological construction and topological properties of intelligent networks, several matrices associated with a graph are introduced, which mainly include the adjacency matrix, the incidence matrix, and the Laplacian matrix [63]. Furthermore, one of the main research areas in algebraic graph theory is whether and how the topological properties of graphs are reflected in the algebraic properties of these matrices, so that the algebraic properties can be studied by using matrix theory [64], and the topological properties of the IWSN can then be obtained. In particular, based on algebraic graph theory, research on consensus protocols and cooperative control of multi-agent network systems is a hot and difficult topic in intelligent network systems [65].
In an IWSN, each intelligent node (agent) can not only perform data collection and data mining, but can also conduct distributed information processing [64]. An agent collects all kinds of online data related to its own parameters, its observation noise, and the data of the other agents connected to it in the topology for online parameter estimation [66]. In this way, if each agent can obtain the data of the whole network, the distributed estimation algorithm can accurately estimate the various parameters. Obviously, the effective implementation of a distributed estimation algorithm mainly depends on the cooperation strategies among agents. Within the structure of the cooperative strategy, the existing literature shows that there are three fundamental distributed estimation frameworks: incremental strategies, diffusion strategies, and consensus strategies [50].

3.1. The Problem of Distributed Estimation

Consider an IWSN with $N$ intelligent nodes (agents) distributed over a spatial domain, labeled $k = 1, 2, \dots, N$, as shown in Figure 3. The topology of the IWSN is modeled as an undirected graph $G$, with node set $V$ and edge set $\varepsilon$. Agent $l$ is called a neighbor of agent $k$ if agents $k$ and $l$ can receive information from each other, that is, $(l,k) \in \varepsilon$. The neighborhood of agent $k$ is denoted by $N_k$, the set of nodes connected to node $k$: $N_k = \{\, l : (l,k) \in \varepsilon \,\}$. The agents in the network estimate an unknown $M \times 1$ vector $\theta^*$. At each time $i$, each agent $k$ collects a zero-mean scalar measurement $d_k(i)$ and a zero-mean $1 \times M$ regression vector $\xi_k(i)$ with a positive-definite covariance matrix $R_{\xi,k} = E[\xi_k^T(i)\,\xi_k(i)] > 0$, where $E$ denotes mathematical expectation. The data $\{d_k(i), \xi_k(i)\}$ are assumed to satisfy the linear regression model [67]
$$ d_k(i) = \xi_k(i)\,\theta^* + \varepsilon_k(i), \qquad i \geq 0, \quad k = 1, \dots, N, \qquad (1) $$
where $\varepsilon_k(i)$ is measurement noise with zero mean and variance $\sigma_{\varepsilon,k}^2 = E[\varepsilon_k^2(i)]$, which is assumed to be temporally white and spatially independent, and $\theta^*$ is the optimal estimator.
The regressors $\xi_k(i)$ are assumed to be temporally white and spatially independent, that is,
$$ E[\xi_k^T(i)\,\xi_l(j)] = R_{\xi,k}\,\delta_{kl}\,\delta_{ij} \qquad (2) $$
in terms of the Kronecker delta function
$$ \delta_{kl} = \begin{cases} 1, & k = l \\ 0, & k \neq l, \end{cases} \qquad (3) $$
where the noise $\varepsilon_k(i)$ and the regressors $\xi_l(j)$ are assumed to be independent of each other for all $k, l, i, j$.
The mean-square-error (MSE) cost function associated with each agent $k$ is defined as [68]
$$ J_k(\theta) = E\,\big| d_k(i) - \xi_k(i)\,\theta \big|^2. \qquad (4) $$
The main objective of the intelligent network is to estimate $\theta^*$ over a distributed topology by an online learning process. Therefore, to estimate $\theta^*$, the agents should minimize the following global cost function:
$$ J^{glob}(\theta) \triangleq \sum_{k=1}^{N} E\,\big| d_k(i) - \xi_k(i)\,\theta \big|^2. \qquad (5) $$
Supposing that each individual cost function $J_k(\theta)$ is convex and that the processes $d_k(i)$ and $\xi_k(i)$ are jointly stationary, the unique global minimizer of (5) is the well-known Wiener filter estimate
$$ \theta^* = \Big( \sum_{k=1}^{N} R_{\xi,k} \Big)^{-1} \Big( \sum_{k=1}^{N} r_{d\xi,k} \Big), \qquad (6) $$
where $r_{d\xi,k} \triangleq E[d_k(i)\,\xi_k^T(i)]$; this is the optimal estimate sought by every agent $k$ [69].
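To make Equation (6) concrete, the following is a minimal numerical sketch in Python/NumPy (with randomly generated moments rather than data from the paper) of forming the Wiener solution from the per-agent moments $R_{\xi,k}$ and $r_{d\xi,k}$.

```python
import numpy as np

# Minimal sketch of the Wiener solution in Equation (6), assuming the per-agent
# moments R_{xi,k} and r_{dxi,k} are known (here generated at random).
np.random.seed(0)
N, M = 20, 10                        # number of agents, parameter dimension

theta_true = np.random.randn(M)      # the unknown vector theta*
R, r = [], []                        # covariance matrices and cross-covariance vectors
for k in range(N):
    B = np.random.randn(M, M)
    R_k = B @ B.T + np.eye(M)        # positive-definite covariance R_{xi,k}
    R.append(R_k)
    r.append(R_k @ theta_true)       # r_{dxi,k} = R_{xi,k} theta* for the noise-free moments

# theta* = (sum_k R_{xi,k})^{-1} (sum_k r_{dxi,k})
theta_star = np.linalg.solve(sum(R), sum(r))
print(np.allclose(theta_star, theta_true))   # True: the Wiener solution recovers theta*
```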

3.2. Noncooperative Distributed Estimation Strategy

Based on the traditional stochastic steepest-descent algorithm, agent $k$ determines its solution according to [57,58]
$$ \theta_k(i) = \theta_k(i-1) - \mu_k\, \nabla_{\theta} J_k\big(\theta_k(i-1)\big), \qquad (7) $$
where $\mu_k > 0$ is a constant step-size parameter of agent $k$ and $\nabla_{\theta} J_k(\cdot)$ is the gradient vector of $J_k(\theta)$ with respect to $\theta$. At time $i$, $\theta_k(i)$ denotes the estimate of $\theta^*$ by agent $k$. Under the non-cooperative topology, each agent attempts to estimate $\theta^*$ by itself. Starting from any initial condition $\theta_k(0)$, the gradient-descent recursion satisfies
$$ \theta_k(i) = \theta_k(i-1) + \mu_k \big[ r_{d\xi,k} - R_{\xi,k}\,\theta_k(i-1) \big], \qquad i \geq 0. \qquad (8) $$
In order to ensure the convergence of the non-cooperative recursive learning algorithm, $\mu_k$ is selected within the interval $\big(0,\, 2/\lambda_{\max}(R_{\xi,k})\big)$.
Since the moments $r_{d\xi,k}$ and $R_{\xi,k}$ are usually unavailable, it is necessary to find a new approach that permits each agent to approximate the moments $\{r_{d\xi,k}, R_{\xi,k}\}$ [70].
In general, one of the simplest methods is to use the following instantaneous approximations:
$$ R_{\xi,k} \approx \xi_k^T(i)\,\xi_k(i), \qquad r_{d\xi,k} \approx d_k(i)\,\xi_k^T(i). \qquad (9) $$
Thus, the corresponding stochastic-gradient recursion satisfies
$$ \theta_k(i) = \theta_k(i-1) + \mu_k\, \xi_k^T(i-1) \big[ d_k(i-1) - \xi_k(i-1)\,\theta_k(i-1) \big], \qquad (10) $$
which is the well-known least-mean-squares (LMS) adaptive algorithm.
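As an illustration, the following is a minimal sketch of the non-cooperative recursion (10), in which every agent runs its own LMS filter and never exchanges information; the network size, noise level, and step-size below are arbitrary choices for the example, not values taken from the paper.

```python
import numpy as np

# Minimal sketch of the non-cooperative LMS recursion in Equation (10):
# every agent adapts on its own data stream and never communicates.
np.random.seed(1)
N, M, T = 10, 5, 5000
mu = 0.01
theta_true = np.random.randn(M)

theta = np.zeros((N, M))                                 # theta_k(i) for every agent
for i in range(T):
    for k in range(N):
        xi = np.random.randn(M)                          # regression vector xi_k(i)
        d = xi @ theta_true + 0.1 * np.random.randn()    # d_k(i) = xi_k(i) theta* + eps_k(i)
        theta[k] += mu * xi * (d - xi @ theta[k])        # LMS update of agent k

print(np.linalg.norm(theta - theta_true, axis=1))        # per-agent estimation error
```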

3.3. Cooperative Distributed Strategy

For cooperative strategies, agents are permitted to interact with their neighbors. In this way, the global optimization problem of IWSN can be defined as
$$ \underset{\theta}{\text{minimize}} \quad J^{glob}(\theta) \triangleq \sum_{k=1}^{N} J_k(\theta) = \sum_{k=1}^{N} E\,\big| d_k(i) - \xi_k(i)\,\theta \big|^2, \qquad (11) $$
for which θ * is a unique global optimal solution.
In general, there are three types of cooperative strategies for the intelligent network: the incremental strategy, the consensus strategy, and the diffusion strategy [71].

3.3.1. Incremental Strategy

In the incremental strategy, there is a cyclic path through the intelligent network that visits the agents in the order $1$ to $N$. The signal is transmitted from one intelligent node to the next node along this cycle until all nodes have been visited. The topology of the incremental strategy is shown in Figure 4a. Thus, in the fully distributed incremental solution, each node only has access to the signal from its local neighbor on the cycle [72,73].
The incremental strategy for online learning is therefore as follows. For each time instant $i \geq 0$, the fictitious boundary condition $\theta_0(i) = \theta(i-1)$ is set. As the signal cycles over the intelligent nodes $k = 1, 2, \dots, N$, intelligent node $k$ receives $\theta_{k-1}(i)$ from its preceding neighbor $k-1$. At this point, the updating rule of intelligent node $k$ satisfies
$$ \theta_k(i) = \theta_{k-1}(i) - \frac{\mu}{N}\, \nabla_{\theta^T} \widehat{J_k}\big(\theta_{k-1}(i)\big), \qquad (12) $$
where $\mu > 0$ is a small step-size, and $\theta(i) = \theta_N(i)$ is set at the end of the cycle. In the above incremental strategy, the true gradient vector $\nabla_{\theta^T} J_k(\cdot)$ is replaced by an instantaneous approximation $\nabla_{\theta^T} \widehat{J_k}(\cdot)$ [74,75,76,77]. The implementation of the incremental strategy for distributed learning is given in Algorithm 1.
Algorithm 1: Incremental strategy for distributed learning.
1: for each time $i \geq 0$ do
2:   set the fictitious boundary condition $\theta_0(i) \leftarrow \theta(i-1)$;
3:   cycle over the intelligent nodes $k = 1, 2, \dots, N$:
4:     intelligent node $k$ receives $\theta_{k-1}(i)$ from its preceding neighbor $k-1$;
5:     intelligent node $k$ performs online learning according to Equation (12);
6:   end
7:   $\theta(i) \leftarrow \theta_N(i)$;
8: end for
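The following is a minimal sketch of Algorithm 1 with an LMS-type instantaneous gradient; the cycle order, noise level, and step-size are illustrative assumptions rather than values from the paper.

```python
import numpy as np

# Minimal sketch of Algorithm 1: at each instant i the estimate is passed along
# the cycle 1 -> 2 -> ... -> N and each node applies Equation (12) with an
# instantaneous (LMS-type) gradient approximation.
np.random.seed(2)
N, M, T = 10, 5, 3000
mu = 0.05
theta_true = np.random.randn(M)

theta = np.zeros(M)                              # theta(i-1), the shared estimate
for i in range(T):
    psi = theta.copy()                           # fictitious boundary condition theta_0(i)
    for k in range(N):                           # cycle over the nodes
        xi = np.random.randn(M)
        d = xi @ theta_true + 0.1 * np.random.randn()
        psi += (mu / N) * xi * (d - xi @ psi)    # node k updates the incoming estimate
    theta = psi                                  # theta(i) <- theta_N(i)

print(np.linalg.norm(theta - theta_true))        # estimation error after T cycles
```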

3.3.2. Consensus Strategy

In the consensus strategy, each agent $k$ performs two steps at each iteration $i$: (1) it aggregates the iterates of its neighbors; (2) it updates this aggregated value by the negative of an approximate gradient vector evaluated at its own existing iterate. The topology of the consensus strategy for the online learning algorithm is shown in Figure 4b.
At each time instant $i \geq 0$, each intelligent node $k = 1, 2, \dots, N$ performs the consensus updating rule [78]
$$ \begin{aligned} \psi_k(i+1) &= \sum_{l \in N_k} a_{lk}\, \theta_l(i) \\ \theta_k(i+1) &= \psi_k(i+1) - \mu_k\, \nabla_{\theta^T} \widehat{J_k}\big(\theta_k(i)\big). \end{aligned} \qquad (13) $$
For each agent $k = 1, 2, \dots, N$, the combination coefficients $a_{lk}$ are nonnegative scalars that satisfy $a_{lk} \geq 0$, $\sum_{l=1}^{N} a_{lk} = 1$, and $a_{lk} = 0$ if $l \notin N_k$ [79].
These conditions mean that the combination matrix $A = [a_{lk}]$ satisfies $A^T \mathbf{1} = \mathbf{1}$, where $\mathbf{1}$ denotes the vector with all entries equal to one; thus, $A$ is called a left-stochastic matrix [80,81,82]. The implementation of the consensus strategy for distributed learning is shown in Algorithm 2.
Algorithm 2: Consensus strategy for distributed learning.
1: for each time $i \geq 0$ do
2:   based on its neighborhood $N_k$, intelligent node $k$ computes the combination coefficients $a_{lk}$;
3:   each intelligent node $k = 1, 2, \dots, N$ performs online learning according to Equation (13);
4: end for
It should be noted that the consensus protocol in networked multi-agent systems is defined by the rules for the interaction between an agent and its adjacent agents in the exchange of information. That is, as time evolves, the states of all agents in the multi-agent system tend to the same value [52,65].
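A minimal sketch of Algorithm 2 is given below, assuming a simple ring topology and uniform (averaging) combination weights; these choices, together with the noise level and step-size, are illustrative and not taken from the paper.

```python
import numpy as np

# Minimal sketch of the consensus recursion (13) on a ring of N agents:
# combine the neighbors' previous iterates with left-stochastic weights a_{lk},
# then take an LMS-type gradient step evaluated at the agent's own previous iterate.
np.random.seed(3)
N, M, T = 10, 5, 3000
mu = 0.01
theta_true = np.random.randn(M)

# ring topology: each agent is connected to itself and its two ring neighbors
A = np.zeros((N, N))
for k in range(N):
    for l in (k - 1, k, (k + 1) % N):
        A[l, k] = 1.0
A /= A.sum(axis=0)                                # columns sum to one (left-stochastic)

theta = np.zeros((N, M))
for i in range(T):
    psi = A.T @ theta                             # psi_k = sum_l a_{lk} theta_l
    for k in range(N):
        xi = np.random.randn(M)
        d = xi @ theta_true + 0.1 * np.random.randn()
        grad = -xi * (d - xi @ theta[k])          # instantaneous gradient at theta_k(i)
        theta[k] = psi[k] - mu * grad

print(np.linalg.norm(theta - theta_true, axis=1))
```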

3.3.3. Diffusion Strategy

Generally speaking, there are two basic forms of distributed estimators with the diffusion strategy: the adapt-then-combine (ATC) structure and the combine-then-adapt (CTA) structure [50,60,69,71].
The topologies of the diffusion strategies for the online learning algorithm are shown in Figure 5: Figure 5a shows the ATC strategy, and Figure 5b shows the CTA strategy.
Let $N_k$ denote the neighborhood of agent $k$. The optimal estimate under the ATC diffusion strategy is obtained by solving Equation (11). For each time instant $i \geq 0$, the online learning algorithm of each agent $k = 1, 2, \dots, N$ with the ATC diffusion strategy satisfies
$$ \begin{aligned} \psi_k(i+1) &= \theta_k(i) - \mu_k \sum_{l \in N_k} c_{lk}\, \nabla_{\theta} \widehat{J_l}\big(\theta_k(i)\big) \\ \theta_k(i+1) &= \sum_{l \in N_k} a_{lk}\, \psi_l(i+1). \end{aligned} \qquad (14) $$
Furthermore, the online learning algorithm of each agent $k = 1, 2, \dots, N$ with the CTA diffusion strategy satisfies
$$ \begin{aligned} \psi_k(i) &= \sum_{l \in N_k} a_{lk}\, \theta_l(i) \\ \theta_k(i+1) &= \psi_k(i) - \mu_k \sum_{l \in N_k} c_{lk}\, \nabla_{\theta} \widehat{J_l}\big(\psi_k(i)\big), \end{aligned} \qquad (15) $$
where $\nabla_{\theta} \widehat{J_l}(\cdot)$ is an approximation of the true gradient vector $\nabla_{\theta} J_l(\cdot)$ and $\mu_k$ is a small constant step-size parameter.
Algorithm 3: Diffusion strategy for distributed learning (ATC).
1: for each time $i \geq 0$ do
2:   based on its neighborhood $N_k$, intelligent node $k$ computes the combination coefficients $a_{lk}$ and $c_{lk}$;
3:   each intelligent node $k = 1, 2, \dots, N$ performs online learning according to Equation (14);
4: end for
Algorithm 4: Diffusion strategy for distributed learning (CTA).
1: for each time $i \geq 0$ do
2:   based on its neighborhood $N_k$, intelligent node $k$ computes the combination coefficients $a_{lk}$ and $c_{lk}$;
3:   each intelligent node $k = 1, 2, \dots, N$ performs online learning according to Equation (15);
4: end for
The diffusion-strategy algorithms for distributed learning are shown in Algorithm 3 and Algorithm 4: Algorithm 3 is the ATC implementation, and Algorithm 4 is the CTA implementation.
In addition, $a_{lk}$ and $c_{lk}$ are non-negative coefficients that satisfy the following conditions:
$$ \begin{aligned} a_{lk} &\geq 0, \quad \sum_{l=1}^{N} a_{lk} = 1, \quad a_{lk} = 0 \ \text{if}\ l \notin N_k, \\ c_{lk} &\geq 0, \quad \sum_{k=1}^{N} c_{lk} = 1, \quad c_{lk} = 0 \ \text{if}\ l \notin N_k. \end{aligned} \qquad (16) $$
Furthermore, if the coefficients $\{a_{lk}\}$ and $\{c_{lk}\}$ are collected into the $N \times N$ matrices $C \triangleq [c_{lk}]$ and $A \triangleq [a_{lk}]$, then $C$ is a right-stochastic matrix and $A$ is a left-stochastic matrix.
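The following is a minimal sketch of the ATC diffusion recursion (14), under the simplifying assumption $C = I_N$ (each agent uses only its own gradient) and a ring topology with averaging weights; these choices are illustrative.

```python
import numpy as np

# Minimal sketch of the ATC diffusion recursion (14) with C = I and averaging
# combination weights a_{lk} on a ring: first adapt locally, then combine.
np.random.seed(4)
N, M, T = 10, 5, 3000
mu = 0.01
theta_true = np.random.randn(M)

# ring topology and averaging rule: a_{lk} = 1/|N_k| for l in N_k
A = np.zeros((N, N))
for k in range(N):
    for l in (k - 1, k, (k + 1) % N):
        A[l, k] = 1.0
A /= A.sum(axis=0)

theta = np.zeros((N, M))
for i in range(T):
    psi = np.empty_like(theta)
    for k in range(N):                            # adaptation step
        xi = np.random.randn(M)
        d = xi @ theta_true + 0.1 * np.random.randn()
        psi[k] = theta[k] + mu * xi * (d - xi @ theta[k])
    theta = A.T @ psi                             # combination step: theta_k = sum_l a_{lk} psi_l

print(np.linalg.norm(theta - theta_true, axis=1))
```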

3.4. The Differences among Three Distributed Estimation Algorithms

Based on the non-cooperative strategy, the consensus strategy, and the diffusion strategy, a unifying online parameter estimation recursion can describe all of the above strategies [50]. In terms of three sets of coefficients $\{a_{0,lk}\}$, $\{a_{1,lk}\}$, $\{a_{2,lk}\}$, the unifying recursion can be written as [71]
$$ \begin{aligned} \phi_k(i) &= \sum_{l \in N_k} a_{1,lk}\, \theta_l(i) \\ \psi_k(i+1) &= \sum_{l \in N_k} a_{0,lk}\, \phi_l(i) - \mu_k\, \nabla_{\theta} \widehat{J_k}\big(\phi_k(i)\big) \\ \theta_k(i+1) &= \sum_{l \in N_k} a_{2,lk}\, \psi_l(i+1), \end{aligned} \qquad (17) $$
where $\phi_k(i)$ and $\psi_k(i+1)$ are $M \times 1$ intermediate variables.
In addition, $A_0 = [a_{0,lk}]$, $A_1 = [a_{1,lk}]$, and $A_2 = [a_{2,lk}]$ are $N \times N$ matrices with non-negative entries that simultaneously satisfy the conditions in [79]. Thus, $A_0$, $A_1$, and $A_2$ are left-stochastic matrices, satisfying $A_0^T \mathbf{1} = \mathbf{1}$, $A_1^T \mathbf{1} = \mathbf{1}$, and $A_2^T \mathbf{1} = \mathbf{1}$. Furthermore, when $l \notin N_k$, the combination weights $a_{0,lk}$, $a_{1,lk}$, and $a_{2,lk}$ are equal to zero. Therefore, by defining the matrix product $P = A_0 A_1 A_2$, the different distributed strategies can be obtained by selecting different matrices $A_0$, $A_1$, $A_2$. The online updating equations of the three distributed strategies then differ as follows [71]:
Non-cooperative: $A_1 = A_0 = A_2 = I_N \;\Rightarrow\; P = I_N$;
Consensus: $A_0 = A$, $A_1 = I_N = A_2 \;\Rightarrow\; P = A$;
CTA diffusion: $A_1 = A$, $A_2 = I_N = A_0 \;\Rightarrow\; P = A$;
ATC diffusion: $A_2 = A$, $A_1 = I_N = A_0 \;\Rightarrow\; P = A$.
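A minimal sketch of this unifying view is given below: it builds the product $P = A_0 A_1 A_2$ for each choice of matrices and checks that $P$ remains left-stochastic; the fully connected averaging matrix $A$ used here is only an example.

```python
import numpy as np

# Minimal sketch of how the choice of (A0, A1, A2) in Equation (17) recovers
# the different strategies; A is any left-stochastic combination matrix.
N = 10
I = np.eye(N)
A = np.full((N, N), 1.0 / N)          # example: fully connected network with averaging weights

strategies = {
    "non-cooperative": (I, I, I),     # (A0, A1, A2) -> P = I
    "consensus":       (A, I, I),     # A0 = A       -> P = A
    "CTA diffusion":   (I, A, I),     # A1 = A       -> P = A
    "ATC diffusion":   (I, I, A),     # A2 = A       -> P = A
}
for name, (A0, A1, A2) in strategies.items():
    P = A0 @ A1 @ A2
    print(name, np.allclose(P.T @ np.ones(N), np.ones(N)))   # P stays left-stochastic
```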

3.5. Extension Analysis of Distributed Estimation Algorithm

The above-mentioned distributed learning and estimation algorithms for the IWSN are based on a constant step-size, so that the agents can continuously carry out online adaptive learning and distributed estimation from the data streams. Furthermore, the extended algorithms associated with the IWSN are basically developed from these three strategies. Therefore, several other important aspects of existing distributed algorithms for intelligent networks also need to be studied further.

3.5.1. Distributed Strategy with Sparse and Regularization

In the IWSN, each agent has access to real-time online data through its sensors [83]. The unknown parameters of the system are identified from input-output data so as to construct a recursive algorithm, to search for the optimization strategy, and to design adaptive dynamic optimization algorithms [84]. Because there is a large amount of online data in the IWSN, when the number of observation sample points of the intelligent network increases, the problems of online data sparsification and regularization have to be solved [85,86].

3.5.2. Gossip Strategy

In practical IWSNs, and especially in mobile ad hoc networks, the network topology changes dynamically, so an agent can select a subset of its neighbors for learning [87]. In this way, each agent avoids exchanging information with all the other agents in its neighborhood without interruption at every moment [88]. Therefore, distributed algorithms may be designed to determine which, and how many, subsets within the neighborhood are selected, and to share data with the other agents through the selected links. A simple strategy, which is called the gossip algorithm, is to randomly select one neighbor at each time [89,90].
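As an illustration of the gossip idea, the following sketch (with a hypothetical ring topology) lets each agent average its iterate with one randomly chosen neighbor per instant; it only shows the randomized pairwise exchange, not a full gossip-based estimation algorithm.

```python
import numpy as np

# Minimal sketch of gossip-style communication: at every instant each agent
# exchanges its iterate with only one randomly chosen neighbor instead of
# with its whole neighborhood.
np.random.seed(5)
N, M = 10, 5
neighbors = {k: [(k - 1) % N, (k + 1) % N] for k in range(N)}   # ring, for illustration
theta = np.random.randn(N, M)

for i in range(100):
    for k in range(N):
        l = np.random.choice(neighbors[k])        # gossip: pick one neighbor at random
        avg = 0.5 * (theta[k] + theta[l])         # pairwise averaging of the two iterates
        theta[k] = avg
        theta[l] = avg

print(theta.std(axis=0))                           # the iterates contract toward a common value
```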

3.5.3. Asynchronous Strategy

Because the topology of the intelligent network is dynamic, there are many uncertain factors, including random data arrival times, random link communication faults, and communication delays. Therefore, the distributed learning and distributed estimation of intelligent networks cannot achieve full synchronization, and it is necessary to design distributed estimation algorithms under an asynchronous strategy [91,92].

3.5.4. Distributed Strategy with Noise

In an intelligent network, the influence of noise is inevitable in the process of information exchange among agents [93]. To establish a mathematical model with noisy links, a distributed estimation algorithm with noise can be designed by adding a noise component to the iterative algorithm [94,95].
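A minimal sketch of such a noisy-link model is shown below: the iterate received from a neighbor is perturbed by additive noise before the combination step; the noise level and the helper function are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a noisy-link combination step: the iterate agent k receives
# from neighbor l is corrupted by additive link noise before it is combined.
np.random.seed(6)
sigma_link = 0.05                                   # assumed link-noise standard deviation

def noisy_combine(theta, A, sigma=sigma_link):
    """Combination step psi_k = sum_l a_{lk} (theta_l + v_{lk}) with link noise v_{lk}."""
    N, M = theta.shape
    psi = np.zeros_like(theta)
    for k in range(N):
        for l in range(N):
            if A[l, k] > 0:
                received = theta[l] + sigma * np.random.randn(M)   # noisy copy of theta_l
                psi[k] += A[l, k] * received
    return psi

# usage: four agents, fully connected with averaging weights
theta = np.zeros((4, 3))
A = np.full((4, 4), 0.25)
print(noisy_combine(theta, A))                       # combined iterates perturbed by link noise
```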

3.5.5. Distributed Kalman Filter

In an intelligent network, the transmitted signal is inevitably affected by external interference and by noise internal to the equipment [96]. In order to obtain the useful signals and to suppress the noise, a distributed filtering algorithm needs to be designed [97]. The distributed Kalman filter is a real-time recursive algorithm based on the statistical characteristics of the system noise and the observation noise; the system observations are used as the input of the filter, and the required estimate (the state or parameters of the system) is taken as the output, where input and output are connected through the time-update and observation-update steps of the algorithm [98]. Thus, the useful signals are estimated online according to the state equation and the observation equation of the system [99].
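For reference, the following is a minimal single-node sketch of the time-update/measurement-update pair described above; a distributed (e.g., diffusion or consensus) Kalman filter would additionally fuse the corresponding quantities of the neighbors after this local step. The model matrices in the usage example are arbitrary.

```python
import numpy as np

# Minimal single-node sketch of one Kalman filter recursion: a time update
# (prediction) using the state equation, followed by an observation update
# using the new measurement y.
def kalman_step(x, P, y, F, H, Q, R):
    # time update (prediction)
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # observation (measurement) update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)              # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# usage on an arbitrary 2-state model
x, P = np.zeros(2), np.eye(2)
F, H = np.eye(2), np.eye(2)
Q, R = 0.01 * np.eye(2), 0.1 * np.eye(2)
x, P = kalman_step(x, P, y=np.array([1.0, 2.0]), F=F, H=H, Q=Q, R=R)
print(x)
```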

3.5.6. Distributed Bayesian Learning

For the Bayesian learning problem in wireless sensor networks, references [10,13,19] systematically studied how to solve the problem by using the variational Bayes method in a distributed environment. For the problem of Bayesian inference and estimation over a network, reference [10] proposed a general framework of distributed variational Bayesian algorithms for conjugate exponential-family models. For the joint sparse signal recovery problem in sensor networks, a distributed variational Bayesian algorithm based on quantized communication and an inexact alternating direction method (ADM) was proposed in [19]. This algorithm not only saves communication traffic, but also achieves fairly good estimation performance and fast convergence speed.

3.6. Example

A mobile IWSN with a strongly connected topology of N = 20 agents is constructed in Figure 6. The ad hoc WSN is generated by a random dynamic network topology over the unit square. Moreover, the mean-square deviation (MSD) of the stochastic gradient algorithm is defined as the steady-state mean-square value of the error variance after a sufficient number of iterations [71].
It is assumed that all agents have uniform step-sizes and uniform regression covariance matrices, $R_{\xi,k} = R_{\xi}$ for $k = 1, 2, \dots, N$, and that the target vector $\theta^*$ has size $M = 10$. All agents are uniformly subject to white Gaussian noise with variance $\sigma_{\varepsilon,k}^2 = 10^{-2}$, and all agents employ the same step-size $\mu = 0.003$.
According to the averaging rule [69], the combination weights $a_{lk}$ are selected as
$$ a_{lk} = \begin{cases} 1/n_k, & l \in N_k \\ 0, & l \notin N_k, \end{cases} \qquad (18) $$
where $n_k \triangleq |N_k|$ is the size of the neighborhood of agent $k$ (its degree).
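A minimal sketch of building the averaging-rule weights (18) from an adjacency matrix is shown below; the small example graph is only for illustration.

```python
import numpy as np

# Minimal sketch of the averaging rule (18): build a_{lk} = 1/n_k from an
# adjacency matrix in which every node is also its own neighbor.
def averaging_weights(adjacency):
    """adjacency[l, k] = 1 if l is a neighbor of k; self-loops are added automatically."""
    adjacency = np.asarray(adjacency, dtype=float)
    np.fill_diagonal(adjacency, 1.0)
    return adjacency / adjacency.sum(axis=0)         # column k divided by n_k = |N_k|

A = averaging_weights(np.array([[0, 1, 0],
                                [1, 0, 1],
                                [0, 1, 0]]))
print(A.T @ np.ones(3))                              # each column sums to one (left-stochastic)
```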
Based on [71], by executing Algorithm 2, Algorithm 3, and Algorithm 4, the corresponding distributed learning curves of the three cooperative strategies, ATC diffusion, CTA diffusion, and consensus, are shown in Figure 7: Figure 7a shows the learning curves of the cooperative strategies, and Figure 7b shows the learning curves of any two agents.
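As a sketch of how such learning curves can be produced, the following helper (run here on dummy data) averages the squared deviation $\|\theta^* - \theta_k(i)\|^2$ over agents and independent runs and reports the network MSD in dB; the array-shape convention is an assumption for the example, not from the paper.

```python
import numpy as np

# Minimal sketch of computing a network MSD learning curve: average the squared
# estimation error over all agents and over independent experiments, in dB.
def network_msd_db(errors):
    """errors: array of shape (runs, iterations, N, M) holding theta* - theta_k(i)."""
    per_agent = np.sum(errors ** 2, axis=-1)              # squared deviation of each agent
    return 10.0 * np.log10(per_agent.mean(axis=(0, 2)))   # average over runs and agents

# usage on dummy data: 5 runs, 100 iterations, 20 agents, M = 10
curve = network_msd_db(np.random.randn(5, 100, 20, 10) * 0.01)
print(curve.shape)                                         # (100,): one MSD value per iteration
```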
Furthermore, in real-time applications and experimental tests, the impact of the wireless network channel concerns how to compensate interference effectively at the receiver, especially inter-symbol interference (ISI), in order to reduce the bit error rate (BER) of the system, that is, how to equalize the distorted channel in the wireless sensor network effectively [100,101]. Therefore, based on the above distributed strategies, and combined with blind and non-blind algorithms, the channel estimation and equalization theory of wireless sensor networks will be a very interesting research direction [102].

4. The Main Results of the Distributed Estimation

In [72], the cooperative mechanism of the adaptive incremental strategy is investigated, and future research directions are also discussed. Based on the affine projection algorithm, an adaptive incremental learning algorithm is designed in [73], and its implementation process in the intelligent network is analyzed. For spatially distributed networks, two kinds of distributed estimation algorithms are designed in [74]: the incremental least-mean-squares (ILMS) algorithm and the spatial least-mean-squares (SLMS) algorithm, and the advantages and disadvantages of each algorithm are also discussed. In [77], an incremental subgradient algorithm for constrained convex optimization is investigated and its convergence is certified mathematically.
In the consensus strategy, the agents negotiate through the network topology to bring their individual states to a common expected value. Consensus originates from the field of biology; in computer science, it is the theoretical basis for distributed computing and algorithm implementation. For wireless sensor networks, a consensus distributed estimation algorithm is designed in [71], which improves the estimation accuracy while guaranteeing convergence. Based on the consensus protocol, distributed estimation algorithms with link noise for ad hoc networks are investigated in [66] and [67], respectively. By solving a convex optimization problem, these distributed estimation algorithms not only improve the estimation precision of the network signal, but also restrain the disturbance of the noise effectively. A distributed $H_\infty$ filter with the consensus strategy is designed in [103], which can effectively suppress the tracking error of the network signal through algorithm iteration. Furthermore, the problem of distributed Kalman filtering is also investigated in [104].
Compared with the other two strategies, in the IWSN the diffusion strategy has better convergence, makes better use of the information collected from the agents' local neighborhoods, has stronger robustness with respect to node and communication link failures, and is easier to implement as a distributed algorithm over the topology. Depending on how the distributed estimation error is constructed, the diffusion strategy has different forms of algorithm implementation. Based on the mean-square error (MSE) criterion, distributed least-mean-squares (DLMS) estimation for the intelligent network is investigated in [105], where the diffusion DLMS algorithm is studied and its optimality is also analyzed. On this basis, a diffusion LMS algorithm for time-varying parameters is designed in [106], together with proofs of convergence and optimality. Furthermore, the implementation conditions and the solution method of the diffusion LMS algorithm are studied in [107], where the stability and convergence of the algorithm are analyzed further.
Based on the recursive least-squares (RLS) algorithm, a distributed RLS estimation algorithm for the IWSN is investigated in [85], which guarantees the global optimization of the IWSN. For complex intelligent network systems, the RLS algorithm is more suitable for online estimation; thus, an RLS algorithm with a local diffusion strategy is studied in [108]. The stability and convergence of the diffusion RLS algorithm are investigated in [109], and the algorithm is verified in an ad hoc network with noisy links in [86]. In the field of Kalman filtering, a distributed Kalman filter with the diffusion strategy is designed, and the convergence and stability of the algorithm are investigated, in [98]. After comparing the diffusion strategy and the consensus strategy for the two kinds of distributed estimation algorithms, reference [110] points out that the diffusion strategy outperforms the consensus strategy in convergence speed and stability.
The IWSN is an adaptive network system that has the capabilities of sensing, analysis, learning, judging, decision-making, and awareness [111]. In the process of online learning, the intelligent network not only obtains information about the environment in real time, but also accumulates knowledge and makes decisions [112]. Based on the adaptive diffusion strategy, a global optimization cost function over all nodes of the network is designed in [113]; through the interaction of agents within their neighborhoods, the algorithm successfully implements distributed optimization and online learning for the IWSN.
Because there is a large amount of online data in the IWSN, the process of distributed learning easily runs into the "curse of dimensionality". To avoid this, a sparse distributed estimation algorithm based on the diffusion LMS strategy is proposed in [83], and the validity of the algorithm is verified with two different penalty functions. Based on the diffusion strategy, a combined project-adapt protocol (CPAP) is designed in [114]. Using a robust-statistics loss function, the CPAP not only realizes the distributed estimation of the intelligent network, but also provides robustness against intelligent node connection failures. The implementation of distributed estimation requires the analysis and processing of a large amount of data in the intelligent network; to select valid data, a kind of data dimensionality reduction is designed in [115] to improve the execution efficiency of distributed algorithms. Through the conjugate function and dual decomposition, the problem of intelligent network learning is transformed into a distributed optimization problem in [116], and online dictionary learning for the IWSN is implemented using the diffusion strategy.
The problems of distributed estimation theory mainly include distributed algorithm design, the convergence of distributed algorithms, computational complexity, and sparse data algorithms [117]. On the basis of comparing the advantages and disadvantages of several kinds of distributed algorithms, distributed optimization algorithms are reviewed and further research directions of distributed optimization are discussed in [71]. Furthermore, a kind of distributed gradient algorithm is designed, and its convergence rate is studied, in [118]; the algorithm guarantees convergence to the common expected value for the total cost function of the IWSN, which equals the sum of the cost functions of all the nodes.
For the problem of network utility maximization, distributed Newton optimization algorithms for the intelligent network are designed using the matrix splitting method in [119] and [120]; the realization process and convergence characteristics of the distributed algorithms are also investigated in these two papers. Based on game theory, a diffusion LMS algorithm with the learning and self-optimization ability of the intelligent network is investigated in [121]; at the same time, a framework of adaptive game-learning theory is proposed, and the convergence and stability of the distributed estimation are analyzed. Based on graphical evolutionary games, with each intelligent node regarded as a player, not only is the data diffusion process investigated, but strategies of data evolution and data development are also analyzed in [122]. Furthermore, based on the diffusion strategy, the distributed Pareto optimization problem is also studied and multi-objective optimization of the IWSN is realized in [123].
In addition, an actual IWSN is a mixture of numerous networks and components. In this case, the internal clocks of the intelligent nodes cannot be completely synchronized, so the intelligent nodes cannot guarantee status updates at the same time. The asynchrony of an intelligent network has many negative effects, such as induced delays and switching topologies. Therefore, further research on asynchronous distributed estimation and optimization over intelligent networks is needed. An asynchronous learning model and stability conditions for the intelligent network are proposed in [124]; on this basis, the performance of the asynchronous mathematical model is analyzed in [125]. Finally, after a comprehensive comparison of the performance of synchronous and asynchronous algorithms, their respective advantages and disadvantages are pointed out in [126]. Based on event-driven theory, a kind of distributed optimization strategy is analyzed in [127]. In the process of executing the distributed optimization algorithm, each agent optimizes the common objective function through the cooperative strategy. When the wireless communication energy of the network is limited, the distributed algorithm not only ensures the convergence of the optimization process, but also extends the communication lifetime of the IWSN [128].

5. Conclusions and Future Perspective

Combining the characteristics of actual IWSNs with the development demands of distributed algorithms, this paper reviews the latest research achievements in distributed estimation and distributed learning in recent years. It is of great practical significance that the nature of distributed algorithms be intensively studied and understood. Furthermore, the main purpose of this paper is to further promote distributed estimation and dynamic optimization technology in practical engineering applications.
Although existing research shows that the distributed estimation of IWSNs has been well developed for linear estimation, such as the LMS algorithm and the RLS algorithm, there remain several problems of the IWSN to be investigated intensively, including the implementation of online nonlinear estimation, the computational complexity and "curse of dimensionality" of distributed estimation algorithms, and the impact of dynamic changes of the network topology on distributed estimation algorithms. These problems can be summarized as the following three basic scientific questions about the intelligent network: (1) how to realize a distributed adaptive mechanism over the IWSN; (2) how to carry out a distributed estimation algorithm over the IWSN; (3) how to achieve a distributed optimization approach over the IWSN.
In future research, distributed estimation and dynamic optimization should focus on the following directions.
(1) Because the intelligent network system is a distributed self-organizing system, online estimation and data updating are realized among agents through the network topology. How to realize distributed online estimation through the network topology is one of the important research directions under the non-cooperative strategy.
(2) There is a large amount of online data in an intelligent network; how to manage these data with online sparse algorithms, reduce the computational complexity, and realize distributed optimization are difficulties in the application of the IWSN.
(3) Based on statistical learning theory and online kernel learning, it is very important to handle the nonlinearity and uncertainty of the network and to accomplish online distributed kernel adaptive estimation algorithms for the IWSN.
(4) Based on the theory of Markov differential games, distributed robust optimization algorithms for the IWSN can be established, which not only avoid solving the Nash equilibrium directly, but also achieve the global optimum of the whole network system.
In view of the wide application of wireless sensors in IWSNs, the limited network resources are restricted by many factors, including power supply, data analysis, computational complexity, and communication bandwidth. Therefore, by integrating machine learning, distributed algorithms, control theory, deep reinforcement learning, parallel computation, online sparse algorithms, and dynamic optimization theory with engineering applications, distributed estimation and distributed learning have become new areas of scientific research on intelligent networks, which urgently require intensive study in the big data environment.

Author Contributions

All research work of the article was done by F.T. The author has read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (NSFC) under Grant 61673117.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Xue, L.; Wang, J.L.; Li, J.; Wang, Y.L.; Guan, X. Precoding design for energy efficiency maximization in MIMO half-duplex wireless sensor networks with SWIPT. Sensors 2019, 19, 923. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Chen, J.; Richard, C.; Sayed, A.H. Multitask diffusion adaptation over networks with common latent representations. IEEE J. Sel. Top. Signal Process. 2017, 11, 563–579. [Google Scholar] [CrossRef]
  3. Hao, Q.; Hu, F. Intelligent Sensor Networks the Integration of Sensor Networks, Signal Processing, and Machine Learning; CRC Press: Boca Raton, FL, USA, 2012. [Google Scholar]
  4. Alkaya, A.; Grimble, M.J. Experimental application of nonlinear minimum variance estimation for fault detection systems. Int. J. Syst. Sci. 2016, 47, 3055–3063. [Google Scholar] [CrossRef]
  5. Wang, J.; Yang, D.; Jiang, W.; Zhou, J. Semisupervised incremental support vector machine learning based on neighborhood kernel estimation. IEEE Trans. Syst. Man Cybern. Syst. 2017, 47, 2677–2687. [Google Scholar] [CrossRef]
  6. Ribeiro, A.; Schizas, I.D.; Roumeliotis, S.I.; Giannakis, G. Kalman filtering in wireless sensor networks. IEEE Control Syst. 2010, 30, 66–68. [Google Scholar]
  7. Mu, C.X.; Ni, Z.; Sun, C.Y.; He, H.B. Data-driven tracking control with adaptive dynamic programming for a class of continuous-time nonlinear systems. IEEE Trans. Cybern. 2017, 47, 1460–1470. [Google Scholar] [CrossRef]
  8. Ying, B.; Sayed, A.H. Performance limits of stochastic sub–gradient learning, Part I: Single-agent case. Signal Process. 2018, 144, 271–282. [Google Scholar] [CrossRef] [Green Version]
  9. Ying, B.; Sayed, A.H. Performance limits of stochastic sub–gradient learning, Part II: Multi-agent case. Signal Process. 2018, 144, 253–264. [Google Scholar] [CrossRef] [Green Version]
  10. Hua, J.H.; Li, C.G. Distributed variational bayesian algorithms over sensor networks. IEEE Trans. Signal Process. 2016, 64, 783–789. [Google Scholar] [CrossRef]
  11. Chen, Y.; Lü, J.H.; Yu, X.G.; Hill, D.J. Multi-agent systems with dynamical topologies: consensus and applications. IEEE Circuits Syst. Mag. 2013, 13, 21–34. [Google Scholar] [CrossRef] [Green Version]
  12. Zhu, S.; Chen, C.; Ma, X.; Yang, B.; Guan, X. Consensus basedestimation over relay assisted sensor networks for situation monitoring. IEEE J. Sel. Top. Signal Process. 2015, 9, 278–291. [Google Scholar] [CrossRef]
  13. Hua, J.H.; Li, C.G.; Shen, H.L. Distributed learning of predictive structures from multiple tasks over networks. IEEE Trans. Ind. Electron. 2017, 64, 4246–4256. [Google Scholar] [CrossRef]
  14. Grimble, M. Nonlinear minimum variance estimation for statedependent discrete-time systems. IET Signal Process. 2012, 6, 379–391. [Google Scholar] [CrossRef]
  15. Li, X.; Shi, Q.; Xiao, S.; Duan, S.; Chen, F. A robust diffusion minimum kernel risk-sensitive loss algorithm over multitask sensor networks. Sensors 2019, 19, 2339. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Slavakis, K.; Giannakis, G.; Mateos, G. Modeling and optimization for big data analytics: (Statistical) learning tools for our era of data deluge. IEEE Signal Process. Mag. 2014, 31, 18–31. [Google Scholar] [CrossRef]
  17. Wei, Q.; Liu, D.; Lewis, F.L. Optimal distributed synchronization control for continuous-time heterogeneous multiagent differential graphical games. Inf. Sci. 2015, 26, 96–113. [Google Scholar]
  18. Li, C.; Huang, S.; Liu, Y. Distributed TLS over multitask networks with adaptive intertask cooperation. IEEE Trans. Aerosp. Electron. Syst. 2016, 52, 3036–3052. [Google Scholar] [CrossRef]
  19. Hua, J.H.; Li, C. Distributed jointly sparse Bayesian learning with quantized communication. IEEE Trans. Signal Inf. Process. Netw. 2018, 4, 769–782. [Google Scholar] [CrossRef]
  20. Hinton, G.E.; Salakhutdinov, R.R. Reducing the dimensionality of data with neural networks. Science 2006, 313, 504–507. [Google Scholar] [CrossRef] [Green Version]
  21. LeCun, Y.; Bengio, Y.; Hinton, G.E. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  22. Yang, B.; Xi, J.X.; Yang, J.; Xue, L. An alignment method for strapdown inertial navigation systems assisted by doppler radar on a vehicle-borne moving base. Sensors 2019, 19, 4577. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  23. Hastie, T. The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd ed.; Springer: Berlin, Germany, 2016. [Google Scholar]
  24. Silver, D.; Huang, A.; Maddison, C.J.; Guez, A.; Sifre, L.; van den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. Mastering the game of Go with deep neural networks and tree search. Nature 2016, 529, 484–489. [Google Scholar] [CrossRef] [PubMed]
  25. Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A.A.; Veness, J.; Bellemare, M.G.; Graves, A.; Riedmiller, M.; Fidjeland, A.K.; Ostrovski, G.; et al. Human-level Control through Deep Reinforcement Learning. Nature 2015, 518, 529–533. [Google Scholar] [CrossRef] [PubMed]
  26. Silver, D.; Schrittwieser, J.; Simonyan, K.; Antonoglou, I.; Huang, A.; Guez, A.; Hubert, T.; Baker, L.; Lai, M.; Bolton, A.; et al. Mastering the game of Go without human knowledge. Nature 2017, 550, 354–359. [Google Scholar] [CrossRef] [PubMed]
  27. Kelleher, J.D.; Namee, B.M.; D’Arcy, A. Foundations of Machine Learning, 2nd ed.; The MIT Press: New York, NY, USA, 2018. [Google Scholar]
  28. Li, Z.; Wang, Y.; Zheng, W. Adaptive consensus-based unscented information filter for tracking target with maneuver and colored noise. Sensors 2019, 19, 3069. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  29. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; The MIT Press: New York, NY, USA, 2016. [Google Scholar]
  30. Sun, Y.L.; Yuan, Y.; Xu, Q.; Hua, C.; Guan, X. A mobile anchor node assisted RSSI localization scheme in underwater wireless sensor networks. Sensors 2019, 19, 4369. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  31. Liu, D.; Wei, Q.; Wang, D.; Yang, X.; Li, H. Adaptive Dynamic Programming with Applications in Optimal Control; Springer: New York, NY, USA, 2017. [Google Scholar]
  32. Lewis, F.L.; Liu, D. Reinforcement Learning and Approximate Dynamic Programming for Feedback Control; Wiley-IEEE Press: New York, NY, USA, 2012. [Google Scholar]
  33. Zhang, H.; Liu, D. Adaptive Dynamic Programming for Control: Algorithms and Stability; Springer: New York, NY, USA, 2012. [Google Scholar]
  34. Wei, Q.; Song, R.; Yan, P. Data-driven zero-sum neuro-optimal control for a class of continuous-time unknown nonlinear systems with disturbance using ADP. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 444–458. [Google Scholar] [CrossRef]
  35. Mu, C.; Tang, Y.; He, H.B. Improved sliding mode design for load frequency control of power system integrated an adaptive learning strategy. IEEE Trans. Ind. Electron. 2017, 64, 6742–6751. [Google Scholar] [CrossRef]
  36. Song, R.; Lewis, F.L.; Wei, Q. Off-policy integral reinforcement learning method to solve nonlinear continuous-time multiplayer nonzero-sum games. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 704–713. [Google Scholar] [CrossRef]
  37. Wei, Q.; Liu, D.; Lin, H. Value iteration adaptive dynamic programming for optimal control of discrete-time nonlinear systems. IEEE Trans. Cybern. 2016, 46, 840–853. [Google Scholar] [CrossRef]
  38. Mu, C.; Wang, D.; He, H.B. Novel iterative neural dynamic programming for data-based approximate optimal control design. Automatica 2017, 81, 240–252. [Google Scholar] [CrossRef] [Green Version]
  39. Yang, S.; Liu, Q.; Wang, J. Distributed optimization based on a multiagent system in the presence of communication delays. IEEE Trans. Syst. Man Cybern. Syst. 2017, 47, 717–728. [Google Scholar] [CrossRef]
  40. Yuan, G.M.; Yuan, W.Z.; Xue, L.; Xie, J.B.; Chang, H.L. Dynamic performance comparison of two Kalman filters for rate signal direct modeling and differencing modeling for combining a MEMS gyroscope array to improve accuracy. Sensors 2015, 15, 27590. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  41. Schizas, I.D.; Ribeiro, A.; Giannakis, G.B. Consensus in Ad Hoc WSNs with noisy links–Part I: distributed estimation of deterministic signals. IEEE Trans. Signal Process. 2008, 56, 350–364. [Google Scholar] [CrossRef]
  42. Yuan, K.; Ying, B.; Zhao, X.C.; Sayed, A.H. Exact diffusion for distributed optimization and learning - Part I: algorithm development. IEEE Trans. Signal Process. 2019, 67, 708–723. [Google Scholar] [CrossRef] [Green Version]
  43. Schizas, I.D.; Ribeiro, A.; Giannakis, G.B. Consensus in Ad Hoc WSNs with noisy links–Part II: distributed estimation and smoothing of random signals. IEEE Trans. Signal Process. 2008, 56, 1650–1666. [Google Scholar] [CrossRef]
  44. Yuan, K.; Ying, B.; Zhao, X.C.; Sayed, A.H. Exact diffusion for distributed optimization and learning - Part II: convergence analysis. IEEE Trans. Signal Process. 2019, 67, 724–739. [Google Scholar] [CrossRef] [Green Version]
  45. Luo, B.; Wu, H.N.; Li, H.X. Adaptive optimal control of highly dissipative nonlinear spatially distributed processes with neuro-dynamic programming. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 684–696. [Google Scholar]
  46. Ierardi, C.; Orihuela, L.; Jurado, I. Distributed estimation techniques for cyber-physical systems: a systematic review. Sensors 2019, 19, 4270. [Google Scholar] [CrossRef] [Green Version]
  47. Wei, Q.; Shi, G.; Song, R.Z.; Liu, Y. Adaptive dynamic programming-based optimal control scheme for energy storage systems with solar renewable energy. IEEE Trans. Ind. Electron. 2017, 64, 5468–5478. [Google Scholar] [CrossRef]
  48. Dehghannasiri, R.; Esfahani, M.S.; Dougherty, E.R. Intrinsically bayesian robust Kalman filter: an innovation process approach. IEEE Trans. Signal Process. 2017, 65, 783–798. [Google Scholar] [CrossRef]
  49. Liu, J.; Liu, Y.; Dong, K.; Ding, Z.; He, Y. A novel distributed state estimation algorithm with consensus strategy. Sensors 2019, 19, 2134. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  50. Nassif, R.; Vlaski, S.; Richard, C.; Sayed, A.H. A regularization framework for learning over multitask graphs. IEEE Signal Process. Lett. 2019, 26, 297–301. [Google Scholar] [CrossRef]
  51. Cheng, L.; Hou, Z.; Tan, M.; Wang, X. Necessary and sufficient conditions for consensus of double-integrator multi-agent systems with measurement noises. IEEE Trans. Autom. Contr. 2011, 56, 1958–1963. [Google Scholar] [CrossRef]
  52. Olfati-Saber, R.; Fax, J.A.; Murray, R.M. Consensus and cooperation in networked multi-agent systems. Proc. IEEE 2007, 95, 215–233. [Google Scholar] [CrossRef] [Green Version]
  53. Sayed, A.H.; Lopes, C.G. Adaptive processing over distributed networks. IEICE Trans. Fundam. 2007, E90-A, 1504–1510. [Google Scholar] [CrossRef]
  54. Chen, B.; Zhang, W.N.; Yu, L. Distributed finite-horizon fusion Kalman filtering for bandwidth and energy constrained wireless sensor networks. IEEE Trans. Signal Process. 2014, 62, 797–812. [Google Scholar] [CrossRef]
  55. Thomopoulos, S.C.A.; Viswanathan, R.; Bougoulias, D.K. Optimal distributed decision fusion. IEEE Trans. Aerosp. Electron. Syst. 1989, 25, 761–765. [Google Scholar] [CrossRef]
  56. Zhu, Y.; Zhou, J.; Shen, X.; Song, E.; Luo, Y. Networked Multisensor Decision and Estimation Fusion: Based on Advanced Mathematical Methods; CRC Press: New York, NY, USA, 2012. [Google Scholar]
  57. Haykin, S. Adaptive Filter Theory; Prentice Hall: New York, NY, USA, 2002. [Google Scholar]
  58. Sayed, A.H. Adaptive Filters; Wiley: Hoboken, NJ, USA, 2008. [Google Scholar]
  59. Couzin, I.D. Collective cognition in animal groups. Trends Cogn. Sci. 2009, 13, 36–43. [Google Scholar] [CrossRef]
  60. Sayed, A.H.; Tu, S.Y.; Chen, J.; Zhao, X. Diffusion strategies for adaptation and learning over networks: an examination of distributed strategies and network behavior. IEEE Signal Proces. Mag. 2013, 30, 155–171. [Google Scholar] [CrossRef]
  61. Godsil, C.; Royle, G.F. Algebraic Graph Theory (Graduate Texts in Mathematics); Springer: Berlin, Germany, 2001. [Google Scholar]
  62. Lin, Z.Y.; Wang, L.; Han, Z.; Fu, M. Distributed formation control of multi-agent systems using complex Laplacian. IEEE Trans. Autom. Contr. 2014, 59, 1765–1777. [Google Scholar] [CrossRef]
  63. Trudeau, R.J. Introduction to Graph Theory (Dover Books on Mathematics), 2nd ed.; Dover Publications: New York, NY, USA, 1994. [Google Scholar]
  64. Diestel, R. Graph Theory, 5th ed.; Springer: New York, NY, USA, 2017. [Google Scholar]
  65. Sayed, A.H. Adaptive networks. Proc. IEEE 2014, 102, 460–497. [Google Scholar] [CrossRef]
  66. Tu, S.Y.; Sayed, A.H. Mobile adaptive networks. IEEE J. Sel. Top. Signal Process. 2011, 5, 649–664. [Google Scholar]
  67. Alghunaim, S.A.; Sayed, A.H. Distributed coupled multiagent stochastic optimization. IEEE Trans. Autom. Contr. 2020, 65, 175–190. [Google Scholar] [CrossRef] [Green Version]
  68. Khawatmi, S.; Zoubir, A.M.; Sayed, A.H. Decentralized decision-making over multi–task networks. Signal Process. 2019, 160, 229–236. [Google Scholar] [CrossRef] [Green Version]
  69. Sayed, A.H. Diffusion adaptation over networks. Acad. Press Libr. Signal Process. 2014, 3, 322–454. [Google Scholar]
  70. Zhang, Y.; Wang, C.; Zhao, L.; Chambers, J.A. A spatial diffusion strategy for Tap-length estimation over adaptive networks. IEEE Trans. Signal Process. 2015, 63, 4487–4501. [Google Scholar] [CrossRef]
  71. Sayed, A.H. Adaptation, learning, and optimization over networks. Found. Trends Mach. Learn. 2014, 7, 311–801. [Google Scholar] [CrossRef] [Green Version]
  72. Lopes, C.G.; Sayed, A.H. Incremental adaptive strategies over distributed networks. IEEE Trans. Signal Process. 2007, 55, 4064–4077. [Google Scholar] [CrossRef]
  73. Li, L.; Lopes, C.G.; Chambers, J.; Sayed, A.H. Distributed estimation over an adaptive incremental network based on the affine projection algorithm. IEEE Trans. Signal Process. 2010, 58, 151–164. [Google Scholar] [CrossRef]
  74. Cattivelli, F.; Sayed, A.H. Analysis of spatial and incremental LMS processing for distributed estimation. IEEE Trans. Signal Process. 2011, 59, 1465–1480. [Google Scholar] [CrossRef]
  75. Yin, F.; Fritsche, C.; Jin, D.; Gustafsson, F.; Zoubir, A.M. Cooperative localization in WSNs using Gaussian mixture modeling: distributed ECM algorithms. IEEE Trans. Signal Process. 2015, 63, 1448–1463. [Google Scholar] [CrossRef] [Green Version]
  76. Nedic, A.; Bertsekas, D.P. Incremental subgradient methods for non-differentiable optimization. SIAM J. Opt. 2001, 12, 109–138. [Google Scholar] [CrossRef]
  77. Helou, E.S.; De Pierro, A.R. Incremental subgradients for constrained convex optimization: A unified framework and new methods. SIAM J. Opt. 2009, 20, 1547–1572. [Google Scholar]
  78. Koppel, A.; Sadler, B.M.; Ribeiro, A. Proximity without consensus in online multiagent optimization. IEEE Trans. Signal Process. 2017, 65, 3062–3077. [Google Scholar] [CrossRef]
  79. Barbarossa, S.; Scutari, G. Bio-inspired sensor network design. IEEE Signal Process. Mag. 2007, 24, 26–35. [Google Scholar] [CrossRef]
  80. Cao, X.; Ray Liu, K.J. Decentralized sparse multitask RLS over networks. IEEE Trans. Signal Process. 2017, 65, 6217–6232. [Google Scholar] [CrossRef] [Green Version]
  81. Tan, F.; Guan, X.; Liu, D. Consensus protocol for multi-agent continuous systems. Chin. Phys. B 2008, 17, 3531–3535. [Google Scholar]
  82. Han, F.; Dong, H.; Wang, Z.; Li, G. Local design of distributed H–consensus filtering over sensor networks under multiplicative noises and deception attacks. Int. J. Robust Nonlin. 2019, 29, 2296–2314. [Google Scholar] [CrossRef]
  83. Lorenzo, P.D.; Sayed, A.H. Sparse distributed learning based on diffusion adaptation. IEEE Trans. Signal Process. 2013, 61, 1419–1433. [Google Scholar] [CrossRef] [Green Version]
  84. Chouvardas, S.; Slavakis, K.; Kopsinis, Y.; Theodoridis, S. A sparsity-promoting adaptive algorithm for distributed learning. IEEE Trans. Signal Process. 2012, 60, 5412–5425. [Google Scholar] [CrossRef] [Green Version]
  85. Cattivelli, F.S.; Lopes, C.G.; Sayed, A.H. Diffusion recursive least-squares for distributed estimation over adaptive networks. IEEE Trans. Signal Process. 2008, 56, 1865–1877. [Google Scholar] [CrossRef]
  86. Bertrand, A.; Moonen, M.; Sayed, A.H. Diffusion bias-compensated RLS estimation over adaptive networks. IEEE Trans. Signal Process. 2011, 59, 5212–5224. [Google Scholar] [CrossRef] [Green Version]
  87. Kar, S.; Moura, J.M.F. Convergence rate analysis of distributed gossip (linear parameter) estimation: Fundamental limits and tradeoffs. IEEE J. Sel. Top. Signal Process. 2011, 5, 674–690. [Google Scholar] [CrossRef]
  88. Dimakis, A.G.; Kar, S.; Moura, J.M.F.; Rabbat, M.G.; Scaglione, A. Gossip algorithms for distributed signal processing. Proc. IEEE 2010, 98, 1847–1864. [Google Scholar] [CrossRef] [Green Version]
  89. Aysal, T.C.; Yildiz, M.E.; Sarwate, A.D.; Scaglione, A. Broadcast gossip algorithms for consensus. IEEE Trans. Signal Process. 2009, 57, 2748–2761. [Google Scholar] [CrossRef]
  90. Boyd, S.; Ghosh, A.; Prabhakar, B.; Shah, D. Randomized gossip algorithms. IEEE Trans. Inf. Theory 2006, 52, 2508–2530. [Google Scholar] [CrossRef] [Green Version]
  91. Tsitsiklis, J.; Bertsekas, D.; Athans, M. Distributed asynchronous deterministic and stochastic gradient optimization algorithms. IEEE Trans. Autom. Contr. 1986, 31, 803–812. [Google Scholar] [CrossRef] [Green Version]
  92. Srivastava, K.; Nedic, A. Distributed asynchronous constrained stochastic optimization. IEEE J. Sel. Top. Signal Process. 2011, 5, 772–790. [Google Scholar] [CrossRef]
  93. Kar, S.; Moura, J.M.F. Distributed consensus algorithms in sensor networks: Link failures and channel noise. IEEE Trans. Signal Process. 2009, 57, 355–369. [Google Scholar] [CrossRef] [Green Version]
  94. Zhao, X.; Tu, S.Y.; Sayed, A.H. Diffusion adaptation over networks under imperfect information exchange and non-stationary data. IEEE Trans. Signal Process. 2012, 60, 3460–3475. [Google Scholar] [CrossRef] [Green Version]
  95. Khalili, A.; Tinati, M.A.; Rastegarnia, A.; Chambers, J.A. Steady state analysis of diffusion LMS adaptive networks with noisy links. IEEE Trans. Signal Process. 2012, 60, 974–979. [Google Scholar] [CrossRef]
  96. Olfati-Saber, R. Kalman-consensus filter: optimality, stability, and performance. In Proceedings of the 48h IEEE Conference on Decision and Control (CDC) held jointly with 2009 28th Chinese Control Conference, Shanghai, China, 15–18 December 2009; pp. 7036–7042. [Google Scholar]
  97. Khan, U.A.; Moura, J.M.F. Distributing the Kalman filter for large-scale systems. IEEE Trans. Signal Process. 2008, 56, 4919–4935. [Google Scholar] [CrossRef] [Green Version]
  98. Cattivelli, F.; Sayed, A.H. Diffusion strategies for distributed Kalman filtering and smoothing. IEEE Trans. Autom. Contr. 2010, 55, 2069–2084. [Google Scholar] [CrossRef]
  99. Khan, U.A.; Jadbabaie, A. On the stability and optimality of distributed Kalman filters with finite-time data fusion. In Proceedings of the 2011 American Control Conference, San Francisco, CA, USA, 29 June–1 July 2011; pp. 3405–3410. [Google Scholar]
  100. Chao, C.M.; Wang, Y.Z.; Lu, M.W. Multiple-rendezvous multichannel MAC protocol design for underwater sensor networks. IEEE Trans. Syst. Man. Cybern. Syst. 2013, 43, 128–138. [Google Scholar] [CrossRef]
  101. Ma, J.; Ye, M.; Zheng, Y.; Zhu, Y. Consensus analysis of hybrid multiagent systems: a game–theoretic approach. Int. J. Robust Nonlin. 2019, 29, 1840–1853. [Google Scholar] [CrossRef]
  102. Segarra, S.; Mateos, G.; Marques, A.G.; Ribeiro, A. Blind identification of graph filters. IEEE Trans. Signal Process. 2016, 65, 1146–1159. [Google Scholar] [CrossRef]
  103. Wan, Y.; Wei, D.; Hao, Y. Distributed filtering with consensus strategies in sensor networks: considering consensus tracking error. Acta Autom. Sin. 2012, 38, 1211–1217. [Google Scholar] [CrossRef]
  104. Carli, R.; Chiuso, A.; Schenato, L.; Zampieri, S. Distributed Kalman filtering based on consensus strategies. IEEE J. Sel. Areas Commun. 2008, 26, 622–633. [Google Scholar] [CrossRef] [Green Version]
  105. Lopes, C.G.; Sayed, A.H. Diffusion least-mean squares over adaptive networks: formulation and performance analysis. IEEE Trans. Signal Process. 2008, 56, 3122–3136. [Google Scholar] [CrossRef]
  106. Zhao, X.; Sayed, A.H. Performance limits for distributed estimation over LMS adaptive networks. IEEE Trans. Signal Process. 2012, 60, 5107–5124. [Google Scholar] [CrossRef] [Green Version]
  107. Cattivelli, F.S.; Sayed, A.H. Diffusion LMS strategies for distributed estimation. IEEE Trans. Signal Process. 2010, 58, 1035–1048. [Google Scholar] [CrossRef]
  108. Arablouei, R.; Dogancay, K.; Werner, S.; Huang, Y.F. Adaptive distributed estimation based on recursive least-squares and partial diffusion. IEEE Trans. Signal Process. 2014, 62, 3510–3522. [Google Scholar] [CrossRef]
  109. Mateos, G.; Giannakis, G.B. Distributed recursive least-squares: stability and performance analysis. IEEE Trans. Signal Process. 2012, 60, 3740–3754. [Google Scholar] [CrossRef] [Green Version]
  110. Tu, S.Y.; Sayed, A.H. Diffusion strategies outperform consensus strategies for distributed estimation over adaptive networks. IEEE Trans. Signal Process. 2012, 60, 6217–6234. [Google Scholar] [CrossRef] [Green Version]
  111. Predd, J.B.; Kulkarni, S.R.; Poor, H.V. Distributed learning in wireless sensor networks. IEEE Signal Process Mag. 2006, 23, 56–69. [Google Scholar] [CrossRef] [Green Version]
  112. Chen, J.; Sayed, A.H. Diffusion adaptation strategies for distributed optimization and learning over networks. IEEE Trans. Signal Process. 2012, 60, 4289–4305. [Google Scholar] [CrossRef] [Green Version]
  113. Sayed, A.H.; Tu, S.Y.; Chen, J.S. Online learning and adaptation over networks: more information is not necessarily better. In Proceedings of the 2013 Information Theory and Applications Workshop (ITA), San Diego, CA, USA, 10–15 February 2013; pp. 1–8. [Google Scholar]
  114. Chouvardas, S.; Slavakis, K.; Theodoridis, S. Adaptive robust distributed learning in diffusion sensor networks. IEEE Trans. Signal Process. 2011, 59, 4692–4707. [Google Scholar] [CrossRef]
  115. Ma, H.; Yang, Y.; Chen, Y.; Ray Liu, K.J.; Wang, Q. Distributed state estimation with dimension reduction preprocessing. IEEE Trans. Signal Process. 2014, 62, 3098–3110. [Google Scholar]
  116. Chen, J.; Towfic, Z.J.; Sayed, A.H. Dictionary learning over distributed models. IEEE Trans. Signal Process. 2015, 63, 1001–1016. [Google Scholar] [CrossRef] [Green Version]
  117. Liu, Y.; Li, C.; Zhang, Z. Diffusion sparse least-mean squares over networks. IEEE Trans. Signal Process. 2012, 60, 4480–4485. [Google Scholar] [CrossRef]
  118. Jakovetic, D.; Xavier, J.; Moura, J.M.F. Fast distributed gradient methods. IEEE Trans. Autom. Contr. 2014, 59, 1131–1146. [Google Scholar] [CrossRef] [Green Version]
  119. Wei, E.; Ozdaglar, A.; Jadbabaie, A. A distributed Newton method for Network Utility Maximization–I: algorithm. IEEE Trans. Autom. Contr. 2013, 58, 2162–2175. [Google Scholar] [CrossRef]
  120. Wei, E.; Ozdaglar, A.; Jadbabaie, A. A distributed Newton method for Network Utility Maximization–II: convergence. IEEE Trans. Autom. Contr. 2013, 58, 2176–2188. [Google Scholar] [CrossRef]
  121. Gharehshiran, O.N.; Krishnamurthy, V.; Yin, G. Distributed energy-aware diffusion least mean squares: game-theoretic learning. IEEE J. Sel. Top. Signal Process. 2013, 7, 821–836. [Google Scholar] [CrossRef]
  122. Jiang, C.; Chen, Y.; Ray Liu, K.J. Distributed adaptive networks: A graphical evolutionary game-theoretic view. IEEE Trans. Signal Process. 2013, 61, 5675–5688. [Google Scholar] [CrossRef]
  123. Chen, J.; Sayed, A.H. Distributed Pareto optimization via diffusion strategies. IEEE J. Sel. Top. Signal Process. 2013, 7, 205–220. [Google Scholar] [CrossRef] [Green Version]
  124. Zhao, X.; Sayed, A.H. Asynchronous adaptation and learning over networks–part I: modeling and stability analysis. IEEE Trans. Signal Process. 2015, 63, 811–826. [Google Scholar] [CrossRef]
  125. Zhao, X.; Sayed, A.H. Asynchronous adaptation and learning over networks-part II: Performance Analysis. IEEE Trans. Signal Process. 2015, 63, 827–842. [Google Scholar] [CrossRef] [Green Version]
  126. Zhao, X.; Sayed, A.H. Asynchronous adaptation and learning over networks-part III: comparison analysis. IEEE Trans. Signal Process. 2015, 63, 843–858. [Google Scholar] [CrossRef] [Green Version]
  127. Zhong, M.; Cassandras, C.G. Asynchronous distributed optimization with event-driven communication. IEEE Trans. Autom. Contr. 2010, 55, 2735–2750. [Google Scholar] [CrossRef]
  128. Edwards, J. Signal processing leads to new wireless technologies [Special reports]. IEEE Signal Process Mag. 2014, 31, 10–14. [Google Scholar] [CrossRef]
Figure 1. The topology of intelligent wireless sensor network (IWSN).
Figure 2. The IWSN with the distributed cooperative strategy.
Figure 3. The topology of IWSN with N intelligent nodes.
Figure 4. Two types of distributed estimation with cooperative strategy: incremental strategy and consensus strategy. (a) Under the incremental strategy, a cyclic path starting at node l visits all intelligent nodes of Figure 3. (b) Under the consensus strategy, the neighborhood of intelligent node k is defined as {1, 7, l, k}.
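The two updates behind Figure 4 can be summarized in a few lines. The following Python sketch is only an illustration and is not taken from the paper: a common parameter vector is estimated (a) by passing a single estimate around a cyclic path, with a local LMS correction at each visited node, and (b) by letting every node average its neighborhood estimates and then perform its own LMS step. The ring topology, step size, and noise level are assumed values chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, mu = 8, 4, 0.05                       # nodes, parameter length, step size (assumed)
w_o = rng.standard_normal(M)                # unknown common parameter vector

def node_sample():
    """Draw one regressor/measurement pair from a linear model with additive noise."""
    u = rng.standard_normal(M)
    d = u @ w_o + 0.1 * rng.standard_normal()
    return u, d

# --- Incremental strategy (Figure 4a): one estimate travels around a cyclic path ---
psi = np.zeros(M)
for _ in range(200):                        # repeated passes over the ring
    for k in range(N):                      # nodes visited in cyclic order
        u, d = node_sample()
        psi = psi + mu * u * (d - u @ psi)  # local LMS update on the visiting estimate

# --- Consensus strategy (Figure 4b): each node averages its neighborhood, then adapts ---
A = np.zeros((N, N))
for k in range(N):                          # ring-plus-self neighborhood, uniform weights
    for l in (k - 1, k, (k + 1) % N):
        A[l, k] = 1.0 / 3.0
W = np.zeros((N, M))                        # row k holds node k's current estimate
for _ in range(200):
    W_prev = W.copy()
    for k in range(N):
        u, d = node_sample()
        combined = A[:, k] @ W_prev         # neighborhood average of previous estimates
        W[k] = combined + mu * u * (d - u @ W_prev[k])

print("incremental error:", np.linalg.norm(psi - w_o))
print("consensus error  :", np.linalg.norm(W.mean(axis=0) - w_o))
```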
Figure 5. The two structures of diffusion strategy.
Figure 6. A strongly-connected topology of IWSN with N = 20 agents.
Figure 7. The learning curves for three cooperative strategies: ATC diffusion, CTA diffusion, and consensus.
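The comparison in Figure 7 can be reproduced in spirit with a short simulation. The sketch below is an assumption-laden illustration rather than the paper's experiment: twenty nodes on an arbitrary connected topology run adapt-then-combine (ATC) diffusion, combine-then-adapt (CTA) diffusion, and consensus LMS on the same data stream, and the network mean-square deviation (MSD) is recorded as the learning curve. The step size, noise variance, and uniform combination weights are assumed values.

```python
import numpy as np

N, M, mu, iters = 20, 5, 0.02, 1000          # assumed network size, filter length, step size
w_o = np.random.default_rng(1).standard_normal(M)

# Assumed topology: a ring plus a few random links, with uniform (column-stochastic)
# combination weights; any connected choice serves the illustration.
adj = np.eye(N, dtype=bool)
topo_rng = np.random.default_rng(2)
for k in range(N):
    adj[k, (k + 1) % N] = adj[(k + 1) % N, k] = True
for _ in range(15):
    i, j = topo_rng.integers(0, N, size=2)
    adj[i, j] = adj[j, i] = True
A = adj / adj.sum(axis=0, keepdims=True)

def run(mode, seed=3):
    rng = np.random.default_rng(seed)        # same data stream for all three strategies
    W = np.zeros((N, M))
    curve = np.empty(iters)
    for i in range(iters):
        U = rng.standard_normal((N, M))                   # one regressor row per node
        d = U @ w_o + 0.1 * rng.standard_normal(N)        # noisy linear measurements
        if mode == "cta":
            W = A.T @ W                                   # combine-then-adapt: combine first
        grad = (d - np.sum(U * W, axis=1))[:, None] * U   # per-node LMS direction
        if mode == "atc":
            W = A.T @ (W + mu * grad)                     # adapt-then-combine
        elif mode == "consensus":
            W = A.T @ W + mu * grad                       # combine and adapt in one step
        else:
            W = W + mu * grad                             # cta: adapt after combining
        curve[i] = 10 * np.log10(np.mean(np.sum((W - w_o) ** 2, axis=1)))  # network MSD (dB)
    return curve

for mode in ("atc", "cta", "consensus"):
    c = run(mode)
    print(f"{mode:9s} steady-state MSD ~ {c[-200:].mean():.1f} dB")
```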
