Article

Consensus-Based ALADIN Method for Faster Decentralized Estimation of the Laplacian Spectrum

1 The Faculty of Electrical Engineering, The University of Danang—University of Science and Technology, 54 Nguyen Luong Bang Street, Danang City 550000, Vietnam
2 Manufacturing Execution and Control Group, Singapore Institute of Manufacturing Technology, Singapore 138634, Singapore
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(16), 5625; https://doi.org/10.3390/app10165625
Submission received: 13 June 2020 / Revised: 10 July 2020 / Accepted: 13 July 2020 / Published: 13 August 2020

Abstract:
With the upcoming Fifth Industrial Revolution, humans and collaborative robots will work side by side in production. Each acts as an agent in a connected world, understood as a multi-agent system, in which the Laplacian spectrum plays an important role: it characterizes the connectivity of complex networks, quantifies their robustness, and allows local checking of the controllability and observability of a dynamic controlled network. This paper presents a new method, based on the Augmented Lagrangian based Alternating Direction Inexact Newton (ALADIN) method, to speed up the convergence of Laplacian spectrum estimation via factorization of the average consensus matrix into Laplacian-based matrices. Herein, the non-zero distinct Laplacian eigenvalues are the inverses of the stepsizes $\{\alpha_t, t = 1, 2, \dots\}$ of those matrices. The problem is therefore to reach agreement on the stepsize values among all agents in the network while ensuring that the factorization of the average consensus matrix is accomplished. Furthermore, in order to obtain the entire Laplacian spectrum, it is necessary to estimate the multiplicities of these distinct eigenvalues. Consequently, a non-convex optimization problem is formed and solved using the ALADIN method. The effectiveness of the proposed method is evaluated through simulation results and a comparison with the Lagrange-based method.

1. Introduction

Leaders around the world refer to the present era of connectivity as the Fourth Industrial Revolution [2]. Industry 4.0 has contributed significantly to the transformation of many industries, such as transportation, manufacturing, health-care, and agriculture, by enabling data transmission and integration across disciplines. Today we are in this fourth generation, one of connection between our physical, digital, social, and biological worlds. In particular, data and information from these different areas have been made available and connected in complex and dense networks. For better integration, utilization, and control of these complex networks, researchers from different communities have made contributions ranging from topology inference to control strategy for interacting systems, which are modeled by graphs whose vertices represent the components of the system and whose edges stand for the interactions between these components.
In the last decade, there has been a dramatic increase in the number of publications on the cooperative control of multi-agent systems. In the control of multi-agent systems, the performance of the whole system depends on both the structure and the connections between the individuals of the system. The connections can be described by the graph Laplacian matrix, whose spectrum is involved in several useful properties of the network system [3,4]. For instance, the second smallest graph Laplacian eigenvalue, the so-called algebraic connectivity of the graph, plays the main role in the convergence time of various distributed algorithms as well as in the performance and robustness of dynamical systems [5]. Conceptually, agents share information with each other to achieve common objectives, relative position information, or common control algorithms. This is called the consensus problem [6,7], in which a group of agents approaches average consensus in an undirected network under a simple linear iteration scheme.
It is well known that in a multi-agent system, consensus is achieved if and only if the network is connected, i.e., the algebraic connectivity is strictly greater than zero [8]. On the other hand, the largest Laplacian eigenvalue is an important factor in deciding the stability of the system. For example, minimizing the spectral radius in [9] maximizes the robustness of the network to time delays under a linear consensus protocol. Furthermore, to speed up consensus algorithms, the optimal Laplacian-based consensus matrix is obtained with a stepsize equal to the inverse of the sum of the smallest and the largest non-zero graph Laplacian eigenvalues [10]. Moreover, the authors in [11,12] have proved that the spectrum of the Laplacian matrix can be used to design consensus matrices that achieve average consensus in a finite number of steps.
In order to investigate network efficiency, the structural robustness of a network, i.e., its ability to maintain performance despite changes in the network topology [13], has also been studied. The concept of natural connectivity as a spectral measure of robustness was introduced in [14]; it is expressed mathematically as the average eigenvalue of the adjacency matrix of the graph representing the network topology. The Laplacian spectrum $sp(L) = \{\lambda_1^{m_1}, \dots, \lambda_i^{m_i}, \dots\}$ can also be employed to compute robustness indices, for instance the number of spanning trees and the effective graph resistance (Kirchhoff index) [15]. The smaller the Kirchhoff index (or the greater the number of spanning trees), the more robust the network. In addition, it has been pointed out that adding an edge strictly decreases the Kirchhoff index and hence increases the robustness. In [16], the authors proposed a method to collaboratively monitor the robustness of networks partitioned into sub-networks through the Kirchhoff index $R_L = N\sum_{i=2}^{D+1} \frac{m_i}{\lambda_i}$. Here, an Alternating Direction Method of Multipliers (ADMM)-based algorithm was employed to perform the factorization of the averaging matrix and to compute the average degree of the network concurrently. However, the main point in that work was the reformulation into a convex optimization problem, which is convenient for applying the ADMM method. In addition, the impact of the Laplacian spectrum on power systems is expressed through energy management in smart grids [17] and the determination of grid robustness against low-frequency disturbances in [18]. In the latter work, within the framework of spectral graph theory, the authors reveal that the decomposition of the frequency signal along the scaled Laplacian spectrum, when the damping-inertia ratios are uniform across buses, not only makes the system respond faster but also helps lower the system nadir after a disturbance.
In dynamic network systems, the spectrum of Laplacian matrix can also be utilized for locally checking the controllability and the observability [19].
From the short literature survey above, it is obvious that the Laplacian spectrum plays an important role in many fields: it can be used to design consensus matrices [11,12], to compute robustness indices [15,16,17,18], or to check controllability and observability [19]. Hence, it is desirable to have an efficient method for monitoring the Laplacian spectrum of a dynamic network system.
One thing to remark here is that if the global network topology is known a priori, the Laplacian matrix can easily be deduced. However, implementing a centralized structure is expensive due to the high computational cost, the heavy communication infrastructure, and the problems arising from large dimensionality. Additionally, a failure at a single point would affect the whole network. Therefore, our study is restricted to the assumption that the network topology (represented by the Laplacian matrix) is initially unknown. A dominant contribution of this paper is the possibility of implementing this monitoring scheme in a decentralized manner.
In this paper, we present an Augmented Lagrangian based Alternating Direction Inexact Newton (ALADIN) method to estimate the Laplacian spectrum in a decentralized scheme for dynamic controlled networks. The key feature of this paper is the direct solution of the non-convex optimization problem for Laplacian spectrum estimation using the ALADIN method. To simplify, the scope of this study is restricted to noise-free networks, and the number of agents N in the network is assumed to be known a priori, e.g., via the random walk algorithm of [20]. The network is modeled, then the Laplacian eigenvalues and the average consensus are retrieved respectively. Since the Laplacian matrix is not directly computable when the network topology is unknown, the decentralized estimation of the Laplacian spectrum has been addressed with three main approaches in the recent literature: Fast Fourier Transform (FFT)-based methods [21,22], local eigenvalue decomposition of given observability-based matrices [23], and distributed factorization of the averaging matrix $J_N = \frac{1}{N}\mathbf{1}\mathbf{1}^T$ [24]. FFT-based methods require a specific protocol and do not make use of the measurements already available from the consensus protocol. On the other hand, the method in [23] allows using the transient of the average consensus protocol, but for several consecutive initial conditions. The distributed factorization of the averaging matrix in [24] yields the inverses of the non-zero Laplacian eigenvalues and can be solved as a constrained consensus problem: the Laplacian eigenvalues are deduced as the inverses of the stepsizes in each estimated factor, where these factors are constrained to be structured as Laplacian-based consensus matrices. In [1], the authors applied a gradient descent algorithm to solve this optimization problem, for which only local minima were guaranteed, together with a slow convergence rate.
To address this issue, the authors of [16,24] introduced an interesting workaround by reformulating the non-convex optimization problem of [1] into a convex one, solved by applying an ADMM-based method. However, this is an indirect approach obtained through an adequate re-parameterization. In this paper, we inherit the idea in [1] to form the non-convex optimization problem for decentralized estimation of the Laplacian spectrum and then solve it directly using the ALADIN method proposed in [25]. The proposed approach is then evaluated on two network structures, with a performance comparison against the gradient descent method of [1].
In this paper, we first introduce the background of average consensus and state the problem in Section 2, then present the distributed estimation of the Laplacian spectrum in Section 3; the structure of this section is illustrated in Figure 1. Before concluding the paper, simulation results are described in Section 4 to evaluate the efficiency of the proposed method.

2. Background and Problem Statement

Consider a dynamic network whose interconnection is represented by $G(V, E)$, an undirected graph with vertex set V and edge set E, consisting of $N = |V|$ nodes. Let us denote by $N_i = \{j \in V : (i,j) \in E\}$ the set of neighbors of node i and by $d_i = |N_i|$ its degree. Interactions between nodes can be captured by the Laplacian matrix $L \in \mathbb{R}^{N \times N}$ with entries $l_{ii} = d_i$, $l_{ij} = -1$ if $j \in N_i$, and $l_{ij} = 0$ elsewhere. Denote the Laplacian spectrum by $sp(L) = \{\lambda_1^{m_1}, \lambda_2^{m_2}, \dots, \lambda_{D+1}^{m_{D+1}}\}$, where the distinct Laplacian eigenvalues are in increasing order $0 = \lambda_1 < \lambda_2 < \dots < \lambda_{D+1}$ and the superscripts stand for the multiplicities $m_i = m(\lambda_i)$, while $S_2 = \{\lambda_2, \dots, \lambda_{D+1}\}$ stands for the set of the non-zero distinct Laplacian eigenvalues.

2.1. Average Consensus

For each node $i \in V$, let $x_i(t)$ denote the value of node i at timestep t. Define $x(t) = [x_1(t), x_2(t), \dots, x_N(t)]^T$, where N is the number of nodes in the network. Average consensus can be achieved by the following linear iteration scheme:
$$x(t) = (I_N - \alpha L)\, x(t-1),$$
where $\alpha$ is an appropriately selected stepsize [26], by which all nodes converge asymptotically to the same value $\bar{x}$, the average of the initial ones: $\bar{x}\mathbf{1} = \lim_{t\to\infty} x(t) = \frac{1}{N}\mathbf{1}\mathbf{1}^T x(0)$.
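As a concrete illustration, the iteration above can be simulated on a small hypothetical 4-node path graph (our own example, not from the paper); with a stepsize below $2/\lambda_{max}$, every node converges to the average of the initial values:

```python
import numpy as np

def laplacian_from_edges(n, edges):
    """Build the graph Laplacian L with l_ii = d_i and l_ij = -1 for neighbors."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1
        L[j, j] += 1
        L[i, j] -= 1
        L[j, i] -= 1
    return L

# Hypothetical 4-node path graph 0-1-2-3
L = laplacian_from_edges(4, [(0, 1), (1, 2), (2, 3)])
alpha = 0.3                            # stepsize below 2/lambda_max for this graph
x = np.array([1.0, 2.0, 3.0, 6.0])     # initial node values, average = 3.0
for _ in range(200):
    x = x - alpha * (L @ x)            # x(t) = (I_N - alpha L) x(t-1)
```

After enough iterations, all entries of x are numerically equal to the initial average, here 3.0.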
On the other hand, it has been shown in [11,12] that the average consensus matrix can be factored as
$$\prod_{t=D}^{1} W_t = \frac{1}{N}\mathbf{1}\mathbf{1}^T,$$
where $W_t = \vartheta_t I_N - \alpha_t L$, with $\vartheta_t$ and $\alpha_t$ being parameters to be designed. In [12], the solution was given by $\vartheta_t = 1$ and $\alpha_t = \frac{1}{\lambda_{t+1}}$, $\lambda_{t+1}$ being a non-zero Laplacian eigenvalue. Owing to the above factorization, average consensus can then be reached in D steps, D being the number of distinct non-zero Laplacian eigenvalues:
$$\bar{x} = x(D) = \prod_{t=D}^{1} W_t\, x(0) = \frac{1}{N}\mathbf{1}\mathbf{1}^T x(0) \quad \text{for all } x(0) \in \mathbb{R}^N.$$
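The finite-time property can be checked numerically on a small hypothetical star graph (an assumed example with known spectrum): taking $\alpha_t = 1/\lambda_{t+1}$ for each distinct non-zero eigenvalue, exact average consensus is reached in D steps:

```python
import numpy as np

# Hypothetical 4-node star graph (hub = node 0); sp(L) = {0, 1, 1, 4}
L = np.array([[ 3., -1., -1., -1.],
              [-1.,  1.,  0.,  0.],
              [-1.,  0.,  1.,  0.],
              [-1.,  0.,  0.,  1.]])
eigs = np.linalg.eigvalsh(L)
distinct = sorted({round(e, 9) for e in eigs if e > 1e-9})  # distinct non-zero: [1.0, 4.0]
x = np.array([4.0, 0.0, 5.0, 3.0])     # initial values, average = 3.0
for lam in distinct:
    x = x - (1.0 / lam) * (L @ x)      # apply factor W_t = I_N - (1/lambda_t) L
# after D = len(distinct) = 2 steps every node holds the exact average
```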

2.2. Problem Statement

It can be noted that, by factorizing the average consensus matrix while constraining the factor matrices to be of the form $I_N - \alpha_t L$, the eigenvalues of the Laplacian matrix can be deduced as the inverses of the $\alpha_t$. The uniqueness of this factorization has been proved in [1].
Lemma 1.
([1]) Let $\lambda_2, \dots, \lambda_{D+1} \ne 0$ be the D distinct non-zero eigenvalues of the graph Laplacian matrix L. Then, up to permutation, the sequence $\{\alpha_i\}_{i=1,\dots,D}$, with $\alpha_i = \frac{1}{\lambda_{i+1}}$, $i = 1, 2, \dots, D$, is the unique sequence that yields the minimal factorization of the average consensus matrix as $\frac{1}{N}\mathbf{1}\mathbf{1}^T = \prod_{i=1}^{D}(I_N - \alpha_i L)$.
Therefore, in order to implement the proposed method, some knowledge of the network is required: the number of components N of the given network should be obtained by adding a learning mechanism as a configuration step. Practically, in most systems where communications are involved, learning sequences are used for communication channel identification or for synchronization. In [20], the authors proposed a method using random walks to estimate global properties of large connected undirected graphs, such as the numbers of vertices and edges; however, this is not in the scope of this paper. Indeed, assuming that the number of agents N is known a priori, a consensus protocol as in [10] is uploaded to each agent to compute the average consensus value $\bar{x}$. The main task in our study is to estimate the whole Laplacian spectrum.

3. Distributed Estimation of Laplacian Spectrum

Given an initial input-output pair $\{x(0), \bar{x}\}$, with $\bar{x} = \frac{1}{N}\mathbf{1}\mathbf{1}^T x(0)$, the matrix factorization problem (3) is equivalent to minimizing the cost function $E(W) = \|x(D) - \bar{x}\|^2$, which can be rewritten as follows:
$$E(W) = \left\| \prod_{t=D}^{1} W_t\, x(0) - \bar{x} \right\|^2,$$
where D is the number of steps before reaching average consensus and $W_t = I_N - \alpha_t L$.
Note that there is no need for a central node to set the initial input-output pair. Indeed, such a pair can be obtained after running a standard average consensus algorithm. Each node keeps in memory its own initial value and the consensus value.
Solving this factorization problem consists in finding the sequence of stepsizes $\{\alpha_t\}_{t=1,\dots,D}$. Obviously, the $\alpha_t$ are global parameters. To relax this, define the factor matrices as $W_t = I_N - \Lambda_t L$, where $\Lambda_t = \mathrm{diag}(\alpha_t)$, $\alpha_t = [\alpha_{t,1}, \alpha_{t,2}, \dots, \alpha_{t,N}]$, $t = 1, 2, \dots, D$. The problem above can then be reformulated as a constrained consensus problem, i.e., computing the sequence of stepsizes $\{\alpha_t\}$ such that $\alpha_{t,1} = \alpha_{t,2} = \dots = \alpha_{t,N}$. Moreover, in Section 2.1, D denotes the number of non-zero distinct Laplacian eigenvalues. However, in this work the Laplacian matrix is assumed not to be known a priori; therefore, the number of factors is set to $h = N - 1$, since N can be estimated in the configuration step through the random walk algorithm proposed in [20]. The Laplacian spectrum estimation procedure is divided into the following stages:
  • Stage 1: Distributed estimation of a set of candidate non-zero Laplacian eigenvalues $S_1 = \{\lambda_1, \lambda_2, \dots, \lambda_h\}$, which contains the set of D non-zero distinct Laplacian eigenvalues $S_2 = \{\lambda_1, \lambda_2, \dots, \lambda_D\}$.
  • Stage 2: Elimination of the wrong eigenvalues in the set $S_1$ to obtain the set $S_2$.
  • Stage 3: Estimation of the multiplicities m corresponding to each eigenvalue in the set $S_2$ to obtain the whole Laplacian spectrum $sp(L)$.

3.1. Distributed Estimation of Non-Zero Laplacian Eigenvalues

To carry out the factorization of the average consensus matrix distributively, with factors structured as Laplacian-based consensus matrices, the idea is to minimize the disagreement between neighbors on the value of $\alpha_t$ while ensuring that the factorization of the average consensus matrix is achieved. Such a factorization is assessed by constraining the values of the nodes after h iterations of the consensus algorithm to be equal to the average of the initial values:
$$\min_{\alpha_t \in \mathbb{R}^{N},\; t = 1, 2, \dots, h} \; \frac{1}{2}\sum_{t=1}^{h}\sum_{i \in V}\sum_{j \in N_i} (\alpha_{t,j} - \alpha_{t,i})^2 \quad \text{subject to } x(h) = \bar{x}$$
or, it can be rewritten in the following form:
$$\min_{\alpha_t \in \mathbb{R}^{N}} \; \frac{1}{2}\sum_{t=1}^{h} \alpha_t^T L\, \alpha_t \quad \text{subject to } x(h) = \bar{x}.$$
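The step from the neighbor-disagreement sum to the Laplacian quadratic form rests on the standard identity $\sum_{i\in V}\sum_{j\in N_i}(a_j - a_i)^2 = 2\,a^T L a$, which can be checked numerically on a hypothetical 4-node cycle (our own example):

```python
import numpy as np

# Laplacian of a hypothetical 4-node cycle 0-1-2-3-0
L = np.array([[ 2., -1.,  0., -1.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [-1.,  0., -1.,  2.]])
rng = np.random.default_rng(0)
a = rng.standard_normal(4)

# Neighbor-disagreement sum: each edge is counted once per direction
disagreement = sum((a[j] - a[i]) ** 2
                   for i in range(4) for j in range(4) if L[i, j] == -1)
quadratic = 2 * a @ L @ a              # equals the disagreement sum
```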
This optimization problem has been solved by applying the Augmented Lagrange method [1]. However, the disadvantage of this method is its slow convergence rate, due to the fact that the problem is non-convex. To overcome this issue, the authors of [16,27] suggested an interesting variant that converts the non-convex function into a convex one, so that the optimization can be solved easily and effectively by the Alternating Direction Method of Multipliers (ADMM) [16,27]. In this paper, we propose a method that solves the non-convex problem directly and effectively by employing the ALADIN method, which is described as follows:
(1)
Step 1: ALADIN solves in parallel a sequence of equality-constrained non-linear programs (NLPs) by introducing augmented variables $y_t$ as follows:
$$\min_{\alpha_t \in \mathbb{R}^{N},\; t=1,\dots,h} \; \frac{1}{2}\alpha_t^T L \alpha_t + \lambda^T(\alpha_t - y_t) + \frac{\rho}{2}\|\alpha_t - y_t\|_{\Sigma_t}^2 \quad \text{subject to } x(h) - \bar{x} = 0 \;|\; \beta, \qquad y_{t,i} = y_{t,j}, \; i = 1,\dots,N,\; j \in N_i \quad (C)$$
where $\rho$ and $\beta$ are the penalty parameter and the multiplier of the equality constraint, respectively, and $C = \{y_t : y_{t,i} = y_{t,j}, \; i = 1,\dots,N, \; j \in N_i\}$.
One thing to note here is that, given the initial values $x_i(0)$, $i = 1, 2, \dots, N$, running a standard consensus algorithm determines the average value $\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i(0)$. In addition, the positive semi-definite scaling matrices $\Sigma_t$ can be initialized randomly or simply set to the identity $I_N$. In the optimization problem (7), the augmented variables $y_t$ are introduced with respect to the constraint C. However, since this NLP is solved to define the variables $\alpha_t$, the constraint C is relaxed at this step.
The solution $\alpha_t^{[k+1]}, \beta^{[k+1]}$ obtained from (7) is then used to check the stopping criteria in the next steps of the ALADIN-based procedure, k being the iteration index of the optimization process. Herein, if $\frac{1}{2}\sum_{t=1}^{h}\sum_{i\in V}\sum_{j\in N_i}(\alpha_{t,j} - \alpha_{t,i})^2 < \epsilon$ and $\|\rho\,\Sigma_t(\alpha_t - y_t)\|_1 \le \epsilon$, then $\alpha_t^*$ is obtained and the algorithm stops.
(2)
Step 2: The gradients, Jacobian matrices, and Hessian matrices are estimated for the subsequent quadratic programming (QP) subproblem as follows:
$$g_t = \nabla_{\alpha_t}\left\{ \frac{1}{2}\alpha_t^T L \alpha_t + \beta_t^T (x(h) - \bar{x}) \right\}$$
$$C_t = \nabla_{\alpha_t} (x(h) - \bar{x})^T$$
$$B_t = \nabla_{\alpha_t}^2 \left( \frac{1}{2}\alpha_t^T L \alpha_t \right).$$
(3)
Step 3: Analogously to the inexact SQP method, the QP problem is solved to find $\Delta\alpha_t^{[k]}$ and the affine multiplier $\lambda_{QP}^{[k]}$:
$$\min_{\Delta\alpha \in \mathbb{R}^{N\times h},\; s \in \mathbb{R}^{N}} \; \sum_{t=1}^{h}\left\{ \frac{1}{2}\Delta\alpha_t^T B_t \Delta\alpha_t + g_t^T \Delta\alpha_t \right\} + \lambda^T s + \frac{\mu}{2}\|s\|^2 \quad \text{subject to } \sum_{t=1}^{h}(\alpha_t + \Delta\alpha_t - y_t) = s \;|\; \lambda_{QP}, \qquad C_t \Delta\alpha_t = 0, \; t = 1,\dots,h \;|\; \eta$$
where s is a slack variable introduced into the QP subproblem to avoid numerical issues when the penalty parameter $\mu$ becomes large.
(4)
Step 4: The final step is to update $\lambda^{[k+1]}$, $y_t^{[k+1]}$, $B_t^{[k+1]}$:
$$\lambda^{[k+1]} = \lambda_{QP}^{[k]}$$
$$\hat{y}_t^{[k]} = \alpha_t^{[k]} + \Delta\alpha_t^{[k]}$$
This update rule corresponds to the full step with $a_1 = a_2 = a_3 = 1$ in the steplength computation proposed in [25]. Then, $\hat{y}^{[k]}$ is projected onto the constraint C to derive $y^{[k+1]}$.
The steps are repeated until the stopping criteria are satisfied.
In order to derive the distributed algorithm, let us take a closer look at each step of the proposed ALADIN-based method.

3.1.1. Implementation of Decoupled Nonlinear Problems

Firstly, in order to solve the decoupled NLP (7) for $t = 1, \dots, h$, the Augmented Lagrange method is applied. Hence, we introduce the Augmented Lagrange function with Lagrange multiplier $\beta$ and penalty parameter c as below:
$$H_{1,t} = \frac{1}{2}\alpha_t^T L \alpha_t + \lambda^T(\alpha_t - y_t) + \frac{\rho}{2}\|\alpha_t - y_t\|_{\Sigma_t}^2 + \beta^T(x(h) - \bar{x}) + \frac{c}{2}\|x(h) - \bar{x}\|_2^2$$
The solution of this problem can be obtained by applying a gradient descent method iteratively:
$$\alpha_t^{[k+1]} = \alpha_t^{[k]} - b\,\frac{\partial H_{1,t}}{\partial \alpha_t}$$
$$\beta^{[k+1]} = \beta^{[k]} + c\,(x(h) - \bar{x})$$
where b stands for the stepsize of the gradient descent method, which can be a fixed constant or determined by a line-search algorithm such as the Wolfe conditions, backtracking, or the Armijo rule in [28], while c denotes the penalty parameter of the Augmented Lagrange method.
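The pattern of these two updates, a primal gradient step on the augmented Lagrangian followed by dual ascent on the constraint residual, can be sketched on a toy equality-constrained problem of our own (not the paper's cost function): minimize $\frac{1}{2}\|x\|^2$ subject to $x_1 + x_2 = 2$, whose solution is $x = (1, 1)$ with multiplier $\beta = -1$:

```python
import numpy as np

c, b = 5.0, 0.05                 # penalty parameter and gradient stepsize
A = np.array([1.0, 1.0])         # equality constraint: A @ x - 2 = 0
x = np.zeros(2)
beta = 0.0                       # Lagrange multiplier

for _ in range(2000):
    g = A @ x - 2.0                          # constraint residual
    grad = x + beta * A + c * g * A          # gradient of the augmented Lagrangian
    x = x - b * grad                         # primal descent step
    beta = beta + c * (A @ x - 2.0)          # dual ascent step on the multiplier
```

The iterates converge to x = (1, 1) and beta = -1; the same interleaved primal/dual scheme is what Algorithm 1 runs on the NLP (7).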
Lemma 2.
The derivative of the Lagrange function (14) is obtained as follows:
$$\frac{\partial H_{1,t}}{\partial \alpha_t} = L\alpha_t + \lambda + \rho\,\Sigma_t(\alpha_t - y_t) - \mathrm{diag}^{-1}(\alpha_t)\,\mathrm{diag}(x_{t-1} - x_t)\,\delta_t - \mathrm{diag}^{-1}(\alpha_t)\,\mathrm{diag}(x_{t-1} - x_t)\,e_t$$
where $\delta_h = \beta$ and $\delta_t = W_{t+1}\delta_{t+1}$, while $e_h = x(h) - \bar{x}$ and $e_t = W_{t+1} e_{t+1}$.
The proof is shown in Appendix A.

3.1.2. Implementation of the Coupling Quadratic Programming (QP)

Now, we apply the Karush–Kuhn–Tucker (KKT) conditions to solve the quadratic program (QP) (11). The Augmented Lagrange function is described as follows:
$$H_2 = \sum_{t=1}^{h}\left[ \frac{1}{2}\Delta\alpha_t^T B_t \Delta\alpha_t + g_t^T \Delta\alpha_t + \lambda_{QP}^T \Delta\alpha_t \right] + (\lambda^T - \lambda_{QP}^T)\,s + \frac{\mu}{2}\|s\|^2 + \lambda_{QP}^T\sum_{t=1}^{h}(\alpha_t - y_t) + \sum_{t=1}^{h}\eta_t^T C_t \Delta\alpha_t$$
The KKT conditions, shown in Appendix B, yield the following system of equations:
$$\begin{cases} B_t\,\Delta\alpha_t + \lambda_{QP} + C_t^T \eta_t = -g_t, & t = 1, 2, \dots, h \\ \sum_{t=1}^{h}\Delta\alpha_t - \frac{1}{\mu}\lambda_{QP} = -\frac{1}{\mu}\lambda - \sum_{t=1}^{h}(\alpha_t - y_t) \\ C_t\,\Delta\alpha_t = 0 \end{cases}$$
The solutions of this system of equations, $\Delta\alpha_t^*$, $\lambda_{QP}^*$, $\eta_t^*$, satisfy the equivalent matrix form:
$$\begin{bmatrix}
B_1 & 0 & \cdots & 0 & I & C_1^T & 0 & \cdots & 0\\
0 & B_2 & \cdots & 0 & I & 0 & C_2^T & \cdots & 0\\
\vdots & & \ddots & & \vdots & & & \ddots & \vdots\\
0 & 0 & \cdots & B_h & I & 0 & 0 & \cdots & C_h^T\\
I & I & \cdots & I & -\frac{1}{\mu}I & 0 & 0 & \cdots & 0\\
C_1 & 0 & \cdots & 0 & 0 & 0 & 0 & \cdots & 0\\
0 & C_2 & \cdots & 0 & 0 & 0 & 0 & \cdots & 0\\
\vdots & & \ddots & & \vdots & & & \ddots & \vdots\\
0 & 0 & \cdots & C_h & 0 & 0 & 0 & \cdots & 0
\end{bmatrix}
\begin{bmatrix}\Delta\alpha_1^*\\ \Delta\alpha_2^*\\ \vdots\\ \Delta\alpha_h^*\\ \lambda_{QP}^*\\ \eta_1^*\\ \eta_2^*\\ \vdots\\ \eta_h^*\end{bmatrix}
=
\begin{bmatrix}-g_1\\ -g_2\\ \vdots\\ -g_h\\ -\frac{1}{\mu}\lambda - \sum_{t=1}^{h}(\alpha_t - y_t)\\ 0\\ 0\\ \vdots\\ 0\end{bmatrix}$$
This linear system can be solved by any linear solver. One thing to note here is the adaptive parameter $\mu$: we can start with a fairly small $\mu$ and adapt it during the optimization to control how tightly the coupling conditions are enforced.
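The structure of (19), Hessian blocks bordered by a multiplier column and constraint-Jacobian rows, is that of a KKT system. A minimal sketch on a hypothetical equality-constrained QP (with assumed data, and without the coupling and slack terms of (11)) shows the single linear solve:

```python
import numpy as np

# Hypothetical QP: min 1/2 d^T B d + g^T d  subject to  C d = 0
B = np.array([[2.0, 0.0],
              [0.0, 4.0]])       # Hessian block (assumed data)
g = np.array([-2.0, -4.0])
C = np.array([[1.0, 1.0]])       # constraint Jacobian (assumed data)

# Bordered KKT matrix: [B C^T; C 0] [d; eta] = [-g; 0]
K = np.block([[B, C.T],
              [C, np.zeros((1, 1))]])
rhs = np.concatenate([-g, np.zeros(1)])
sol = np.linalg.solve(K, rhs)
d, eta = sol[:2], sol[2:]        # primal step and constraint multiplier
```

The computed step d is feasible (C d = 0) and stationary for the constrained QP, which is exactly the role of the solution of (19) at each ALADIN iteration.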

3.1.3. Implementation of an ADMM-Based Algorithm

Now, the update steps (12) and (13) are executed. Since the resulting $\hat{y}_{t,i}$ must satisfy the constraint (C), we solve the following optimization problem:
$$\min_{y_i \in \mathbb{R}^{h}} \; \frac{1}{2}\sum_{i=1}^{N}\|y_i - \hat{y}_i\|^2 \quad \text{subject to } y_j = y_i, \; i = 1,\dots,N, \; j \in N_i \quad (C)$$
To solve this optimization problem, the Alternating Direction Method of Multipliers (ADMM) in [16,27] can be employed by introducing auxiliary variables $z_{ij}$. Hence, the optimization can be rewritten as follows:
$$\min_{y} \; \frac{1}{2}\sum_{i=1}^{N}\|y_i - \hat{y}_i\|^2 \quad \text{subject to } y_i = z_{ij}, \; i = 1,\dots,N, \; j \in N_i, \qquad z_{ji} = z_{ij}$$
The Augmented Lagrange function is defined as follows:
$$H_3(y, z, \tau) = \frac{1}{2}\sum_{i=1}^{N}\|y_i - \hat{y}_i\|^2 + \sum_{i=1}^{N}\sum_{j\in N_i}\tau_{ij}^T(y_i - z_{ij}) + \frac{\nu}{2}\sum_{i=1}^{N}\sum_{j\in N_i}\|y_i - z_{ij}\|^2.$$
The ADMM solution repeats three steps until the tolerance is reached:
  • Compute $y_i$:
    $$y_i^{[p+1]} = (1 + \nu d_i)^{-1}\left\{ \hat{y}_i^{[k]} + \nu\sum_{j\in N_i} z_{ij}^{[p]} - \sum_{j\in N_i}\tau_{ij}^{[p]} \right\}.$$
  • Compute $z_{ij}$:
    $$z_{ij}^{[p+1]} = \frac{y_i^{[p+1]} + y_j^{[p+1]}}{2} + \frac{\tau_{ij}^{[p]} + \tau_{ji}^{[p]}}{2\nu}.$$
  • Update the Lagrange multipliers:
    $$\tau_{ij}^{[p+1]} = \tau_{ij}^{[p]} + \nu\,(y_i^{[p+1]} - z_{ij}^{[p+1]})$$
Herein, p indexes the ADMM iterations, and the output of this process is $y_i^{[k+1]} = y_i^{*[p+1]}$.
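The three updates above can be run as a standalone sketch on a hypothetical 4-node cycle with scalar $y_i$ (our own data). Since the graph is connected, projecting $\hat{y}$ onto C amounts to averaging the $\hat{y}_i$, which the iterates approach:

```python
import numpy as np

neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}   # hypothetical 4-node cycle
nu = 1.0                                                   # ADMM penalty parameter
y_hat = np.array([1.0, 2.0, 3.0, 6.0])                     # values to project, mean 3.0
y = y_hat.copy()
z = {(i, j): 0.0 for i in neighbors for j in neighbors[i]}
tau = {(i, j): 0.0 for i in neighbors for j in neighbors[i]}

for _ in range(500):
    for i in neighbors:                              # y-update
        d = len(neighbors[i])
        y[i] = (y_hat[i] + nu * sum(z[(i, j)] for j in neighbors[i])
                - sum(tau[(i, j)] for j in neighbors[i])) / (1 + nu * d)
    for (i, j) in z:                                 # z-update (symmetric in i, j)
        z[(i, j)] = (y[i] + y[j]) / 2 + (tau[(i, j)] + tau[(j, i)]) / (2 * nu)
    for (i, j) in tau:                               # multiplier update
        tau[(i, j)] += nu * (y[i] - z[(i, j)])
```

Every node only needs quantities from its neighbors, which is what makes this projection step implementable in a decentralized way.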
The distributed algorithm is illustrated in Algorithm 1.
The convergence of the ALADIN method has been analyzed for both non-convex and convex optimization problems in [25]. Lemma 3 in [25] proves that if the cost function $f(\alpha) = \frac{1}{2}\sum_{t=1}^{h}\sum_{i\in V}\sum_{j\in N_i}(\alpha_{t,j} - \alpha_{t,i})^2$ is twice continuously differentiable and $(\alpha_t^*, \lambda^*)$, $t = 1,\dots,h$, is a regular KKT point of problem (5), then, since the Hessian $B_t = \nabla_{\alpha_t}^2\left(\frac{1}{2}\alpha_t^T L \alpha_t\right) + \rho\,\Sigma_t \succ 0$ (as $\Sigma_t \succ 0$), there exist constants $\chi_1, \chi_2$ such that, for every point $(\alpha_t, \lambda)$ satisfying the convergence condition, the decoupled minimization problems (7) have unique local minimizers $\{y_t, t = 1,\dots,h\}$ satisfying $\|y_t - \alpha_t\| \le \chi_1\|\alpha_t - \alpha_t^*\| + \chi_2\|\lambda - \lambda^*\|$.
Moreover, if $\frac{1}{\mu} < O(\|y_t - \alpha_t\|)$ when solving the QP (11), with $(\alpha_t^*, \lambda^*)$ a regular KKT point, the error of the next iterates is bounded by $(\chi_1 + \chi_2)\frac{\omega}{2}\left(\chi_1\|\alpha_t - \alpha_t^*\| + \chi_2\|\lambda - \lambda^*\|\right)^2$. This is sufficient to prove local quadratic convergence of the algorithm, as $\chi_1, \chi_2$ are strictly positive constants. As a result, the method applies effectively to our setting, which is an equality-constrained non-convex optimization problem.
One thing to remark here is that in our study, the penalty parameters ρ , μ can be updated using the following rules:
$$\rho^{[k+1]} = \begin{cases}\iota_\rho\,\rho^{[k]} & \text{if } \rho^{[k]} < \rho_{max}\\ \rho^{[k]} & \text{otherwise}\end{cases}$$
$$\mu^{[k+1]} = \begin{cases}\iota_\mu\,\mu^{[k]} & \text{if } \mu^{[k]} < \mu_{max}\\ \mu^{[k]} & \text{otherwise}\end{cases}$$
where $\iota_\rho, \iota_\mu > 1$, and $\rho_{max} = 50$, $\mu_{max} = 30$ were chosen empirically to avoid numerical problems. Moreover, we can use the blockwise, damped Broyden–Fletcher–Goldfarb–Shanno (BFGS) update, which ensures positive definiteness of $B_t^{[k]}$ and thus preserves the convergence properties of ALADIN, as proposed in [25].

3.2. Retrieving the Non-Zero Laplacian Eigenvalues

As stated before, the set of eigenvalue candidates derived from Algorithm 1, denoted $S_1$, contains the set of non-zero distinct Laplacian eigenvalues $S_2$.
Let $\bar{x}_i = \frac{1}{N}\sum_{j=1}^{N}x_j(0)$ be the true average and let $\hat{\bar{x}}_i = \hat{x}_i(h)$ be the final consensus value reconstructed by running the finite-time iteration scheme as in (1). Following the idea of Proposition 3 in [27], we leave out one element of $S_1$ at a time, and the remaining elements of the set are used to reconstruct $\hat{x}_i(h)$. If $\hat{x}_i(h) = \bar{x}_i$ is still satisfied, then the left-out element is not a true Laplacian eigen-stepsize, and we can eliminate it from the set $S_1$. Otherwise, the left-out element corresponds to one of the non-zero distinct eigenvalues; we restore it in the set $S_1$ and mark it as an element of the set $S_2$. The procedure then continues with the next element until all elements have been tested.
The distributed non-zero Laplacian eigenvalues are described in Algorithm 2.
Algorithm 1. ALADIN-based Laplacian eigenvalues estimation
  • Initialization:
    • Number of nodes N, tolerance $\epsilon$, initial input-output pairs $\{x_i(0), \bar{x}_i\}$, $i = 1, 2, \dots, N$, where $\bar{x} = \frac{1}{N}\sum_{i=1}^{N}x_i(0)$ is retrieved from a standard average consensus algorithm.
    • Each node i , i = 1 , , N initializes:
      (a)
      random stepsizes α t , i ( 0 ) for t = 1 , , h = N 1 .
      (b)
      random Lagrange multipliers β t , i , η t , i , for t = 1 , , h and λ i , λ Q P , i .
      (c)
      Positive semi-definite matrices $\Sigma_t \in \mathbb{R}^{N\times N}$, learning rate b, penalty parameters $\rho, c, \mu$.
    • Set k = 0 ;
  • Repeat:
    • Set k : = k + 1 ,
    • Solve the decoupled NLP problem (7) for $t = 1, \dots, h = N - 1$:
      -
      Propagate Lagrange multipliers β t , i [ k ] for t = h , , 2 and i = 1 , , N :
      (a)
      Set $\delta_{h,i}^{[k]} = \beta_{t,i}^{[k]}$.
      (b)
      $\delta_{t-1,i}^{[k]} = \delta_{t,i}^{[k]} + \alpha_{t,i}^{[k]}\sum_{j\in N_i}(\delta_{t,j}^{[k]} - \delta_{t,i}^{[k]})$.
      -
      Finite-time average Consensus steps:
      $x_{t,i}^{[k]} = x_{t-1,i}^{[k]} + \alpha_{t,i}^{[k]}\sum_{j\in N_i}(x_{t-1,j}^{[k]} - x_{t-1,i}^{[k]})$.
      -
      Propagate the error e t , i [ k ] by setting e h , i [ k ] = x h , i [ k ] x ¯ i [ k ] :
      $e_{t-1,i}^{[k]} = e_{t,i}^{[k]} + \alpha_{t,i}\sum_{j\in N_i}(e_{t,j}^{[k]} - e_{t,i}^{[k]})$.
      -
      Update α t , i for t = 1 , , h :
      $\alpha_{t,i}^{[k+1]} = \alpha_{t,i}^{[k]} - b\sum_{j\in N_i}(\alpha_{t,j}^{[k]} - \alpha_{t,i}^{[k]}) - b\lambda_i - b\rho^{[k]}\sum_{j=1}^{N}\Sigma_t(i,j)\,(\alpha_{t,j}^{[k]} - y_{t,j}^{[k]}) + b\sum_{j\in N_i}(x_{t-1,j}^{[k]} - x_{t-1,i}^{[k]})\,\delta_{t,i}^{[k]} + b\sum_{j\in N_i}(x_{t-1,j}^{[k]} - x_{t-1,i}^{[k]})\,e_{t,i}^{[k]}$
      -
      Update NLP Lagrange multipliers β t , i for t = 1 , , h by (16).
    • Stopping criteria: if $\|\sum_{t=1}^{h}(\alpha_t - y_t)\| \le \epsilon$ and $\sum_{t=1}^{h}\alpha_t^T L\,\alpha_t \le \epsilon$ are simultaneously satisfied, then stop the optimization procedure.
    • Compute Gradient, Jacobian matrices, and Hessian matrices as in (8)–(10), respectively.
    • Solving the coupling QP problem (11) via solving the linear Equation (19) to define Δ α t , i * [ k ] , λ Q P , i * [ k ] .
    • Update λ , y ^ t , i :
      (a)
      λ i [ k + 1 ] = λ Q P , i * [ k ]
      (b)
      y ^ t , i [ k ] = α t , i [ k + 1 ] + Δ α t , i * [ k ]
      (c)
      Projecting y ^ t , i [ k ] onto the constraint C by solving an ADMM-based optimization subproblem (20) to derive y t , i [ k + 1 ]
Algorithm 2. Non-zero eigenvalues Estimation
  • Input: the set of stepsizes $S_1$ obtained from Algorithm 1, the input-output pairs $\{x_i(0), \bar{x}_i\}$, $i = 1,\dots,N$, and the threshold $\epsilon$.
  • Set $S_2 = \emptyset$, $S = S_1$.
  • Repeat: while $S \ne \emptyset$, pick an element $\alpha_t$ out of S.
    • Construct the average consensus iteration scheme from the remaining stepsizes to determine $\hat{x}_i(h)$, $i = 1,\dots,N$, as follows:
      $x_{t,i} = x_{t-1,i} + \alpha_{t,i}\sum_{j\in N_i}(x_{t-1,j} - x_{t-1,i}), \quad t = 1,\dots,h$
    • If $\|\bar{x}_i - \hat{\bar{x}}_i\| \le \epsilon$, then $\alpha_t$ is discarded, $S = S \setminus \{\alpha_t\}$, and we return to the Repeat step.
    • If $\|\bar{x}_i - \hat{\bar{x}}_i\| > \epsilon$, then $\alpha_t$ is included in $S_2$ and removed from the working set, $S = S \setminus \{\alpha_t\}$, and we return to the Repeat step.
  • Output: when S is empty, the non-zero distinct Laplacian eigenvalues are derived by taking the inverses of the elements of the set $S_2$.
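The leave-one-out test of Algorithm 2 can be sketched centrally on a hypothetical star graph whose true distinct non-zero eigenvalues are $\{1, 4\}$ (so the true stepsizes are 1 and 1/4), with one spurious candidate added; a candidate is spurious exactly when the remaining factors still reproduce the average:

```python
import numpy as np

# Hypothetical 4-node star graph; true distinct non-zero eigenvalues: 1 and 4
L = np.array([[ 3., -1., -1., -1.],
              [-1.,  1.,  0.,  0.],
              [-1.,  0.,  1.,  0.],
              [-1.,  0.,  0.,  1.]])
x0 = np.array([4.0, 0.0, 5.0, 3.0])
x_bar = np.full(4, x0.mean())

def reaches_average(stepsizes):
    """Apply the factors (I - alpha L) and test whether consensus is reached."""
    x = x0.copy()
    for a in stepsizes:
        x = x - a * (L @ x)
    return np.allclose(x, x_bar, atol=1e-8)

candidates = [1.0, 0.25, 0.5]          # 1/1 and 1/4 are true; 0.5 is spurious
kept = [a for a in candidates
        if not reaches_average([b for b in candidates if b != a])]
# leaving out a true stepsize breaks consensus; leaving out 0.5 does not
```

kept ends up as [1.0, 0.25]; inverting these stepsizes recovers the eigenvalues 1 and 4.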

3.3. Multiplicities Estimation

Now we turn to the last stage: estimating the multiplicities of the corresponding Laplacian eigenvalues. In [16], the authors proposed a linear integer programming problem to determine the multiplicities:
Proposition 1
([16]). Consider a connected undirected graph of N vertices with degree sequence $\{d_i\}$ and Laplacian matrix L, having $D = |S_2|$ distinct non-zero Laplacian eigenvalues $S_2$. Let $m \in \mathbb{Z}_+^{D}$ be the vector of the corresponding multiplicities, obtained by solving the integer program below:
$$\min_{m \in \mathbb{Z}_+^{D}} \; S_2^T m \quad \text{subject to } S_2^T m = \sum_{i=1}^{N} d_i, \quad \mathbf{1}^T m = N - 1, \quad m \in \mathbb{Z}_+^{D}$$
The proof was given in [16]. Since all the multiplicities m are positive integers, a Branch-and-Bound method has been deployed to derive m. Therefore, the problem (24) can be rewritten equivalently in linear integer programming form as follows:
$$\min_{m \in \mathbb{Z}_+^{D}} \; \left\| S_2^T m - \sum_{i=1}^{N} d_i \right\| \quad \text{subject to } \mathbf{1}^T m = N - 1, \quad m \in \mathbb{Z}_+^{D}$$
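A brute-force version of this integer program (our own sketch, replacing Branch-and-Bound) is enough for small cases; assuming the hypothetical 4-node star graph with degree sum 6 and $S_2 = \{1, 4\}$, the two linear constraints pin the multiplicities down uniquely:

```python
from itertools import product

N = 4
degree_sum = 6.0                # trace of the Laplacian = sum of degrees (assumed graph)
S2 = [1.0, 4.0]                 # distinct non-zero eigenvalues from Algorithm 2

# Enumerate positive-integer multiplicity vectors satisfying both constraints
solutions = [m for m in product(range(1, N), repeat=len(S2))
             if sum(m) == N - 1
             and abs(sum(mi * li for mi, li in zip(m, S2)) - degree_sum) < 1e-9]
```

solutions contains only (2, 1), i.e., $sp(L) = \{0^1, 1^2, 4^1\}$ for this assumed graph.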
The algorithm for this problem is described clearly in [16]. At this step, the whole Laplacian spectrum $sp(L)$ has been obtained.
In fact, the estimation problem can be converted into a convex form and solved effectively using the ADMM-based method proposed in [16]. However, the purpose of this study is to treat the non-convex optimization problem directly, through the promising ALADIN-based method.

4. Simulation Results

In this section, the efficiency of the proposed ALADIN-based method for estimating the Laplacian spectrum is evaluated by considering the following two case studies.
Firstly, it can be said that the Laplacian spectrum decides the performance of the network since it reveals its connectivity. For example, the robustness of the network can be estimated before operations start, so as to avoid interruptions during these operations and, as a result, enhance both the technical and economic benefits.
When monitoring the connection of a large network G*(V*, E*), one may face a numerical issue in step 3 of the ALADIN-based method when solving the linear system (19): the huge dimension of the obtained matrix leads to the common ill-conditioning problem in the matrix inversion.
In [16], the authors suggested partitioning the large network into M disjoint sub-networks U_ℓ, ℓ = 1, 2, …, M [29]. Let us define N_i* = { j ∈ V* : (i, j) ∈ E* } and its cardinality |N_i*| as the set of neighbors of node i and its degree in G*, respectively. Each sub-network is monitored by a super-node i, which knows the number N_ℓ of agents in the sub-network and the associated average information state x̄_ℓ. Node i ∈ U_ℓ is a super-node if it has at least one neighbor in a different subset, i.e., ∃ ℓ* ≠ ℓ s.t. N_i ∩ U_{ℓ*} ≠ ∅. Two sub-networks are considered connected if there exist edges linking at least two agents of these sub-networks; in that case their super-nodes are linked, as shown in Figure 2. Here, the network can be a social network, a power system network, a molecular network, etc.
Let G = (V, E) be the undirected graph representing a network with N = |V| super-nodes, which are the black nodes in Figure 2; G captures the interaction between the sub-networks of G*. The large network is robust if the partitions are strongly linked to each other and if the critical threshold is high enough [16]. Then, the Laplacian spectrum of the network of super-nodes G can monitor the connection of the large network G* via the robustness index.

4.1. Case Study 1

Let us consider a large network partitioned into 4 disjoint sub-networks, each containing only one super-node. These super-nodes interact with each other through the graph G = (V, E) depicted in Figure 3.
This network has the Laplacian eigenvalues sp(L) = {0, 4, 4, 4}. With the parameter of each super-node denoted as x(0) = {0.7417, 0.7699, 0.3216, 0.5466}, after deploying Algorithm 1 the set of α_t, t = 1, …, 3, is obtained. The node trajectories are depicted in Figure 4.
As can be seen, all α_t first execute the consensus problem at the beginning of the procedure and then turn to satisfying the constraint.
Figure 5 illustrates the convergence of the cost function $\frac{1}{2} \sum_{t=1}^{N-1} \alpha_t^T L \alpha_t$ according to the constraint $x(N-1) = \bar{x}\mathbf{1}$.
Furthermore, Algorithm 2 is applied to eliminate the unexpected eigenvalues. As a result, we obtain only one eigenvalue, λ = 1/0.25 = 4. In order to accomplish the Laplacian spectrum estimation procedure, we make use of Proposition 1 to pick out m = 3. Now, the entire Laplacian spectrum is achieved.
At this step, a robustness index such as the Kirchhoff index or the number of spanning trees can be calculated:
$$R_L = N \sum_{i=2}^{D+1} \frac{m_i}{\lambda_i} = 4 \cdot \frac{3}{4} = 3.$$
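Since the index depends only on the distinct non-zero eigenvalues and their multiplicities, it reduces to a one-line computation (the function name below is ours):

```python
def kirchhoff_index(N, eigs, mults):
    """Robustness index R_L = N * sum_i m_i / lambda_i over the distinct
    non-zero Laplacian eigenvalues."""
    return N * sum(m / lam for m, lam in zip(mults, eigs))

print(kirchhoff_index(4, [4], [3]))                    # 3.0  (case study 1)
print(kirchhoff_index(6, [1, 2, 3, 5], [1, 1, 2, 1]))  # ~14.2 (case study 2)
```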
Table 1, obtained after executing the proposed procedure, summarizes its results.
As can be seen in Figure 4, the procedure can be stopped at around iteration 90. In order to assess the robustness of the whole large network, it is necessary to define the critical threshold introduced in [16].
Next, we implement a comparison with the Lagrange-based method described in [1] by considering Case study 2.

4.2. Case Study 2

Let us consider a 6-node network described in Figure 6.
It is known that for this topology, the Laplacian spectrum is s p ( L ) = { 0 , 1 , 2 , 3 , 3 , 5 } .
Let us define the same initial information state for each node at time t = 0, x(0) = {0.5832, 0.74, 0.2348, 0.7350, 0.9706, 0.8669}, for both methods. By using a standard consensus algorithm [26], the consensus value x̄ = 0.6884 can be easily inferred.
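A standard constant-stepsize consensus iteration of the kind in [26] recovers this average from the listed initial states. The 6-node path graph below is only a hypothetical stand-in, since the topology of Figure 6 is not reproduced here; any connected topology with a small enough stepsize yields the same limit:

```python
import numpy as np

x0 = np.array([0.5832, 0.74, 0.2348, 0.7350, 0.9706, 0.8669])
A = np.diag(np.ones(5), 1)
A = A + A.T                         # 6-node path graph (illustrative topology)
L = np.diag(A.sum(axis=1)) - A      # graph Laplacian
eps = 0.25                          # constant stepsize, eps < 2 / lambda_max

x = x0.copy()
for _ in range(2000):
    x = x - eps * (L @ x)           # x_{t+1} = (I - eps * L) x_t

print(np.round(x, 4))               # every entry converges to 0.6884
```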
It can be seen in Figure 7 that the Lagrange-based method in [1] takes a long time, first to achieve consensus and then to track the constraint, in order to obtain the stepsizes α_t, t = 1, …, N−1.
Now, with the same initial α_t(0), t = 1, …, 5, in the iterative procedure of the proposed method, the convergence trajectories of the α_t obtained by Algorithm 1 are illustrated in Figure 8.
Figures 7 and 8 highlight a significant advantage of our proposed method, since its number of iterations is much smaller than that of the Lagrange-based method. The consensus term is executed from the start of the procedure, and the constraint term is then enforced to find the expected values of the stepsizes α_t = {1, 0.5, 0.1027, 0.2, 0.3333}. Obviously, since the proposed algorithm is operated with the number of α_t being N−1 = 5, the next stage is needed to eliminate the residual values. As can be seen clearly, the execution time of the ALADIN-based method is significantly shorter than that of the Lagrange-based method, as shown in Figure 9.
Figure 9 shows that the ALADIN-based method approaches the destined values earlier than the Lagrange-based method. Furthermore, in order to obtain the non-zero Laplacian eigenvalues of the given network, Algorithm 2 is carried out, yielding the vector of stepsizes α_t = {1, 0.5, 0.2, 0.3333}.
Finally, by applying the Branch-and-Bound based method to solve Problem 1, proposed in [16], we obtain the vector of multiplicities m = {1, 1, 1, 2} and hence deduce the Laplacian spectrum sp(L) = {0, 1, 2, 3, 3, 5}.
Table 2 summarizes the results obtained after executing the proposed procedure and the Lagrange-based method.
Notice that the proposed method gives much better results than the Lagrange-based method. Observe that at iteration 33,950, the Lagrange-based method still yields a set S_1 that does not satisfy the constraint.
Recently, besides Laplacian spectrum estimation based on optimization approaches, some works have approximated the Laplacian spectrum via iterative dynamic processes (taking the random walk as an example). However, from our point of view, the dominant contribution of our proposed method is the possibility of implementation in a decentralized manner. Moreover, another method to speed up the convergence rate of the Laplacian spectrum estimation procedure is the ADMM-based method proposed in [16]. The important step in that work is to adequately re-parameterize the non-convex formulation into a convex one. It is hard to compare the efficiency of the ADMM-based method with that of the proposed method, since our study focuses on the non-convex formulation.

5. Conclusions

In this paper, the authors have proposed a promising ALADIN-based method to find the Laplacian spectrum of a given dynamic network in a distributed way. First and foremost, the study assumes that the number of agents N in the network can be obtained by the random walk algorithm executed in the configuration step. Briefly speaking, the proposed procedure is divided into three stages. The first stage determines N−1 Laplacian eigenvalues; the second retrieves the non-zero distinct Laplacian eigenvalues; and the third estimates the corresponding multiplicities. The ALADIN-based method is appropriate for carrying out the factorization of the average consensus matrix into factors of Laplacian-based consensus matrices while minimizing the disagreement between neighbors on the values of α_{t,i}. The Laplacian eigenvalues can then be defined by taking the inverses of these α_t. Herein, the authors are explicitly interested in dealing with non-convex optimization problems. From the simulation evaluation, it can be concluded that the proposed method converges much faster than the gradient descent method in [1] for the estimation of the Laplacian spectrum.

Author Contributions

Conceptualization, methodology, funding acquisition, T.-M.-D.T.; writing—original draft preparation, T.-M.-D.T. and L.N.A.; resources, L.N.A.; writing—review and editing, T.-M.-D.T. and N.C.N.D. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by The University of Danang - University of Science and Technology, code number of Project: T2019-02-09.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Lemma 2

From now on, in order to avoid confusion between the iterative step h of the consensus algorithm x(h) and the iterative step k of the optimization procedure, let us denote x(h) by x_h. From Section 2.1, we have:
$$\begin{aligned} x(h) - \bar{x} &= \prod_{i=h}^{t+1} W_i \, W_t \, x_{t-1} - \bar{x} = \prod_{i=h}^{t+1} W_i \,\big(I - \mathrm{diag}(\alpha_t) L\big)\, x_{t-1} - \bar{x} \\ &= \prod_{i=h}^{t+1} W_i \, x_{t-1} - \bar{x} - \prod_{i=h}^{t+1} W_i \, \mathrm{diag}(\alpha_t)\, L\, x_{t-1} \\ &= \prod_{i=h}^{t+1} W_i \, x_{t-1} - \bar{x} - \Big( x_{t-1}^T L^T \odot \prod_{i=h}^{t+1} W_i \Big)\, \alpha_t \end{aligned}$$
with ⊙ being the Khatri–Rao product. By employing the property of the Khatri–Rao product (given a matrix A ∈ R^{I×F} and two vectors b, d ∈ R^{F×1}, then A diag(d) b = (b^T ⊙ A) d) in [1], the derivative of the Lagrange function is described as follows:
$$\frac{\partial H_{1,t}}{\partial \alpha_t} = L \alpha_t + \lambda + \rho \Sigma_t (\alpha_t - y_t) - \mathrm{diag}(L x_{t-1}) \prod_{i=t+1}^{h} W_i \, \beta_t - c \, \mathrm{diag}(L x_{t-1}) \prod_{i=t+1}^{h} W_i \,\big( x(h) - \bar{x} \big)$$
Since $\delta_h = \beta_t$ and $\delta_{h-1} = W_h \delta_h$, then $\prod_{i=t+1}^{h} W_i \, \beta_t = \delta_t$.
Analogously, $e_h = x(h) - \bar{x}$ and $e_t = W_{t+1} e_{t+1}$.
On the other hand, $x_{t-1} - x_t = \mathrm{diag}(\alpha_t) L x_{t-1}$; then $L x_{t-1} = \mathrm{diag}^{-1}(\alpha_t) (x_{t-1} - x_t)$.
Therefore, $$\frac{\partial H_{1,t}}{\partial \alpha_t} = L \alpha_t + \lambda + \rho \Sigma_t (\alpha_t - y_t) - \mathrm{diag}^{-1}(\alpha_t)\, \mathrm{diag}(x_{t-1} - x_t)\, \delta_t - c \, \mathrm{diag}^{-1}(\alpha_t)\, \mathrm{diag}(x_{t-1} - x_t)\, e_t.$$

Appendix B. KKT Conditions of QP (11)

$$\frac{\partial H_2}{\partial \Delta\alpha_t} = B_t \Delta\alpha_t + g_t + \lambda_{QP} + C_t^T \eta_t = 0$$
$$\frac{\partial H_2}{\partial s} = \lambda - \lambda_{QP} + \mu s = 0$$
$$\frac{\partial H_2}{\partial \eta_t} = C_t \Delta\alpha_t = 0$$
$$\frac{\partial H_2}{\partial \lambda_{QP}} = \sum_{t=1}^{h} \Delta\alpha_t + \sum_{t=1}^{h} (\alpha_t - y_t) - s = 0$$
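These conditions have the standard structure of an equality-constrained quadratic program. As a small numerical illustration (the matrices below are made up for the example and are not the paper's B_t, g_t, C_t), solving the KKT system directly verifies stationarity and primal feasibility:

```python
import numpy as np

# min 1/2 x^T B x + g^T x  subject to  C x = 0, solved via its KKT system.
B = np.array([[4.0, 1.0], [1.0, 3.0]])   # positive definite Hessian block
g = np.array([1.0, -2.0])                # gradient term
C = np.array([[1.0, 1.0]])               # one equality constraint

K = np.block([[B, C.T], [C, np.zeros((1, 1))]])
sol = np.linalg.solve(K, np.concatenate([-g, np.zeros(1)]))
x, eta = sol[:2], sol[2:]

assert np.allclose(B @ x + g + C.T @ eta, 0)  # stationarity condition
assert np.allclose(C @ x, 0)                  # primal feasibility
```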

References

  1. Tran, T.; Kibangou, A.Y. Consensus-based Distributed Estimation of Laplacian Eigenvalues of Undirected Graphs. In Proceedings of the European Control Conference (ECC), Zurich, Switzerland, 17–19 July 2013; pp. 227–232. [Google Scholar]
  2. Schwab, K. The Fourth Industrial Revolution; World Economic Forum: Cologny, Switzerland, 2016. [Google Scholar]
  3. Godsil, C.; Royle, G. Algebraic Graph Theory; Springer: Berlin, Germany, 2001. [Google Scholar]
  4. Merris, R. Laplacian matrices of a graph: A survey. Linear Algebra Appl. 1994, 197, 143–176. [Google Scholar] [CrossRef] [Green Version]
  5. Fiedler, M. Algebraic connectivity of graphs. Czechoslov. Math. J. 1973, 23, 298–305. [Google Scholar]
  6. Olfati-Saber, R.; Fax, J.A.; Murray, R.M. Consensus and cooperation in networked multi-agent systems. Proc. IEEE 2007, 95, 215–233. [Google Scholar] [CrossRef] [Green Version]
  7. Olfati-Saber, R.; Murray, R. Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans. Autom. Control 2004, 49, 1520–1533. [Google Scholar] [CrossRef] [Green Version]
  8. Gulzar, M.; Rizvi, S.; Javed, M.Y.; Munir, U.; Asif, H. Multi-Agent Cooperative Control Consensus: A Comparative Review. Electronics 2018, 7, 22. [Google Scholar] [CrossRef] [Green Version]
  9. Kempton, L.C.; Herrmann, G.; di Bernardo, M. Distributed optimisation and control of graph Laplacian eigenvalues for robust consensus via an adaptive multilayer strategy. Int. J. Robust Nonlinear Control 2017, 27, 1499–1525. [Google Scholar] [CrossRef] [Green Version]
  10. Xiao, L.; Boyd, S. Fast linear iterations for distributed averaging. Syst. Control Lett. 2004, 53, 65–78. [Google Scholar] [CrossRef]
  11. Kibangou, A. Finite-time average consensus based protocol for distributed estimation over awgn channels. In Proceedings of the IEEE Conference on Decision and Control (CDC), Orlando, FL, USA, 12–15 December 2011. [Google Scholar]
  12. Kibangou, A. Graph Laplacian based matrix design for finite-time distributed average consensus. In Proceedings of the American Control Conference (ACC), Montreal, QC, Canada, 27–29 June 2012; pp. 1901–1906. [Google Scholar]
  13. Abbas, W.; Egerstedt, M. Robust graph topologies for networked systems. In Proceedings of the 3rd IFAC Workshop on Distributed Estimation and Control in Networked Systems (NecSys), Santa Barbara, CA, USA, 13–14 September 2012; pp. 85–90. [Google Scholar]
  14. Wu, J.; Barahona, M.; Tan, Y.; Deng, H. Spectral Measure of Structural Robustness in Complex Networks. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2011, 41, 1244–1252. [Google Scholar] [CrossRef]
  15. Klein, D.; Randić, M. Resistance distance. J. Math. Chem. 1993, 12, 81–95. [Google Scholar] [CrossRef]
  16. Tran, T.; Kibangou, A.Y. Collaborative Network Monitoring by Means of Laplacian Spectrum Estimation and Average Consensus. Int. J. Control Autom. Syst. 2019, 17, 1826–1837. [Google Scholar] [CrossRef]
  17. Zhao, C.; He, J.; Cheng, P.; Chen, J. Consensus-based energy management in smart grid with transmission losses and directed communication. IEEE Trans. Smart Grid 2017, 8, 2049–2061. [Google Scholar] [CrossRef]
  18. Guo, L.; Zhao, C.; Low, S.H. Graph Laplacian Spectrum and Primary Frequency Regulation. In Proceedings of the 2018 IEEE Conference on Decision and Control (CDC), Miami Beach, FL, USA, 17–19 December 2018. [Google Scholar]
  19. Franceschelli, M.; Martini, S.; Egerstedt, M.; Bicchi, A.; Giua, A. Observability and controllability verification in multi-agent systems through decentralized Laplacian spectrum estimation. In Proceedings of the IEEE Conference on Decision and Control (CDC), Atlanta, GA, USA, 15–17 December 2010; pp. 5775–5780. [Google Scholar]
  20. Cooper, C.; Radzik, T.; Siantos, Y. Estimating network parameters using random walks. In Proceedings of the 2012 Fourth International Conference on Computational Aspects of Social Networks (CASoN), Sao Carlos, Brazil, 21–23 November 2012; pp. 33–40. [Google Scholar]
  21. Franceschelli, M.; Gasparri, A.; Giua, A.; Seatzu, C. Decentralized Estimation of Laplacian Eigenvalues in Multi-Agent Systems. Automatica 2013, 49, 1031–1036. [Google Scholar] [CrossRef] [Green Version]
  22. Sahai, T.; Speranzon, A.; Banaszuk, A. Hearing the cluster of a graph: A distributed algorithm. Automatica 2012, 48, 15–24. [Google Scholar] [CrossRef] [Green Version]
  23. Kibangou, A.Y.; Commault, C. Decentralized Laplacian Eigenvalues Estimation and Collaborative Network Topology Identification. In Proceedings of the 3rd IFAC Workshop on Distributed Estimation and Control in Networked Systems (NecSys’12), Santa Barbara, CA, USA, 14–15 September 2012; pp. 7–12. [Google Scholar]
  24. Tran, T.; Kibangou, A. Distributed estimation of Laplacian eigenvalues via constrained consensus optimization problems. Syst. Control Lett. 2015, 80, 56–62. [Google Scholar] [CrossRef]
  25. Houska, B.; Frasch, J.; Diehl, M. An Augmented Lagrangian Based Algorithm for Distributed NonConvex Optimization. SIAM J. Optim. 2016, 26, 1101–1127. [Google Scholar] [CrossRef]
  26. Xiao, L.; Boyd, S.; Kim, S. Distributed Average Consensus with Least-mean-square Deviation. J. Parallel Distrib. Comput. 2007, 67, 33–46. [Google Scholar] [CrossRef] [Green Version]
  27. Tran, T.; Kibangou, A. Distributed Estimation of Graph Laplacian Eigenvalues by the Alternating Direction of Multipliers Method. In Proceedings of the 19th World Congress of the International Federation of Automatic Control, Cape Town, South Africa, 24–29 August 2014. [Google Scholar]
  28. Chung, F.R.K. Spectral Graph Theory; American Mathematical Society: Providence, RI, USA, 1997. [Google Scholar]
  29. Martin, N.; Frasca, P.; Canudas-De-Wit, C. Large-scale network reduction towards scale-free structure. IEEE Trans. Netw. Sci. Eng. 2018, 14, 1–12. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Structure of Section 3.
Figure 2. Network partitioned in 4 subsets. Super-nodes are depicted in black.
Figure 3. A network constituted by 4 nodes.
Figure 4. Convergence trajectories of α_t.
Figure 5. Convergence of the cost function according to its constraint.
Figure 6. A new network constituted by 6 nodes.
Figure 7. α_t convergence trajectories implemented by the Lagrange-based method.
Figure 8. α_t convergence trajectories implemented by the proposed method.
Figure 9. Convergence of the cost function according to its constraint.
Table 1. Results of the proposed procedure for the 4-node topology.

S_1          {0.25, 0.5919, 0.5543}
S_2          {0.25}
m            3
Iterations   130
R_L          3
Table 2. Results of the two methods for the 6-node topology.

             ALADIN-Based Method             Lagrange-Based Method
S_1          {1, 0.5, 0.1027, 0.2, 0.3333}   {0.9992, 0.2, 0.5001, 0.0895, 0.3333}
S_2          {1, 0.5, 0.2, 0.3333}           {0.9992, 0.2, 0.5001, 0.3333}
m            {1, 1, 1, 2}
Iterations   33,950                          372,800
R_L          14.2

Tran, T.-M.-D.; Ngoc An, L.; Doan, N.C.N. Concensus-Based ALADIN Method to Faster the Decentralized Estimation of Laplacian Spectrum. Appl. Sci. 2020, 10, 5625. https://doi.org/10.3390/app10165625