Preprint
Article

A Distributed Algorithm for Reaching Average Consensus in Unbalanced Tree Networks


A peer-reviewed article of this preprint also exists. This version is not peer-reviewed.

Submitted: 16 August 2024
Posted: 19 August 2024

Abstract
In this paper, a distributed algorithm for reaching average consensus is proposed for multi-agent systems with a tree communication graph, when the edge weight distribution is unbalanced. First, the problem is introduced as a key topic underlying core algorithms in several modern scenarios. Then, the corresponding solution is proposed as a finite-time algorithm, which can be included in any application as a preliminary setup routine and is well suited to be integrated with other adaptive setup routines, thus making the proposed solution useful in several practical applications. A special focus is devoted to the integration of the proposed method with a recent Laplacian eigenvalue allocation algorithm, and to the implementation of the overall approach in a wireless sensor network framework. Finally, a worked example is provided, showing the significance of this approach for reaching a more precise average consensus in uncertain scenarios.
Keywords: 
Subject: Engineering  -   Control and Systems Engineering

1. Introduction

In recent decades, research on the collective behavior of a team of agents through iterative local interactions has been intense, due to the significance of applications in a wide range of fields, from computer science to social networks, from complex technological networks (e.g. the electric smart grid) to biological ecosystems [1]. In each framework, a fundamental property for reaching agreement among all agents through local interactions is the consensus condition, which is attained when all agents recursively update a local variable using local information so that all participants asymptotically reach a common value [2].
One special yet fundamental consensus condition regards the value of such an agreement as a function of the initial conditions of all nodes; in particular, agreement on the average of all agents' initial conditions is known in the literature as average consensus [3,4]. This special condition is adopted in a large number of applications in distributed estimation, wireless sensor network (WSN) algorithms, robotics, and many more [5].
Average consensus is the natural equilibrium condition for a multi-agent system whenever the communication graph is undirected and unweighted or, more generally, whenever it is weight-balanced, and this special condition allows several basic goals to be attained in a distributed fashion [1].
Unfortunately, one fundamental limitation when dealing with classical average consensus through undirected communication graphs with the traditional symmetric edge weight assignment is a limited convergence rate, which decreases significantly as the number of participants grows [5]. This condition has the drawback of a large time span for convergence and, in turn, it may even make consensus impossible in practice when the unavoidable presence of communication delays, computation errors or agents' faults over a large time span is accounted for [6].
However, the practical significance of this condition for the large number of applications referenced above pushed the scientific community towards a significant effort to derive sophisticated algorithms for general unbalanced graphs, for example through the use of additional state variables [7], dedicated atomic transactions between pairs of neighbors [8], a chain of two integrators coupled with a distributed estimator [9], average tracking with incomplete measurements through a distributed averaging filter and a decentralized tracking controller [10], and recently also considering signed networks [11].
In the last few years, advances have been made in adopting the strategy of assigning asymmetric weights to neighboring edges [12]. Indeed, it was found that a proper choice of asymmetric weights can significantly improve the convergence rate, even over the optimal symmetric design [13], and it can make the convergence rate independent of the size of the graph, which is in striking contrast with the fundamental limitations of symmetric weights [1]. On the other hand, the choice of asymmetric weights makes the network converge to a different function of the initial conditions of the nodes instead of their average.
In general, there are several scenarios where edge weight asymmetry should be taken into account, other than a design choice. Indeed, empirical research clearly shows that real-world networks usually have a large heterogeneity in the intensity or capacity of the connections (and hence the weights of the links) [14]. These asymmetries are related to defects and uncertainties in the network, and they result in an inaccurate and biased consensus condition (namely, the asymptotic values of network nodes are only roughly close to each other). The technique proposed in this paper can be useful in these scenarios, as it would produce a much more precise consensus condition.
The approach proposed in this paper can be combined with an asymmetric design of the edge weights as in [15] to achieve the ambitious goal of average consensus with prescribed dynamics. Analogous techniques have been adopted to achieve prescribed-time consensus [16]. The opportunity of setting the Laplacian eigenvalues through an appropriate choice of edge weights also has a beneficial effect on security; indeed, most of the algorithms designed to monitor the network evolution to exclude the presence of edge faults [17] or malicious nodes [18] require knowledge of the eigenstructure of the network. It is worth noting that malicious attacks can destabilize the entire system when there is access to at least one of its eigenvalues [19].
The theoretical backbone of the proposed approach is the distributed estimation of the Perron vector, which allows for a scaling of the initial conditions to reach average consensus, thus combining the beneficial effects of asymmetric edge weight assignment with the wide practical applicability of average consensus. It is worth remarking that the use of the Perron vector to achieve a more precise average estimation is discussed in [20]. The results developed in this paper are inspired by [21], as described in detail in Section 3.
The paper is organized as follows. After a brief description of the notation, in Section 2 some motivating examples of the research pursued in this paper are described, and two facets of the problem are stated. In Section 3, the main theoretical results are provided for two graph topologies, namely path graphs and star graphs, which are preparatory to the general inductive solution, whose theoretical backbone is given in Section 4. These results are then exploited in Section 5, where the iterative algorithm is deduced through a worked example. A final simulation is provided in Section 6, where the application of the results to an uncertain WSN computing the average value of an environmental quantity shows the effectiveness of the proposed results in achieving a more precise consensus value.

Notation

Here we briefly describe the notation adopted throughout the paper. We denote by $\mathbb{N}$, $\mathbb{R}$, $\mathbb{R}_{\geq 0}$, $\mathbb{R}_+$ the sets of natural numbers, real numbers, non-negative real numbers, and positive real numbers. We denote by $\mathbf{0}_d$, $d \in \mathbb{N}$, the $d$-dimensional vector with all components equal to 0 and by $\mathbf{1}_d$ the vector whose components are all 1. $\mathbf{0}_{d_1 \times d_2}$, $d_1, d_2 \in \mathbb{N}$, is the $d_1 \times d_2$ matrix made of all zero entries. Vector $e_i$ denotes the $i$-th canonical vector, e.g. $e_1 = [1 \; 0 \; \dots \; 0]^\top$. Vectors are denoted by bold letters. For a vector $v \in \mathbb{R}^d$ we denote by $v_i$ the $i$-th component of $v$, so that $v = [v_1 \; \dots \; v_d]^\top$. The spectral radius of $A$ is the maximum modulus of its eigenvalues, namely $\rho(A) = \max_{i \in \{1, \dots, n\}} \{ |\lambda_i| \text{ such that } \lambda_i \in \Lambda(A) \}$.
A nonnegative (positive) matrix $A \in \mathbb{R}^{n \times n}$ satisfies $[A]_{ij} \geq 0$ ($[A]_{ij} > 0$). A permutation matrix $P$ is a matrix obtained by permuting the rows of the $n \times n$ identity matrix. A nonnegative matrix $A \in \mathbb{R}^{n \times n}$ is reducible if there exists a permutation matrix $P$ such that the matrix $P A P^\top$ is in block triangular form, i.e.:
$$P A P^\top = \begin{bmatrix} B_{11} & B_{12} \\ 0 & B_{22} \end{bmatrix},$$
where $B_{11}$, $B_{22}$ are square matrices. An irreducible matrix is a matrix that is not reducible. A nonnegative matrix $A \in \mathbb{R}^{n \times n}$ is primitive if there exists a positive integer $m$ such that $A^m > 0$.
A graph $\mathcal{G} = (V, E)$ is made of a vertex set $V = \{1, 2, \dots, n\}$ and an edge set $E \subseteq V \times V$. It is undirected if $(j, i) \in E$ if and only if $(i, j) \in E$. The neighbor set of a node $i \in V$ is defined as $N_i = \{ j \in V \,|\, (i, j) \in E \}$. A path connecting vertex $j_1$ with $j_{k+1}$ is a sequence of vertices $j_1, j_2, \dots, j_{k+1}$ such that $(j_\ell, j_{\ell+1}) \in E$ for all $\ell \in \{1, \dots, k\}$. An undirected graph $\mathcal{G}$ is connected if there is a path connecting $i$ and $j$ for every pair of vertices $i, j \in V$ with $i \neq j$.
An undirected graph $\mathcal{T}$ with no cycles is called a tree if it is connected, otherwise it is called a forest. For a vertex $w$ of a tree $\mathcal{T}$, $\mathcal{T} \setminus w$ denotes the forest obtained from $\mathcal{T}$ by deleting $w$, and it is made of $|N_w|$ trees. For any neighbor $v$ of $w$, $v \in N_w$, $\mathcal{T}_v$ denotes the subtree of $\mathcal{T} \setminus w$ having $v$ as a vertex. The center (or Jordan center) of a graph is the set of all vertices $u \in V$ for which the greatest distance $d(u, v)$ to the other vertices $v \in V$ is minimal. Equivalently, it is the set of vertices with eccentricity equal to the graph's radius. Every tree has a center consisting of one vertex or two adjacent vertices, namely the middle vertex or middle two vertices of every longest path.
For a graph $\mathcal{G} = (V, E)$, the adjacency matrix $A \in \mathbb{R}^{n \times n}$ is the matrix with $[A]_{ij} = 1$ if $(i, j) \in E$ and $[A]_{ij} = 0$ if $(i, j) \notin E$; the weighted adjacency matrix is defined as $[A_w]_{ij} = \alpha_{ij}$ with $\alpha_{ij} > 0$ if $(i, j) \in E$ and $[A_w]_{ij} = 0$ if $(i, j) \notin E$. The weighted Laplacian matrix is built upon the rule $[L_w]_{ij} = -[A_w]_{ij}$ if $i \neq j$, $[L_w]_{ii} = \sum_{j=1, j \neq i}^{n} [A_w]_{ij}$. By construction, $L_w \mathbf{1} = \mathbf{0}$, i.e. the Laplacian is a zero row-sum matrix. In the following, given a graph $\mathcal{G}$, $\mathcal{L}_w(\mathcal{G})$ denotes the set of all $n \times n$ zero row-sum $\mathcal{G}$-structured square matrices. There are connections between the properties of nonnegative matrices and the graphs encoding their zero-nonzero pattern, as the following result shows [1]:
Lemma 1.1. 
A matrix $A \in \mathbb{R}^{n \times n}$ is irreducible if and only if the graph $\mathcal{G}_A$ encoding its zero-nonzero pattern is strongly connected.
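As a minimal illustration of the notation above (a sketch of mine, not part of the original manuscript, with hypothetical function names and a toy graph), the following Python snippet builds a weighted adjacency matrix, the corresponding zero row-sum Laplacian, and tests irreducibility via the strong-connectivity characterization of Lemma 1.1.

```python
# Sketch (not from the paper): weighted adjacency A_w, zero row-sum Laplacian L_w,
# and an irreducibility test based on Lemma 1.1.
import numpy as np

def weighted_laplacian(n, weighted_edges):
    """weighted_edges: iterable of (i, j, alpha_ij) with 1-based node labels."""
    A_w = np.zeros((n, n))
    for i, j, a in weighted_edges:
        A_w[i - 1, j - 1] = a
    L_w = np.diag(A_w.sum(axis=1)) - A_w   # [L_w]_ii = sum_j alpha_ij, [L_w]_ij = -alpha_ij
    return A_w, L_w

def is_irreducible(A):
    """True iff the graph of the zero-nonzero pattern of A is strongly connected."""
    n = A.shape[0]
    reach = np.linalg.matrix_power(np.eye(n) + (A > 0), n - 1)   # walks of length <= n-1
    return bool(np.all(reach > 0))

# toy path 1 - 2 - 3 with asymmetric weights
edges = [(1, 2, 2.0), (2, 1, 0.5), (2, 3, 1.0), (3, 2, 3.0)]
A_w, L_w = weighted_laplacian(3, edges)
print(np.allclose(L_w @ np.ones(3), 0))    # True: zero row sums, L_w 1 = 0
print(is_irreducible(A_w))                 # True: undirected connected graph
```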

2. Motivating Examples

In this Section, we introduce the abstract mathematical framework of our problem, namely average consensus for a multi-agent system running a distributed iterative algorithm to accomplish a global task, usually in the form of network coordination or synchronization. After that, several applications in modern fields adopting the described framework are reported.

2.1. Multi-Agent Systems Running Consensus Algorithms

Here, we introduce the mathematical framework of multi-agent systems running consensus algorithm [1,2].
This setting is made of a set of $N$ nodes, each holding a variable $x_i(t)$, $i \in \{1, \dots, N\}$, updated as $\dot{x}_i(t) = u_i(t)$, where $u_i(t)$ is set by each node $i$ to update its value. Some nodes of the team can communicate with each other, and two neighboring nodes $i$, $j$ are able to exchange their values. In this setting, each node $i$ applies the input $u_i(t) = \sum_{j \in N_i} k_{ij} \,(x_j(t) - x_i(t))$, where the gains $k_{ij}$ are set by node $i$. The resulting evolution of the team can be effectively computed using the aggregate vector $x \in \mathbb{R}^N$ defined by $[x]_i(t) = x_i(t)$ and updated as $\dot{x}(t) = -L_\kappa x(t)$, which gives rise to the Laplacian flow [1]:
$$x(t) = \Phi(t)\, x(0), \qquad \Phi(t) = e^{-L_\kappa t} = \sum_{i=0}^{\infty} \frac{(-L_\kappa)^i t^i}{i!}, \tag{1}$$
for some $L_\kappa \in \mathcal{L}_w(\mathcal{G})$. The network evolution is dictated by the spectrum of $L_\kappa$ [22]. It is worth mentioning that the design of asymmetric gains was recently proposed [13,23,24] as a key feature to improve the convergence rate, and it was recently explored also in [25,26,27].
Other multi-agent processes are better described by discrete updates of a local quantity, namely $x(t+1) = (I - \gamma L_\kappa)\, x(t)$, where $\gamma \in \mathbb{R}_+$ is called the coupling factor and should be chosen sufficiently small to keep the system stable [1]; analogously, in this case one gets the following global evolution:
$$x(t) = \Phi(t)\, x(0), \qquad \Phi(t) = (I - \gamma L_\kappa)^t. \tag{2}$$
Both in the case of (1) and of (2), the matrix $\Phi(t)$ shows some interesting properties, inherited from the facts that $L_\kappa \mathbf{1} = \mathbf{0}$ and $w_{L_\kappa}^\top L_\kappa = \mathbf{0}^\top$, namely [1]:
$$\Phi\, \mathbf{1} = \mathbf{1}, \qquad w_{L_\kappa}^\top \Phi = w_{L_\kappa}^\top, \qquad \Phi(t) \text{ is positive if } \mathcal{G} \text{ is strongly connected}. \tag{3}$$
Moreover, even if the eigenvalues of the matrix $\Phi(t)$ are related to those of $L_\kappa$ according to $\lambda_\Phi = e^{-\lambda_{L_\kappa} t}$ for system (1) and $\lambda_\Phi = (1 - \gamma \lambda_{L_\kappa})^t$ when dealing with (2), the eigenvectors of (1) and (2) do coincide.
As a direct consequence of (3), systems (1) and (2) are called consensus networks, namely for every choice of the initial conditions $x_i(0)$, $i = 1, 2, \dots, N$, there exists $\alpha \in \mathbb{R}$ such that:
$$\lim_{t \to +\infty} x_i(t) = \alpha, \qquad \forall i \in \{1, \dots, N\}, \tag{4}$$
or, equivalently, $\lim_{t \to +\infty} x(t) = \alpha \mathbf{1}_N$, $\alpha \in \mathbb{R}$. The constant $\alpha$ is called the consensus value (or collective decision) [2] for system (1), corresponding to the given initial conditions. The consensus value is a key tool in many applications; indeed it encapsulates a global piece of information, equal to:
$$\alpha = \frac{w_L^\top x(0)}{w_L^\top \mathbf{1}_N}, \tag{5}$$
where $w_L$ is a left eigenvector of $\Phi(t)$ associated with the unitary eigenvalue and satisfying the normalization condition $w_L^\top \mathbf{1} = 1$ [28]. In a wide number of applications it holds $w_L = \frac{1}{N}\mathbf{1}$, and Equation (5) simplifies to the average of the initial conditions of the whole network, i.e.:
$$\alpha = \frac{\sum_{i=1}^{N} x_i(0)}{N}, \tag{6}$$
and we refer to this condition as average consensus.
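To make Equations (2)–(5) concrete, the following hedged sketch (not from the paper; the graph, weights and gains below are arbitrary) simulates the discrete flow on a small unbalanced path and checks that the nodes agree on $\alpha = w_L^\top x(0) / (w_L^\top \mathbf{1})$, which coincides with the plain average only in the weight-balanced case.

```python
# Sketch (not from the paper): discrete consensus x(t+1) = (I - gamma*L_k) x(t); the
# asymptotic value equals w_L^T x(0) / (w_L^T 1), Equation (5), and differs in general
# from the plain average of the initial conditions.
import numpy as np

A_w = np.array([[0.0, 2.0, 0.0],
                [0.5, 0.0, 1.0],
                [0.0, 3.0, 0.0]])                     # asymmetric edge weights
L_k = np.diag(A_w.sum(axis=1)) - A_w
gamma = 0.2                                            # small enough for stability
x0 = np.array([10.0, -4.0, 6.0])

x = x0.copy()
for _ in range(2000):
    x = x - gamma * (L_k @ x)                          # Laplacian-based local update

eigval, eigvec = np.linalg.eig(L_k.T)
w_L = np.real(eigvec[:, np.argmin(np.abs(eigval))])    # left eigenvector of L_k for lambda = 0
alpha = (w_L @ x0) / (w_L @ np.ones(3))                # consensus value, Equation (5)
print(x)                                               # all components approach alpha
print(alpha, np.mean(x0))                              # alpha != mean(x0) for unbalanced weights
```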
Average consensus became a very popular and significant tool in many applications, such as sensor fusion and data aggregation, distributed optimization and machine learning, collective motion, vehicle coordination, and more [1]. However, one main issue of multi-agent systems running standard consensus protocols is a low convergence rate, which is related to the algebraic connectivity of the graph $\mathcal{G}$ and is often a key limiting feature of the original consensus protocol [1].
The fundamental mathematical background for our analysis is the renowned Perron-Frobenius Theorem [1]:
Theorem 2.1 
(Perron–Frobenius). Let $A \in \mathbb{R}^{n \times n}$ be an irreducible nonnegative matrix. Then:
  • $A$ has a positive eigenvalue equal to $\rho(A)$, and it is a simple eigenvalue of $A$.
  • The left and right eigenvectors associated with the eigenvalue $\rho(A)$ are positive.
It is worth remarking that, for any nonnegative matrix $A \in \mathbb{R}^{n \times n}$, $\rho(A)$ is an eigenvalue of $A$ and the right and left eigenvectors associated with $\lambda = \rho(A)$ can be selected nonnegative. If $A$ is additionally irreducible, then the multiplicity of $\lambda = \rho(A)$ is one and the corresponding left and right eigenvectors are strictly positive and uniquely determined (up to a scalar factor).
In general, the positive left and right eigenvectors associated with the eigenvalue $\rho(A)$ are called the left and right Perron vectors. However, considering Equation (3), vector $\mathbf{1}$ is always a right eigenvector of $\Phi$, so that in the following we refer to the left eigenvector associated with $\lambda = \rho(\Phi) = 1$ as the Perron vector of $\Phi$ (e.g. vector $w_{L_\kappa}$ in Equation (3)).
In the remainder of the paper, for a given Laplacian matrix $L_w$ related to a connected graph $\mathcal{G}$, we denote by $w_L$ its left eigenvector corresponding to $\lambda = 0$. If the matrix $L_w$ is the generator of a dynamical flow $x(t) = e^{-L_w t} x(0)$, then $w_L$ is the Perron vector of the positive matrix $e^{-L_w t}$, and with a slight abuse of notation we refer to $w_L$ as the Perron vector of $L_w$.
The importance of the Perron vector in such applications is that it characterizes the asymptotic distribution [29], so it allows one to detect aggregations and clusters or, conversely, to quantify the influence of each node on the asymptotic consensus value.
In the following, we provide some recent technological applications where the above framework has been exploited to design decentralized average consensus algorithms.

2.2. Some Examples of Modern Applications

The above setting is the abstract representation of several technological modern applications, where average consensus is the key methodology for distributed algorithm architectures, as described in the following.
In [5], the authors survey the scientific literature on distributed estimation and control applications using linear consensus algorithms. In the paper, there are interesting examples on how some classical estimation and control problems can be rephrased as the average value of some suitable quantities, thus allowing for being efficiently computed in a distributed fashion through average consensus algorithms.
Average consensus is the key strategy for sharing global information through local interactions. In the field of wireless sensor networks, clock synchronization is a fundamental problem and it can be efficiently solved through consensus [30]. In [31] the same synchronization problem is solved through average consensus in IoT applications. Within the above application scenarios, average consensus is also adopted for the estimation of an environmental quantity [8].
In the field of electric grids, smart power grids and renewable energy, average consensus-based algorithms are widely adopted for the following fundamental issues: (1) generator synchronization [32]; (2) economic dispatch [33]; (3) clock synchronization [34].
Finally, several tasks of great value in the area of robotic networks can be accomplished through the exploitation of average consensus. Among others, it is worth mentioning the rendez-vous and, conversely, the deployment of a team of robots, which is fundamental for surveillance and, in general, for coverage purposes. Also attaining and keeping a formation can be executed through properly designed average consensus protocols, without resorting to a supervisory device [35].

2.3. Problem Statement

Inspired by the applications discussed in the previous paragraph, we are now ready to state the problems that we address in the remainder of this paper.
Problem Statement 1. Given a weighted Laplacian matrix $L_\kappa \in \mathcal{L}_w(\mathcal{T})$ of a tree graph $\mathcal{T}$, compute the (left) Perron vector of the positive matrix $\Phi(t)$, either in the case of (1) or of (2).
However, in view of the motivations discussed above and considering the applications described so far, Problem Statement 1 can be rephrased in a more application-oriented engineering fashion, as follows.
Problem Statement 2. Given a weighted tree graph $\mathcal{T} = (V, E)$ and a multi-agent system (either (1) or (2)) running a consensus algorithm, so that each node asymptotically reaches the value $\alpha$ as in (4) with $\alpha$ equal to (5), compute a set of scaling factors $\zeta_i$, $i = 1, \dots, N$, such that the state vector $\chi(t)$ defined as $(\chi(t))_i = \frac{(x(t))_i}{\zeta_i}$ converges to $\chi(t) \xrightarrow[t \to \infty]{} \alpha \mathbf{1}$ with $\alpha = \frac{\sum_{i=1}^{N} x_i(0)}{N}$, thus solving the average consensus problem for the original system.
The above two problems are indeed two facets of the same argument, the first being stated in a more mathematical fashion and the second in an engineering style. It is worth noting that Statement 2 can also be effectively encapsulated into the Problem Statement of [15], which is related to Laplacian eigenvalue allocation through a proper asymmetric edge weight assignment, so that the two problems can be integrated with each other as follows:
Problem Statement 2-bis. Given a tree graph $\mathcal{T} = (V, E)$, and one node triggering the distributed algorithm in [36] to allocate the Laplacian spectrum and the grounded Laplacian spectrum to, respectively, $\{\lambda_i\}$ and $\{\mu_i\} \subset \mathbb{R}$ (satisfying $0 < \mu_1 < \lambda_2 < \dots < \mu_{n-1} < \lambda_n$), compute a set of scaling factors $\zeta_i$, $i = 1, \dots, N$, such that the state vector $\chi(t)$ defined as $(\chi(t))_i = \frac{(x(t))_i}{\zeta_i}$ converges to $\chi(t) \xrightarrow[t \to \infty]{} \alpha \mathbf{1}$ with $\alpha = \frac{\sum_{i=1}^{N} x_i(0)}{N}$, thus solving the average consensus problem with prescribed dynamics under the asymmetric weight distribution.
Remark. 
The connection and equivalence of the different Problem Statements above is clearly established by considering Equations (5) and (6). Namely, the asymptotic value of $\chi(t)$ is $\chi(t) \xrightarrow[t \to \infty]{} \alpha \mathbf{1}$ with $\alpha = \frac{w^\top \chi(0)}{w^\top \mathbf{1}}$, so that, considering that $(\chi(0))_i = \frac{(x(0))_i}{\zeta_i}$ and imposing:
$$\frac{w_1}{w^\top \mathbf{1}}\, \chi_1(0) + \frac{w_2}{w^\top \mathbf{1}}\, \chi_2(0) + \dots = \frac{1}{N}\, x_1(0) + \frac{1}{N}\, x_2(0) + \dots \tag{7}$$
one has that the system evolution reaches average consensus for any set of initial conditions $x_i(0)$ if the scaling factors $\zeta_i$ are set equal to
$$\zeta_i = \frac{N\, w_i}{\mathbf{1}^\top w}. \tag{8}$$
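The following short sketch (my own illustration, not from the paper, with an arbitrary positive Perron vector and arbitrary initial conditions) verifies numerically that the scaling factors of Equation (8) indeed map the biased consensus value (5) back to the average (6).

```python
# Sketch (not from the paper): the scaling factors of Equation (8) recover the average.
import numpy as np

w = np.array([1.5, 1.0, 0.7, 1.0, 0.7, 0.7, 1.5, 1.0, 1.0])    # positive left Perron vector
x0 = np.array([3.0, -1.0, 4.0, 1.5, -9.0, 2.6, 5.0, -3.5, 8.0])
N = w.size

zeta = N * w / (np.ones(N) @ w)           # Equation (8)
chi0 = x0 / zeta                          # rescaled initial conditions
alpha = (w @ chi0) / (w @ np.ones(N))     # consensus value reached on chi, Equation (5)
print(np.isclose(alpha, x0.mean()))       # True: average consensus is recovered
```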
Remark. 
Even if it is not explicitly stated in the above Problem Statements, a further feature of the proposed algorithm is that it can be implemented in a distributed fashion, namely each component of the scaling vector can be computed through the use of local data only.

3. Problem Solution

We start by seeking the solution for some special topologies, namely path graphs and star graphs, which constitute the topology of the most peripheral branches of any tree, as well as widely adopted graph topologies in general [1]. We then extend the solution to generic tree graphs.
In the case of path and star graphs, we are able to provide the full characterization of the Perron vector by exploiting some recent mathematical results concerning structured matrix manipulation and inversion.
On the other hand, the general solution for tree graphs is inductive, and it can be implemented through an iterative algorithm.

3.1. Solution for Path Graphs

The Laplacian matrix of a path graph is a special case of tridiagonal matrix, so the theoretical background for the construction of the solution is based on [37], adapted to our special matrix structure. The following statement is proved in [37], and it is our main mathematical tool to obtain the explicit solution for path graphs.
Proposition 3.1 
([37]). Consider the tridiagonal matrix:
$$L = \begin{bmatrix} a_1 & b_1 & & 0\\ c_1 & a_2 & b_2 & \\ & \ddots & \ddots & b_{n-1}\\ 0 & & c_{n-1} & a_n \end{bmatrix} \tag{9}$$
and assume that $L$ is nonsingular. Build the two sequences:
$$\theta_i = a_i\,\theta_{i-1} - b_{i-1} c_{i-1}\,\theta_{i-2}, \quad \text{for } i = 2, \dots, n, \quad \text{with initial conditions } \theta_0 = 1 \text{ and } \theta_1 = a_1, \tag{10}$$
and
$$\varphi_i = a_i\,\varphi_{i+1} - b_i c_i\,\varphi_{i+2}, \quad \text{for } i = n-1, \dots, 1, \quad \text{with initial conditions } \varphi_{n+1} = 1 \text{ and } \varphi_n = a_n; \tag{11}$$
then the inverse of the matrix $L$ can be computed using the following formula:
$$[L^{-1}]_{ij} = \begin{cases} (-1)^{i+j}\, b_i \cdots b_{j-1}\, \theta_{i-1}\, \varphi_{j+1} / \theta_n, & \text{if } i < j,\\ \theta_{i-1}\, \varphi_{j+1} / \theta_n, & \text{if } i = j,\\ (-1)^{i+j}\, c_j \cdots c_{i-1}\, \theta_{j-1}\, \varphi_{i+1} / \theta_n, & \text{if } i > j. \end{cases} \tag{12}$$
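As a sanity check of Proposition 3.1, the following sketch (my own transcription of the recursions (10)–(11) and of formula (12), not code from [37]) computes the inverse of a small tridiagonal matrix and compares it with a direct inversion.

```python
# Sketch (not from the paper): Usmani-style inverse of a tridiagonal matrix, Eq. (12).
import numpy as np

def tridiag_inverse(a, b, c):
    """a: diagonal (len n), b: superdiagonal (len n-1), c: subdiagonal (len n-1)."""
    n = len(a)
    theta = np.zeros(n + 1); theta[0], theta[1] = 1.0, a[0]
    for i in range(2, n + 1):                              # recursion (10)
        theta[i] = a[i-1]*theta[i-1] - b[i-2]*c[i-2]*theta[i-2]
    phi = np.zeros(n + 2); phi[n+1], phi[n] = 1.0, a[n-1]
    for i in range(n - 1, 0, -1):                          # recursion (11)
        phi[i] = a[i-1]*phi[i+1] - b[i-1]*c[i-1]*phi[i+2]
    Linv = np.zeros((n, n))
    for i in range(1, n + 1):                              # formula (12)
        for j in range(1, n + 1):
            if i < j:
                Linv[i-1, j-1] = (-1)**(i+j) * np.prod(b[i-1:j-1]) * theta[i-1]*phi[j+1] / theta[n]
            elif i == j:
                Linv[i-1, j-1] = theta[i-1]*phi[j+1] / theta[n]
            else:
                Linv[i-1, j-1] = (-1)**(i+j) * np.prod(c[j-1:i-1]) * theta[j-1]*phi[i+1] / theta[n]
    return Linv

a = np.array([2.0, 3.0, 2.5, 4.0]); b = np.array([-1.0, -0.5, -2.0]); c = np.array([-1.5, -1.0, -0.5])
L = np.diag(a) + np.diag(b, 1) + np.diag(c, -1)
print(np.allclose(tridiag_inverse(a, b, c), np.linalg.inv(L)))   # True
```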
It is possible to exploit the above result for the computation of the Perron vector in the case of path graphs, as follows.
Proposition 3.2. 
Let $\alpha_{i,j} \in \mathbb{R}_+$, and consider the Laplacian matrix:
$$L_w = \begin{bmatrix} \alpha_{1,2} & -\alpha_{1,2} & & & 0\\ -\alpha_{2,1} & a_{2,2} & -\alpha_{2,3} & & \\ & \ddots & \ddots & \ddots & \\ & & & & -\alpha_{(n-1),n}\\ 0 & & & -\alpha_{n,(n-1)} & \alpha_{n,(n-1)} \end{bmatrix} \tag{13}$$
with diagonal entries $a_{i,i} = \alpha_{i,(i-1)} + \alpha_{i,(i+1)}$ for $i = 2, \dots, n-1$. The corresponding (left) Perron vector is equal to:
$$w_L = \kappa \begin{bmatrix} 1 & \dfrac{\alpha_{12}}{\alpha_{21}} & \dfrac{\alpha_{12}\,\alpha_{23}}{\alpha_{21}\,\alpha_{32}} & \cdots & \displaystyle\prod_{j=2}^{i} \frac{\alpha_{(j-1),j}}{\alpha_{j,(j-1)}} & \cdots & \displaystyle\prod_{j=2}^{n} \frac{\alpha_{(j-1),j}}{\alpha_{j,(j-1)}} \end{bmatrix}^\top, \qquad \kappa \in \mathbb{R} \setminus \{0\}. \tag{14}$$
Proof. 
The proof is conducted by direct inspection of the relation $w^\top L_w = \mathbf{0}^\top$, with $L_w$ as in (13), for a nonzero $w \in \mathbb{R}^n$. Consider the partition of $L_w$ as follows:
$$L_w = \begin{bmatrix} \alpha_{12} & L_{1*}\\ L_{*1} & L_1 \end{bmatrix} \tag{15}$$
with $L_{1*} = \begin{bmatrix} -\alpha_{12} & 0 & \cdots & 0 \end{bmatrix}$, $L_{*1} = \begin{bmatrix} -\alpha_{21} & 0 & \cdots & 0 \end{bmatrix}^\top$ and
$$L_1 = \begin{bmatrix} \alpha_{2,1} + \alpha_{2,3} & -\alpha_{2,3} & & & 0\\ -\alpha_{3,2} & a_{3,3} & \ddots & & \\ & \ddots & \ddots & \ddots & \\ & & & & -\alpha_{(n-1),n}\\ 0 & & & -\alpha_{n,(n-1)} & \alpha_{n,(n-1)} \end{bmatrix}, \tag{16}$$
so that, taking $w = \begin{bmatrix} w_1 \\ \tilde{w}_1 \end{bmatrix}$ with $w_1 \in \mathbb{R}$, $\tilde{w}_1 \in \mathbb{R}^{n-1}$, one has:
$$w^\top L_w = \mathbf{0}^\top \;\;\Longleftrightarrow\;\; \begin{cases} w_1\, \alpha_{12} + \tilde{w}_1^\top L_{*1} = 0,\\ w_1\, L_{1*} + \tilde{w}_1^\top L_1 = \mathbf{0}^\top; \end{cases} \tag{17}$$
from the second relation it is possible to parametrize $w$ as:
$$w^\top = \kappa \begin{bmatrix} 1 & -L_{1*} L_1^{-1} \end{bmatrix}, \qquad \kappa \in \mathbb{R} \setminus \{0\}. \tag{18}$$
Considering that $L_{1*} = -\alpha_{12}\, e_1^\top$, it is possible to find an explicit expression for (18) by computing the first row of $L_1^{-1}$ through (12), to get:
$$L_{1*} L_1^{-1} = -\alpha_{12} \begin{bmatrix} \dfrac{1}{\alpha_{21}} & \dfrac{\alpha_{23}}{\alpha_{21}\,\alpha_{32}} & \cdots & \dfrac{\alpha_{23} \cdots \alpha_{(n-1),n}}{\alpha_{21}\,\alpha_{32} \cdots \alpha_{n,(n-1)}} \end{bmatrix}, \tag{19}$$
so that, putting together (18) with (19), Equation (14) follows. □
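A quick numerical confirmation of Proposition 3.2 is reported below (a sketch of mine with arbitrary positive weights, not part of the paper): the closed-form vector (14) is checked to be a left null vector of the path Laplacian (13).

```python
# Sketch (not from the paper): check of Proposition 3.2 on a 4-node weighted path.
import numpy as np

alpha = {(1, 2): 2.0, (2, 1): 0.5, (2, 3): 1.0, (3, 2): 4.0, (3, 4): 0.8, (4, 3): 1.6}
n = 4

L_w = np.zeros((n, n))
for (i, j), a in alpha.items():
    L_w[i - 1, j - 1] = -a
np.fill_diagonal(L_w, -L_w.sum(axis=1))            # zero row-sum Laplacian, Eq. (13)

w = [1.0]
for i in range(2, n + 1):                          # cumulative products of Equation (14)
    w.append(w[-1] * alpha[(i - 1, i)] / alpha[(i, i - 1)])
w = np.array(w)

print(np.allclose(w @ L_w, 0))                     # True: w is the (left) Perron vector
```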

3.2. Solution for Star Graphs

The explicit parametric structure of the Perron vector can be easily computed by direct inspection in the case of star graphs. Indeed,
Proposition 3.3. 
Let $\mathcal{G}$ be a star graph with $n$ nodes (i.e. $n-1$ rays), and suppose that the central node is labeled as the first node. Let $L_w$ be its corresponding Laplacian matrix for any choice of edge weights, so that:
$$L_w = \begin{bmatrix} a_{11} & -\alpha_{12} & \cdots & -\alpha_{1n}\\ -\alpha_{21} & \alpha_{21} & \cdots & 0\\ \vdots & & \ddots & \vdots\\ -\alpha_{n1} & 0 & \cdots & \alpha_{n1} \end{bmatrix} \tag{20}$$
where
$$a_{11} = \sum_{j=2}^{n} \alpha_{1j}. \tag{21}$$
Then, the corresponding Perron vector is equal to:
$$w_L = \kappa \begin{bmatrix} 1 & \dfrac{\alpha_{12}}{\alpha_{21}} & \dfrac{\alpha_{13}}{\alpha_{31}} & \cdots & \dfrac{\alpha_{1n}}{\alpha_{n1}} \end{bmatrix}^\top, \qquad \kappa \in \mathbb{R} \setminus \{0\}. \tag{22}$$
Proof. 
The proof is conducted by direct inspection of the relation $w^\top L_w = \mathbf{0}^\top$, with $L_w$ as in (20), for a nonzero $w \in \mathbb{R}^n$. Consider the partition of $L_w$ as:
$$L_w = \begin{bmatrix} a_{11} & L_{1*}\\ L_{*1} & L_1 \end{bmatrix} \tag{23}$$
with $L_{1*} = \begin{bmatrix} -\alpha_{12} & -\alpha_{13} & \cdots & -\alpha_{1n} \end{bmatrix}$, $L_{*1} = \begin{bmatrix} -\alpha_{21} & -\alpha_{31} & \cdots & -\alpha_{n1} \end{bmatrix}^\top$ and
$$L_1 = \begin{bmatrix} \alpha_{2,1} & 0 & \cdots & 0\\ 0 & \alpha_{3,1} & & \vdots\\ \vdots & & \ddots & 0\\ 0 & \cdots & 0 & \alpha_{n,1} \end{bmatrix} = \mathrm{diag}\big(\{\alpha_{j,1}\}_{j=2,\dots,n}\big), \tag{24}$$
so that, taking $w = \begin{bmatrix} w_1 \\ \tilde{w}_1 \end{bmatrix}$ with $w_1 \in \mathbb{R}$, $\tilde{w}_1 \in \mathbb{R}^{n-1}$, it is possible to analogously parametrize $w$ as:
$$w^\top = \kappa \begin{bmatrix} 1 & -L_{1*} L_1^{-1} \end{bmatrix}, \qquad \kappa \in \mathbb{R} \setminus \{0\}, \tag{25}$$
and, considering (24) and (25), one has:
$$L_{1*} L_1^{-1} = -\begin{bmatrix} \dfrac{\alpha_{12}}{\alpha_{21}} & \dfrac{\alpha_{13}}{\alpha_{31}} & \cdots & \dfrac{\alpha_{1n}}{\alpha_{n1}} \end{bmatrix}. \tag{26}$$
Putting together Equation (26) with Equation (25), Equation (22) follows. □
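An analogous check (again my own sketch, with arbitrary weights, not from the paper) can be run for Proposition 3.3 on a star graph, verifying that the vector (22) annihilates the Laplacian (20) from the left.

```python
# Sketch (not from the paper): check of Proposition 3.3 on a 4-node star with center 1.
import numpy as np

alpha_out = np.array([2.0, 0.5, 3.0])      # alpha_{12}, alpha_{13}, alpha_{14}
alpha_in  = np.array([1.0, 2.0, 0.6])      # alpha_{21}, alpha_{31}, alpha_{41}
n = alpha_out.size + 1

L_w = np.zeros((n, n))
L_w[0, 1:], L_w[0, 0] = -alpha_out, alpha_out.sum()           # center row, Eqs. (20)-(21)
L_w[1:, 0] = -alpha_in
L_w[np.arange(1, n), np.arange(1, n)] = alpha_in              # leaf diagonal entries

w = np.concatenate(([1.0], alpha_out / alpha_in))             # Equation (22) with kappa = 1
print(np.allclose(w @ L_w, 0))                                # True
```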

4. A Recursive General Solution for Tree Graphs

In this Section, we derive a solution for a general tree graph $\mathcal{T}$. The structure of the solution in this case is not in a closed form as for path and star graphs, but it is based on an inductive process over $N$, so that the resulting algorithmic solution is recursive. The proposed approach is inspired by the subdivision of the computation of the Perron vector into smaller problems pursued in [21], and it perfectly fits the eigenvalue allocation algorithm in [15], so it can be integrated into the final algorithmic solution, thus solving the ambitious problem of eigenvalue allocation for average consensus problems.
Proposition 4.1. 
Consider a tree graph $\mathcal{T}$, let $L_w$ be its corresponding Laplacian, and let $\ell = |N_1|$. Let $\mathcal{T}_j$, $j = 1, \dots, \ell$, represent the subtrees of $\mathcal{T}$ obtained after the removal of node 1, each made of $n_j = |\mathcal{T}_j|$ nodes, $j = 1, \dots, \ell$, and finally let $L_w^j \in \mathbb{R}^{n_j \times n_j}$ be the weighted Laplacian of $\mathcal{T}_j$.
If we denote by $w^\top = \begin{bmatrix} w_1 & \tilde{w}_1^\top & \cdots & \tilde{w}_\ell^\top \end{bmatrix}$ a Perron vector for $\mathcal{T}$, where $w_1 \in \mathbb{R}$ and each $\tilde{w}_j \in \mathbb{R}^{n_j}$, then it holds that:
  • Each $\tilde{w}_j$ is a Perron vector of $L_w^j$.
  • The first component of each $\tilde{w}_j$ satisfies:
    $$(\tilde{w}_j)_1 = w_1\, \frac{\alpha_{1 i_j}}{\alpha_{i_j 1}}, \tag{27}$$
    where $i_j \in N_1$ denotes the neighbor of node 1 belonging to $\mathcal{T}_j$.
Proof. 
Let $N_1 = \{i_1, \dots, i_\ell\}$, so that a convenient labeling allows writing the Laplacian matrix, without loss of generality, as:
$$L_w = \begin{bmatrix} a_{11} & -\alpha_{1 i_1} e_1^\top & \cdots & -\alpha_{1 i_\ell} e_1^\top\\ -\alpha_{i_1 1} e_1 & \bar{L}_{i_1} & \cdots & \mathbf{0}_{n_1 \times n_\ell}\\ \vdots & & \ddots & \vdots\\ -\alpha_{i_\ell 1} e_1 & \mathbf{0}_{n_\ell \times n_1} & \cdots & \bar{L}_{i_\ell} \end{bmatrix} \tag{28}$$
where $\bar{L}_{i_j} = L_{\mathcal{T}_{i_j}} + \alpha_{i_j 1}\, e_1 e_1^\top$, with $L_{\mathcal{T}_{i_j}}$ being the weighted Laplacian of $\mathcal{T}_{i_j}$, and $a_{11} = \sum_{k=1}^{\ell} \alpha_{1 i_k}$.
Now take a vector $w$ partitioned conformably to the Laplacian (28), so that $w^\top = \begin{bmatrix} w_1 & \tilde{w}_1^\top & \cdots & \tilde{w}_\ell^\top \end{bmatrix}$, with the dimension of each $\tilde{w}_j$ matching that of the related subtree $\mathcal{T}_{i_j}$, as detailed in the above statement. By direct inspection of the relation $w^\top L_w = \mathbf{0}_n^\top$ one gets:
$$\begin{cases} \displaystyle\sum_{k=1}^{\ell} \alpha_{1 i_k}\, w_1 - \sum_{k=1}^{\ell} \alpha_{i_k 1}\, (\tilde{w}_k)_1 = 0,\\[4pt] -w_1\, \alpha_{1 i_1}\, e_1^\top + \tilde{w}_1^\top \bar{L}_{i_1} = \mathbf{0}_{n_1}^\top,\\ \quad\vdots\\ -w_1\, \alpha_{1 i_\ell}\, e_1^\top + \tilde{w}_\ell^\top \bar{L}_{i_\ell} = \mathbf{0}_{n_\ell}^\top, \end{cases} \tag{29}$$
and, considering that $\tilde{w}_j^\top \bar{L}_{i_j} = \tilde{w}_j^\top \big( L_{\mathcal{T}_{i_j}} + \alpha_{i_j 1}\, e_1 e_1^\top \big) = \tilde{w}_j^\top L_{\mathcal{T}_{i_j}} + \alpha_{i_j 1}\, (\tilde{w}_j)_1\, e_1^\top$, Equation (29) can be rewritten as:
$$\begin{cases} \displaystyle\sum_{k=1}^{\ell} \alpha_{1 i_k}\, w_1 - \sum_{k=1}^{\ell} \alpha_{i_k 1}\, (\tilde{w}_k)_1 = 0,\\[4pt] \big( -w_1\, \alpha_{1 i_1} + \alpha_{i_1 1}\, (\tilde{w}_1)_1 \big)\, e_1^\top + \tilde{w}_1^\top L_{\mathcal{T}_{i_1}} = \mathbf{0}_{n_1}^\top,\\ \quad\vdots\\ \big( -w_1\, \alpha_{1 i_\ell} + \alpha_{i_\ell 1}\, (\tilde{w}_\ell)_1 \big)\, e_1^\top + \tilde{w}_\ell^\top L_{\mathcal{T}_{i_\ell}} = \mathbf{0}_{n_\ell}^\top, \end{cases} \tag{30}$$
so that, assuming that each $\tilde{w}_j$ satisfies $\tilde{w}_j^\top L_{\mathcal{T}_{i_j}} = \mathbf{0}^\top$, one has:
$$\begin{cases} \displaystyle\sum_{k=1}^{\ell} \alpha_{1 i_k}\, w_1 - \sum_{k=1}^{\ell} \alpha_{i_k 1}\, (\tilde{w}_k)_1 = 0,\\[4pt] -w_1\, \alpha_{1 i_1} + \alpha_{i_1 1}\, (\tilde{w}_1)_1 = 0,\\ \quad\vdots\\ -w_1\, \alpha_{1 i_\ell} + \alpha_{i_\ell 1}\, (\tilde{w}_\ell)_1 = 0, \end{cases} \tag{31}$$
and hence the statement follows. □
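The following sketch (mine, not from the paper, with hypothetical weights on a small 5-node tree) illustrates Proposition 4.1 numerically: the blocks of the Perron vector of $\mathcal{T}$ obtained after removing node 1 are Perron vectors of the corresponding subtrees, and their first components satisfy (27).

```python
# Sketch (not from the paper): Proposition 4.1 on the tree with edges {1,2}, {1,3},
# {3,4}, {3,5}; removing node 1 leaves the subtrees {2} and {3,4,5}.
import numpy as np

def laplacian(n, alpha):
    L = np.zeros((n, n))
    for (i, j), a in alpha.items():
        L[i - 1, j - 1] = -a
    np.fill_diagonal(L, -L.sum(axis=1))
    return L

alpha = {(1, 2): 2.0, (2, 1): 1.0, (1, 3): 0.5, (3, 1): 1.5,
         (3, 4): 1.0, (4, 3): 2.0, (3, 5): 3.0, (5, 3): 0.75}
L_w = laplacian(5, alpha)

eigval, eigvec = np.linalg.eig(L_w.T)
w = np.real(eigvec[:, np.argmin(np.abs(eigval))])
w = w / w[0]                                                    # normalize so that w_1 = 1

w2, w3 = w[[1]], w[[2, 3, 4]]                                   # blocks for subtrees {2}, {3,4,5}
L_T3 = laplacian(5, {k: v for k, v in alpha.items() if 1 not in k})[2:, 2:]
print(np.allclose(w3 @ L_T3, 0))                                # w3 is a Perron vector of T_3
print(np.isclose(w2[0], w[0] * alpha[(1, 2)] / alpha[(2, 1)]))  # Equation (27)
print(np.isclose(w3[0], w[0] * alpha[(1, 3)] / alpha[(3, 1)]))  # Equation (27)
```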
The previous result provides the main tools for setting up an algorithm which solves Problem 1 iteratively, though some remarks are needed to fully describe the proposed approach. In the next Section we exploit the results gained in this Section and deduce an algorithm for the solution of Problems 1 and 2, thus achieving the main goal of this paper.

5. A Distributed Algorithm for the General Solution

The results of Section 4 are useful to establish an iterative procedure for the distributed computation of the Perron vector. In the following, we solve Problem 1 for a worked example, which is taken from the simulation results of [36]. Then we provide the general algorithm, based on the experience gained through the example.

5.1. An Illustrative Example

Consider the weighted tree graph in Figure 1, and let Node 1 be the reference node for the algorithm execution. Adopting the notation of this paper, which is reported in Figure 1(b), we take a vector $w \in \mathbb{R}^9$ partitioned as:
$$w^\top = \begin{bmatrix} w_1 & \tilde{w}_2^\top & \tilde{w}_4^\top & \tilde{w}_7^\top \end{bmatrix}, \tag{32}$$
where
$$\tilde{w}_2^\top L_{\mathcal{T}_2} = \mathbf{0}^\top, \qquad \tilde{w}_4^\top L_{\mathcal{T}_4} = \mathbf{0}^\top, \qquad \tilde{w}_7^\top L_{\mathcal{T}_7} = \mathbf{0}^\top, \tag{33}$$
and the first component of each subvector must satisfy:
$$(\tilde{w}_2)_1 = w_1\, \frac{\alpha_{12}}{\alpha_{21}}, \qquad (\tilde{w}_4)_1 = w_1\, \frac{\alpha_{14}}{\alpha_{41}}, \qquad (\tilde{w}_7)_1 = w_1\, \frac{\alpha_{17}}{\alpha_{71}}. \tag{34}$$
Consider now that each subtree has the topology of either a path or a star graph, so that we can resort to the results of Section 3.1 and Section 3.2. Specifically, $\mathcal{T}_2$ is a path of two nodes (or, equivalently, a star with one ray), $\mathcal{T}_4$ is a path made of 3 nodes and, finally, $\mathcal{T}_7$ is a three-node, two-ray star, thus:
$$\tilde{w}_2 = \kappa_1 \begin{bmatrix} 1\\ \dfrac{\alpha_{23}}{\alpha_{32}} \end{bmatrix}, \qquad \tilde{w}_4 = \kappa_2 \begin{bmatrix} 1\\ \dfrac{\alpha_{45}}{\alpha_{54}}\\ \dfrac{\alpha_{45}\,\alpha_{56}}{\alpha_{54}\,\alpha_{65}} \end{bmatrix}, \qquad \tilde{w}_7 = \kappa_3 \begin{bmatrix} 1\\ \dfrac{\alpha_{78}}{\alpha_{87}}\\ \dfrac{\alpha_{79}}{\alpha_{97}} \end{bmatrix}. \tag{35}$$
Combining (35) with (34), one has that the structure of $w$ is as follows:
$$w = w_1 \begin{bmatrix} 1 & \dfrac{\alpha_{12}}{\alpha_{21}} & \dfrac{\alpha_{12}\,\alpha_{23}}{\alpha_{21}\,\alpha_{32}} & \dfrac{\alpha_{14}}{\alpha_{41}} & \dfrac{\alpha_{14}\,\alpha_{45}}{\alpha_{41}\,\alpha_{54}} & \dfrac{\alpha_{14}\,\alpha_{45}\,\alpha_{56}}{\alpha_{41}\,\alpha_{54}\,\alpha_{65}} & \dfrac{\alpha_{17}}{\alpha_{71}} & \dfrac{\alpha_{17}\,\alpha_{78}}{\alpha_{71}\,\alpha_{87}} & \dfrac{\alpha_{17}\,\alpha_{79}}{\alpha_{71}\,\alpha_{97}} \end{bmatrix}^\top \tag{36}$$
for a nonzero $w_1 \in \mathbb{R}$. We now seek the value of $w_1$ matching the normalization condition $w^\top \mathbf{1}_N = 1$, which leads to:
$$1 = w_1 \Bigg[ 1 + \frac{\alpha_{12}}{\alpha_{21}} \underbrace{\left( 1 + \frac{\alpha_{23}}{\alpha_{32}} \right)}_{c_{\mathcal{T}_2}} + \frac{\alpha_{14}}{\alpha_{41}} \underbrace{\left( 1 + \frac{\alpha_{45}}{\alpha_{54}} + \frac{\alpha_{45}\,\alpha_{56}}{\alpha_{54}\,\alpha_{65}} \right)}_{c_{\mathcal{T}_4}} + \frac{\alpha_{17}}{\alpha_{71}} \underbrace{\left( 1 + \frac{\alpha_{78}}{\alpha_{87}} + \frac{\alpha_{79}}{\alpha_{97}} \right)}_{c_{\mathcal{T}_7}} \Bigg], \tag{37}$$
so that (36) together with
$$w_1 = \left( 1 + \frac{\alpha_{12}}{\alpha_{21}}\, c_{\mathcal{T}_2} + \frac{\alpha_{14}}{\alpha_{41}}\, c_{\mathcal{T}_4} + \frac{\alpha_{17}}{\alpha_{71}}\, c_{\mathcal{T}_7} \right)^{-1} \tag{38}$$
solves Problem 1. It is worth noting here, in view of Problem 2 and of a distributed implementation of the proposed approach, that the coefficients $c_{\mathcal{T}_i}$ play for each subtree the same role played for the whole tree by the coefficient $c_{\mathcal{T}}$ defined below, and indeed we can write
$$w_1 = c_{\mathcal{T}}^{-1}, \quad \text{where} \quad c_{\mathcal{T}} = 1 + \frac{\alpha_{12}}{\alpha_{21}}\, c_{\mathcal{T}_2} + \frac{\alpha_{14}}{\alpha_{41}}\, c_{\mathcal{T}_4} + \frac{\alpha_{17}}{\alpha_{71}}\, c_{\mathcal{T}_7}, \tag{39}$$
and each $c_{\mathcal{T}_i}$ can analogously be computed by referring to the subgraphs of each $\mathcal{T}_i$, thus opening the path to a distributed implementation of its computation. The coefficients $c_{\mathcal{T}_i}$ are reminiscent of the normalizing terms called coupling factors in [21], and in the following of this paper we use the same name.
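The construction of the example can be condensed in the following sketch (the edge weights below are hypothetical, since the numerical values of Figure 1 are not reproduced here): the coupling factors and $w_1$ are computed as in (37)–(38), the vector $w$ is assembled as in (36), and the result is verified to be the normalized Perron vector of the tree.

```python
# Sketch (not from the paper): worked example of Section 5.1 with hypothetical weights.
import numpy as np

a = {(1, 2): 2.0, (2, 1): 1.0, (2, 3): 1.5, (3, 2): 0.5,
     (1, 4): 1.0, (4, 1): 2.0, (4, 5): 0.8, (5, 4): 1.6, (5, 6): 3.0, (6, 5): 1.0,
     (1, 7): 0.6, (7, 1): 1.2, (7, 8): 1.0, (8, 7): 0.5, (7, 9): 2.0, (9, 7): 1.0}
r = lambda i, j: a[(i, j)] / a[(j, i)]          # edge ratio alpha_ij / alpha_ji

c_T2 = 1 + r(2, 3)                              # coupling factors, cf. (37)
c_T4 = 1 + r(4, 5) + r(4, 5) * r(5, 6)
c_T7 = 1 + r(7, 8) + r(7, 9)
w1 = 1.0 / (1 + r(1, 2) * c_T2 + r(1, 4) * c_T4 + r(1, 7) * c_T7)      # Equation (38)

w = w1 * np.array([1, r(1, 2), r(1, 2) * r(2, 3),
                   r(1, 4), r(1, 4) * r(4, 5), r(1, 4) * r(4, 5) * r(5, 6),
                   r(1, 7), r(1, 7) * r(7, 8), r(1, 7) * r(7, 9)])      # Equation (36)

L = np.zeros((9, 9))
for (i, j), v in a.items():
    L[i - 1, j - 1] = -v
np.fill_diagonal(L, -L.sum(axis=1))
print(np.allclose(w @ L, 0), np.isclose(w.sum(), 1.0))                  # True True
```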

5.2. Algorithm Description

Starting from the experience gained from the example, we now generalize the iterative algorithm to retrieve the value of the scaling factors that allow for reaching average consensus, as stated in Problem Statement 2.
Considering the solution computed in the example, and in particular Equations (37), (38), (39), it is straightforward to see that a first stage of the computation of the coupling factors should flow from the peripheral nodes to the inner ones. Indeed, each coupling factor $c_{\mathcal{T}_i}$ can be determined only following this order, since $c_{\mathcal{T}_i} = 1$ for leaf nodes while it is unknown in advance for the other nodes.
The first stage of the algorithm is as follows. Each leaf node starts the computation with $c = 1$, and sends it to its (only) neighbor $v$. At each time instant, each node $v \in V$ should run the following algorithm:
  • if $v$ has received fewer than $|N_v| - 1$ coefficients $c_w$ from its neighbors, node $v$ must stay idle;
  • if $v$ has received $|N_v| - 1$ coefficients $c_w$ from its neighbors, node $v$ must compute
    $$c_v = 1 + \sum_{w \in N_v \setminus \{u\}} \frac{\alpha_{vw}}{\alpha_{wv}}\, c_w, \tag{40}$$
    where the sum runs over the $|N_v| - 1$ neighbors from which a coefficient has been received, and send it to the remaining neighbor $u$.
The above computation remains active as long as there is at least one node executing the second item, and it is easy to see that there exists an instant when one node, which we call $c$ in the following, receives the coefficients $c_v$ from all of its neighbors. When this condition happens, node $c$ triggers the second stage of the algorithm, as follows:
  • it makes the computation as in Equation (39), namely:
    $$w_c = \left( 1 + \sum_{v \in N_c} \frac{\alpha_{cv}}{\alpha_{vc}}\, c_{\mathcal{T}_v} \right)^{-1}, \tag{41}$$
    thus computing the component $w_c$ of the Perron vector corresponding to its own location;
  • it sends the result back to its neighbors $v$ (but $\ell$), $v \in N_c \setminus \{\ell\}$, where $\ell$ denotes the neighbor from which the last coupling factor was received;
  • according to (8), node $c$ sets its own scaling factor $\zeta_c$ equal to $\zeta_c = N w_c$.
Then, this latter procedure propagates from node $c$ back to the leaf nodes. At this stage, when any node, say node $j$, receives $w_\ell$ from one of its neighbors $\ell$, then:
  • it computes $w_j = \dfrac{\alpha_{\ell j}}{\alpha_{j \ell}}\, w_\ell$, namely the component of the Perron vector corresponding to its own location, according to the structure of Equation (36);
  • it sends the result to its neighbors $v$, $v \in N_j \setminus \{\ell\}$;
  • according to (8), node $j$ sets its scaling factor $\zeta_j$ equal to $\zeta_j = N w_j$.
The above procedure continues as long as there are more peripheral neighbor nodes, thus ending when it reaches the leaf nodes.
The above procedure allows the scaling factors for average consensus to be computed in a distributed fashion. A final issue is the time instant when the leaf nodes should start running stage one. There are several possible strategies according to the specific setting, described in the following. However, this is not an issue for the algorithm execution itself, but only for estimating in advance the time needed for the execution.
If all the leaf nodes start synchronously at time $\bar{t}$, then the algorithm lasts until $\bar{t} + 2R$, where $R$ is the radius of the graph. In this case, the ending node is the center of the tree. However, if the leaves are not synchronized and each node has its own starting time, then the algorithm still has a finite execution with a correct solution, though neither the execution time nor the location of the last node(s) running the algorithm can be computed in advance.
Finally, there are scenarios where one leader node is present and can trigger the algorithm. In this latter case, our reference scenario is the Laplacian allocation algorithm in [15], where weights are set in a distributed fashion, starting from the leader node and moving towards the leaf nodes. However, several different frameworks are possible, also adopting a virtual leader. In this latter setting, where a leader triggers the algorithm, the total algorithm execution time is equal to $e + 2R$, where $e$ represents the eccentricity of the leader within the network graph.
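A compact, synchronous simulation of the two-stage procedure described above is sketched below (my own illustration, not from the paper; the tree and its weights are the same hypothetical ones used after the worked example). Stage one propagates the coupling factors from the leaves inwards until a terminating node applies (41); stage two propagates the Perron-vector components back towards the leaves and sets the scaling factors $\zeta_i = N w_i$.

```python
# Sketch (not from the paper): message-passing simulation of the two-stage procedure.
import numpy as np

a = {(1, 2): 2.0, (2, 1): 1.0, (2, 3): 1.5, (3, 2): 0.5,
     (1, 4): 1.0, (4, 1): 2.0, (4, 5): 0.8, (5, 4): 1.6, (5, 6): 3.0, (6, 5): 1.0,
     (1, 7): 0.6, (7, 1): 1.2, (7, 8): 1.0, (8, 7): 0.5, (7, 9): 2.0, (9, 7): 1.0}
nodes = list(range(1, 10))
nbrs = {v: {j for (i, j) in a if i == v} for v in nodes}
r = lambda i, j: a[(i, j)] / a[(j, i)]

# Stage 1: a node holding the coupling factors of all neighbors but one computes its
# own factor, Eq. (40), and forwards it; the terminating node applies Eq. (41) instead.
c_in = {v: {} for v in nodes}            # coupling factors received so far by each node
done, center, w = set(), None, {}
while center is None:
    v = next(u for u in nodes if u not in done and len(c_in[u]) >= len(nbrs[u]) - 1)
    c_v = 1 + sum(r(v, u) * c for u, c in c_in[v].items())
    pending = nbrs[v] - set(c_in[v])
    if pending:                           # one neighbor still missing: forward c_v to it
        c_in[pending.pop()][v] = c_v
        done.add(v)
    else:                                 # all factors received: terminating node, Eq. (41)
        center, w[v] = v, 1.0 / c_v

# Stage 2: propagate the Perron components towards the leaves, cf. Equation (36).
frontier = [center]
while frontier:
    nxt = []
    for ell in frontier:
        for j in nbrs[ell]:
            if j not in w:
                w[j] = r(ell, j) * w[ell]
                nxt.append(j)
    frontier = nxt

zeta = {v: len(nodes) * w[v] for v in nodes}      # scaling factors, Eq. (8) with 1^T w = 1
print(np.isclose(sum(w.values()), 1.0))           # True: w is the normalized Perron vector
```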

6. Simulation Results

In this Section, we show the results of the proposed approach on the consensus network described by the graph in Figure 2, where the parameter $\varepsilon$ denotes an uncertainty term, namely a variation of the edge weight with respect to the nominal value equal to 1.
Our reference scenario for this simulation is the WSN implementation of the average consensus algorithm described in [38] (Figure 3 therein), where the architecture of a wireless sensor node is sketched, allowing the scaling of the initial measurements in the preliminary initialization step (Stages I and II).
We assume to have a set of 9 sensor nodes connected as depicted in Figure 2, each holding a measured quantity; the initial values are set equal to
$$x_0 = \begin{bmatrix} 33 & 12 & 12 & 4 & 17 & 6 & 48 & 2 & 10 \end{bmatrix}^\top$$
with average value equal to $\sum_{i=1}^{9} x_i(0)/9 = 4.667$.
The first simulation shows the results of the evolution of the system when the uncertainty parameter ε = 0.05 . The evolution of the system is shown in Figure 3, solid lines.
A first notable point is the magnitude of the drift of the consensus value from the initial average value. Indeed, in such a case, the consensus value turns out to be $5.9$, which is $26.4\%$ away from the nominal average value, even though the weight variation was only of magnitude $5\%$.
The above phenomenon is even more evident for $\varepsilon = 0.2$. Indeed, in such a case the asymptotic value is equal to $10.02$, which is more than $100\%$ away from the nominal one, and it is useless even as a rough estimate of the average value.
The reason for this phenomenon is related to the variation of the Perron vector. Even though each component exhibits a limited variation, the overall weighted sum ends up far from the average. The Perron vector is equal to
$$w = \begin{bmatrix} 1.5 & 1 & 0.7 & 1 & 0.7 & 0.7 & 1.5 & 1 & 1 \end{bmatrix}^\top$$
so that a suitable rescaling of the initial values according to Equation (8) provides:
$$\chi(0) = \begin{bmatrix} 2 & 12 & 18 & 4 & 25.5 & 9 & 32 & 2 & 10 \end{bmatrix}^\top$$
which restores the consensus value to the average of $x(0)$ (dashed lines).
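The experiment can be reproduced in spirit with the following hedged sketch (the exact perturbation pattern of Figure 2, the signs of the initial measurements and the resulting numerical values are not those of the paper; topology details, $\varepsilon$ placement and $x(0)$ below are my assumptions): the consensus value of the perturbed network is compared with and without the rescaling of Equation (8).

```python
# Sketch (not from the paper): nominal unit weights on the 9-node tree, a few edges
# perturbed by eps; the biased consensus value is compared with the restored average.
import numpy as np

edges = [(1, 2), (2, 3), (1, 4), (4, 5), (5, 6), (1, 7), (7, 8), (7, 9)]

def perturbed_laplacian(eps):
    a = {}
    for i, j in edges:
        a[(i, j)] = a[(j, i)] = 1.0               # nominal, balanced unit weights
    for i, j in [(2, 1), (5, 4), (7, 1)]:         # assumed location of the uncertainty
        a[(i, j)] = 1.0 + eps
    L = np.zeros((9, 9))
    for (i, j), v in a.items():
        L[i - 1, j - 1] = -v
    np.fill_diagonal(L, -L.sum(axis=1))
    return L

x0 = np.array([3.0, -12, 12, -4, 17, -6, 48, 2, 10])     # hypothetical measurements
for eps in (0.05, 0.2):
    L = perturbed_laplacian(eps)
    eigval, eigvec = np.linalg.eig(L.T)
    w = np.real(eigvec[:, np.argmin(np.abs(eigval))])
    w = w / w.sum()                                        # normalized Perron vector
    zeta = 9 * w                                           # Equation (8), since 1^T w = 1
    print(eps, w @ x0, w @ (x0 / zeta), x0.mean())         # biased vs. restored average
```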

7. Conclusions

In this paper, a distributed algorithm has been derived for the computation of suitable scaling factors which allow reaching average consensus in unbalanced tree networks. The general algorithm is iterative and based on local data, and it is appropriate to be run as a preliminary routine. Simulation results show that the proposed algorithm can be effectively integrated with other set-up routines, and that it yields beneficial effects on the precision of the computed average value. Further research is oriented in different directions, e.g. the extension of the proposed approach to more general and time-varying graphs, the consideration of more general single-agent dynamics allowing for vehicle and mobile robot network applications, and the testing of the algorithm in experimental stages of WSN applications.

References

  1. Bullo, F. Lectures on Network Systems, 1st ed.; 2022.
  2. Olfati-Saber, R.; Fax, A.J.; Murray, R. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE 2007, 95, 215–233.
  3. Rezaee, H.; Abdollahi, F. Average consensus over high-order multiagent systems. IEEE Transactions on Automatic Control 2015, 60, 3047–3052.
  4. Kia, S.S.; Van Scoy, B.; Cortes, J.; Freeman, R.A.; Lynch, K.M.; Martinez, S. Tutorial on dynamic average consensus: The problem, its applications, and the algorithms. IEEE Control Systems Magazine 2019, 39, 40–72.
  5. Garin, F.; Schenato, L. A survey on distributed estimation and control applications using linear consensus algorithms. In Networked Control Systems; Springer, 2010; pp. 75–107.
  6. Patterson, S.; Bamieh, B.; El Abbadi, A. Convergence rates of distributed average consensus with stochastic link failures. IEEE Transactions on Automatic Control 2010, 55, 880–892.
  7. Cai, K.; Ishii, H. Average consensus on general digraphs. In Proceedings of the 2011 50th IEEE Conference on Decision and Control and European Control Conference; IEEE, 2011; pp. 1956–1961.
  8. Guyeux, C.; Haddad, M.; Hakem, M.; Lagacherie, M. Efficient distributed average consensus in wireless sensor networks. Computer Communications 2020, 150, 115–121.
  9. Sun, S.; Chen, F.; Ren, W. Distributed average tracking in weight-unbalanced directed networks. IEEE Transactions on Automatic Control 2020, 66, 4436–4443.
  10. Sen, A.; Sahoo, S.R.; Kothari, M. Distributed average tracking with incomplete measurement under a weight-unbalanced digraph. IEEE Transactions on Automatic Control 2022, 67, 6025–6037.
  11. Du, M.; Meng, D.; Wu, Z.G. Distributed averaging problems over directed signed networks. IEEE Transactions on Control of Network Systems 2021, 8, 1442–1453.
  12. Shafi, S.Y.; Arcak, M.; El Ghaoui, L. Graph weight allocation to meet Laplacian spectral constraints. IEEE Transactions on Automatic Control 2011, 57, 1872–1877.
  13. Hao, H.; Barooah, P. Improving convergence rate of distributed consensus through asymmetric weights. In Proceedings of the 2012 American Control Conference (ACC); IEEE, 2012; pp. 787–792.
  14. Gao, L.; Zhao, J.; Di, Z.; Wang, D. Asymmetry between odd and even node weight in complex networks. Physica A: Statistical Mechanics and its Applications 2007, 376, 687–691.
  15. Parlangeli, G. Laplacian Eigenvalue Allocation Through Asymmetric Weights in Acyclic Leader-Follower Networks. IEEE Access 2023, 11, 126409–126419.
  16. Parlangeli, G. Prescribed-time average consensus through data-driven leader motion. IEEE Access 2024.
  17. Valcher, M.E.; Parlangeli, G. On the effects of communication failures in a multi-agent consensus network. In Proceedings of the 2019 23rd International Conference on System Theory, Control and Computing (ICSTCC); IEEE, 2019; pp. 709–720.
  18. Parlangeli, G. A Supervisory Algorithm Against Intermittent and Temporary Faults in Consensus-Based Networks. IEEE Access 2020, 8, 98775–98786.
  19. Dibaji, S.M.; Pirani, M.; Flamholz, D.B.; Annaswamy, A.M.; Johansson, K.H.; Chakrabortty, A. A systems and control perspective of CPS security. Annual Reviews in Control 2019, 47, 394–411.
  20. Kenyeres, M.; Kenyeres, J. Average Consensus with Perron Matrix for Alleviating Inaccurate Sensor Readings Caused by Gaussian Noise in Wireless Sensor Networks. In Software Engineering and Algorithms: Proceedings of the 10th Computer Science On-line Conference 2021, Vol. 1; 2021; pp. 391–405.
  21. Meyer, C.D. Uncoupling the Perron eigenvector problem. Linear Algebra and its Applications 1989, 114, 69–94.
  22. Liu, Z.; Guo, L. Synchronization of multi-agent systems without connectivity assumptions. Automatica 2009, 45, 2744–2753.
  23. Sardellitti, S.; Giona, M.; Barbarossa, S. Fast distributed average consensus algorithms based on advection-diffusion processes. IEEE Transactions on Signal Processing 2009, 58, 826–842.
  24. Chen, Y.; Tron, R.; Terzis, A.; Vidal, R. Corrective consensus with asymmetric wireless links. In Proceedings of the 2011 50th IEEE Conference on Decision and Control and European Control Conference; IEEE, 2011; pp. 6660–6665.
  25. Song, Z.; Taylor, D. Asymmetric Coupling Optimizes Interconnected Consensus Systems. arXiv preprint arXiv:2106.13127, 2021.
  26. Parlangeli, G.; Valcher, M.E. Leader-controlled protocols to accelerate convergence in consensus networks. IEEE Transactions on Automatic Control 2018, 63, 3191–3205.
  27. Parlangeli, G.; Valcher, M.E. Accelerating consensus in high-order leader-follower networks. IEEE Control Systems Letters 2018, 2, 381–386.
  28. Mirzaev, I.; Gunawardena, J. Laplacian dynamics on general graphs. Bulletin of Mathematical Biology 2013, 75, 2118–2149.
  29. Dietzenbacher, E. Aggregation in multisector models: using the Perron vector. Economic Systems Research 1992, 4, 3–24.
  30. Schenato, L.; Fiorentin, F. Average TimeSynch: A consensus-based protocol for clock synchronization in wireless sensor networks. Automatica 2011, 47, 1878–1886.
  31. Shi, F.; Yang, S.X.; Mukherjee, M.; Jiang, H.; da Costa, D.B.; Wong, W.K. Parameter-sharing-based average-consensus time synchronization in IoT networks. IEEE Internet of Things Journal 2022, 10, 8215–8227.
  32. Wu, J.; Li, X. Collective synchronization of Kuramoto-oscillator networks. IEEE Circuits and Systems Magazine 2020, 20, 46–67.
  33. Duan, Y.; He, X.; Zhao, Y. Distributed algorithm based on consensus control strategy for dynamic economic dispatch problem. International Journal of Electrical Power & Energy Systems 2021, 129, 106833.
  34. Macii, D.; Rinaldi, S. Time synchronization for smart grids applications: Requirements and uncertainty issues. IEEE Instrumentation & Measurement Magazine 2022, 25, 11–18.
  35. Bullo, F.; Cortés, J.; Martínez, S. Distributed Control of Robotic Networks; Applied Mathematics Series; Princeton University Press, 2009.
  36. Parlangeli, G. A distributed algorithm for the assignment of the Laplacian spectrum for path graphs. Mathematics 2023, 11, 2359.
  37. Usmani, R.A. Inversion of a tridiagonal Jacobi matrix. Linear Algebra and its Applications 1994, 212, 413–414.
  38. Kenyeres, J.; Kenyeres, M.; Rupp, M.; Farkas, P. WSN implementation of the average consensus algorithm. In Proceedings of the 17th European Wireless 2011 – Sustainable Wireless Technologies; VDE, 2011; pp. 1–8.
Figure 1. Worked example from [36]: (a) unbalanced tree graph; (b) subdivision of the graph according to the notation.
Figure 2. Simulation results: perturbed consensus network topology adopted in the simulations.
Figure 3. Simulation results: perturbed consensus network, case $\varepsilon = 0.05$. Solid lines show uncompensated trajectories, dashed lines show the evolution with scaled initial conditions.
Figure 4. Simulation results: perturbed consensus network, case $\varepsilon = 0.2$. Solid lines show uncompensated trajectories, dashed lines show the evolution with scaled initial conditions.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.