ANALYZING CONTROL PROPERTIES OF MULTI-AGENT LINEAR SYSTEMS

Networked multi-agent systems interconnected via communication channels have great applicability in multiple areas, such as power grids, bioinformatics, sensor networks, vehicles, robotics and neuroscience. Consequently, they have been widely studied by scientists in different fields, in particular in control theory. Recently, interest has grown in analyzing control properties such as consensus controllability and observability of multi-agent dynamical systems, motivated by the fact that the architecture of the communication network in engineered multi-agent systems is usually adjustable. In this paper, we analyze how to improve the control properties in the case of multi-agent linear time-invariant dynamical systems.

In recent years, interest has grown in the control of multi-agent systems, as well as in distributed control and coordination of networks consisting of multiple autonomous agents. This is because such systems appear in many different areas, and there is an extensive bibliography, e.g. [Saber and Murray, 2004], [Wang, Cheng, and Hu, 2008], [Xie and Wang, 2006], [Rahmani, Ji, Mesbahi, and Egerstedt, 2009].
The control of linear dynamical systems is a strategy that the brain uses to control its own intrinsic dynamics. The brain can be modelled as a networked system that is an especially interesting system to control because of the role of the underlying architecture, which predisposes some components to particular control actions. The concept of cognitive control defined by neuroscientists is related to the mathematical concept of control defined by physicists, mathematicians, and engineers, where the state of a complex system can be adjusted by a particular input. Recent advances in neuroscience show that brain cognitive function is driven by dynamic interactions between large-scale neural circuits or networks, enabling behaviour. Tools from control and network theory permit a mechanistic description of how the brain moves between cognitive states, drawn from the network organization of the white matter found in the deepest tissues of the brain [Gu et al., 2015].
It has been shown that controllability analysis of the neural network is key to the mechanistic explanation of how the brain operates in different cognitive states. Among the different points of view on controllability, average controllability describes the role of a brain network's node in driving the system to many easily reachable states, while modal controllability is employed to identify the states that are difficult to control. It has recently been seen that exact controllability permits us to determine the areas of the brain, or nodes in the connectivity graph (structural or functional), that can act as drivers and move the system (the brain) into specific states of action [Meyer-Bäse et al., 2020].
In this paper, the controllability and observability character of multi-agent systems consisting of k agents having identical linear dynamics is analyzed. In particular, we consider the case where each agent obeys the linear time-invariant dynamics

ẋ_i = Ax_i + Bu_i,  y_i = Cx_i,  1 ≤ i ≤ k.

Preliminaries
The topology of the system is defined by means of an undirected graph. Graph models are commonly used in representations of networks. We consider a graph G = (V, E) of order k with the set of vertices V = {1, . . . , k} and the set of edges E ⊂ V × V. Given an edge (i, j), i is called the parent node and j is called the child node, and j is a neighbor of i; concretely, we define the neighborhood of i, denoted N_i, as the set N_i = {j ∈ V : (i, j) ∈ E}. The graph is called undirected if it verifies that (i, j) ∈ E if and only if (j, i) ∈ E. The graph is called connected if there exists a path between any two vertices; otherwise it is called disconnected. Associated to the graph we can consider the Laplacian matrix L = (l_ij) of the graph, defined in the following manner: l_ii = |N_i|, l_ij = −1 if j ∈ N_i, and l_ij = 0 otherwise.
Remark 2.1. The following properties are verified.
i) If the graph is undirected then the matrix L is symmetric, so there exists an orthogonal matrix P such that P LP^t = D is diagonal.
ii) If the graph is undirected then 0 is an eigenvalue of L and 1_k = (1, . . . , 1)^t is an associated eigenvector.
iii) If the graph is undirected and connected, the eigenvalue 0 is simple.
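The properties of Remark 2.1 can be checked numerically; the following sketch (using numpy, with a path graph on four vertices as an illustrative choice) verifies symmetry, the zero eigenvalue with eigenvector 1_k, and its simplicity for a connected graph.

```python
import numpy as np

# Undirected path graph on 4 vertices: 1-2-3-4 (illustrative choice)
k = 4
edges = [(0, 1), (1, 2), (2, 3)]

# Laplacian: degree matrix minus adjacency matrix
Adj = np.zeros((k, k))
for i, j in edges:
    Adj[i, j] = Adj[j, i] = 1.0
L = np.diag(Adj.sum(axis=1)) - Adj

# i) undirected graph => L is symmetric (hence orthogonally diagonalizable)
assert np.allclose(L, L.T)

# ii) 0 is an eigenvalue with eigenvector (1, ..., 1)^t
assert np.allclose(L @ np.ones(k), 0.0)

# iii) connected graph => the eigenvalue 0 is simple
eigvals = np.sort(np.linalg.eigvalsh(L))
assert np.isclose(eigvals[0], 0.0) and eigvals[1] > 1e-9
```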
For more details about graph theory, see [West, 2007]. Concerning matrices, we need to recall the Kronecker product, because it will be useful in our study.
Given a pair of matrices A = (a_ij) ∈ M_{n×m}(C) and B = (b_ij) ∈ M_{p×q}(C), recall that the Kronecker product is defined as follows.
Definition 2.1. Let A = (a_ij) ∈ M_{n×m}(C) and B ∈ M_{p×q}(C) be two matrices. The Kronecker product of A and B, written A ⊗ B, is the block matrix A ⊗ B = (a_ij B) ∈ M_{np×mq}(C).
The Kronecker product verifies, among other properties, (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD) whenever the products AC and BD are defined.
Corollary 2.1. The vector 1_k ⊗ v is an eigenvector corresponding to the zero eigenvalue of L ⊗ I_n.
Proof. (L ⊗ I_n)(1_k ⊗ v) = (L1_k) ⊗ (I_n v) = 0 ⊗ v = 0.
Consequently, if the graph is connected and {e_1, . . . , e_n} is a basis for C^n, then {1_k ⊗ e_1, . . . , 1_k ⊗ e_n} is a basis for the nullspace of L ⊗ I_n.
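A small numerical check of this nullspace description (the triangle graph Laplacian and n = 2 are illustrative choices):

```python
import numpy as np

# Laplacian of the connected triangle graph (illustrative choice)
L = np.array([[ 2., -1., -1.],
              [-1.,  2., -1.],
              [-1., -1.,  2.]])
k, n = 3, 2
M = np.kron(L, np.eye(n))          # L ⊗ I_n

# Each 1_k ⊗ e_i lies in the nullspace of L ⊗ I_n
ones = np.ones(k)
for i in range(n):
    e = np.eye(n)[:, i]
    assert np.allclose(M @ np.kron(ones, e), 0.0)

# For a connected graph the nullspace has dimension exactly n
assert k * n - np.linalg.matrix_rank(M) == n
```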
Associated to the Kronecker product, one can define the vectorizing operator vec, which transforms any matrix A = (a_ij) ∈ M_{n×m}(C) into a column vector by stacking the columns of the matrix one after another: vec(A) = (a_11, . . . , a_n1, a_12, . . . , a_n2, . . . , a_1m, . . . , a_nm)^t. Obviously, vec is an isomorphism between M_{n×m}(C) and C^{nm}.
See [Lancaster and Tismenetsky, 1985] for more information and properties.
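As an illustrative sketch, the interaction between vec and the Kronecker product can be verified numerically through the well-known identity vec(AXB) = (B^t ⊗ A) vec(X) (one of the properties collected in [Lancaster and Tismenetsky, 1985]); the random matrices below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
X = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))

def vec(M):
    # Stack the columns of M one after another (column-major order)
    return M.flatten(order="F")

# vec is linear and invertible; it relates to ⊗ via vec(AXB) = (B^t ⊗ A) vec(X)
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
assert np.allclose(lhs, rhs)
```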

Control Properties
Definition 3.1. The dynamical system ẋ = Ax + Bu is said to be controllable if, for every initial condition x(0) and every vector x_1 ∈ R^n, there exist a finite time t_1 and a control u(t), t ∈ [0, t_1], such that x(t_1) = x_1. This definition requires only that any initial state x(0) can be steered to any final state x_1 at time t_1. However, the trajectory of the dynamical system between 0 and t_1 is not specified. Furthermore, there are no constraints posed on the control vector u(t) or the state vector x(t).
It is easier to check controllability using the controllability matrix C = (B AB A²B . . . A^{n−1}B), thanks to the following well-known result.
Theorem 3.1. The dynamical system ẋ = Ax + Bu is controllable if and only if rank C = n.
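The rank test of Theorem 3.1 can be sketched as follows (the double-integrator system is an illustrative example):

```python
import numpy as np

def controllability_matrix(A, B):
    # C = (B, AB, A^2 B, ..., A^{n-1} B)
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Double integrator driven through the second state: controllable
A = np.array([[0., 1.],
              [0., 0.]])
B = np.array([[0.],
              [1.]])
Cmat = controllability_matrix(A, B)
assert np.linalg.matrix_rank(Cmat) == A.shape[0]   # rank C = n
```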
As we said, controllability of the dynamical system ẋ = Ax + Bu implies that each initial state can be steered to 0 in a finite time interval. If this is required to happen only asymptotically as t → ∞, we have the following concept.
Definition 3.2. The system ẋ = Ax + Bu is called stabilizable if for each initial state x(0) ∈ R^n there exists a (piecewise continuous) control input u : [0, ∞) → R^m such that the state response with initial condition x(0) verifies lim_{t→∞} x(t) = 0.
Remark 3.1. i) All controllable systems are stabilizable, but the converse is false. ii) If the matrix A in the system ẋ = Ax + Bu is Hurwitz, then the system is stabilizable.
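A minimal sketch illustrating both items of Remark 3.1 at once (the diagonal system below is an illustrative choice): the system is not controllable, yet A is Hurwitz, so it is stabilizable.

```python
import numpy as np

# A is Hurwitz (eigenvalues -1 and -2), so the system is stabilizable
# even with u = 0, but the second mode is unreachable from the input:
A = np.array([[-1., 0.],
              [ 0., -2.]])
B = np.array([[1.],
              [0.]])

Ctrb = np.hstack([B, A @ B])                   # controllability matrix
assert np.linalg.matrix_rank(Ctrb) == 1        # rank < n: not controllable
assert np.all(np.linalg.eigvals(A).real < 0)   # Hurwitz: stabilizable anyway
```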

The following result is important.
Theorem 3.2. The system ẋ = Ax + Bu is stabilizable if and only if there exists some feedback F such that ẋ = (A − BF)x is asymptotically stable.
A concept dual to controllability is observability.
Definition 3.3. The dynamical system ẋ = Ax + Bu, y = Cx is said to be observable at t_0 if there exists a finite time t_1 > t_0 such that, for any vector x_0 ∈ R^n at time t_0, the knowledge of the control u(t) ∈ R^m and the output y(t) over the interval [t_0, t_1] suffices to determine the state x_0.
It is easier to check observability using the observability matrix O = (C^t (CA)^t (CA²)^t . . . (CA^{n−1})^t)^t, thanks to the following well-known result: the system is observable if and only if rank O = n. (For more information about control properties, see [Antoulas, 2013], [Chen, 1970], for example.)
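The observability rank test, and its duality with controllability, can be sketched numerically (the example system and the measured output are illustrative choices):

```python
import numpy as np

def observability_matrix(A, C):
    # O = (C; CA; CA^2; ...; CA^{n-1}) stacked by rows
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

A = np.array([[0., 1.],
              [0., 0.]])
C = np.array([[1., 0.]])        # only the first state is measured

O = observability_matrix(A, C)
assert np.linalg.matrix_rank(O) == A.shape[0]  # rank O = n => observable

# Duality: (A, C) observable  <=>  (A^t, C^t) controllable
Ctrb = np.hstack([C.T, A.T @ C.T])
assert np.linalg.matrix_rank(Ctrb) == A.shape[0]
```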

Controllability and observability of multiagent systems

Writing X = (x_1^t, . . . , x_k^t)^t, U = (u_1^t, . . . , u_k^t)^t, Y = (y_1^t, . . . , y_k^t)^t, and A = I_k ⊗ A, B = I_k ⊗ B, C = I_k ⊗ C,
following this notation we can describe the multi-agent system as a single system Ẋ = AX + BU, Y = CX. Clearly,
- this system is controllable if and only if each subsystem is controllable, and, in this case, there exists a feedback with which we obtain the desired solution.
- this system is observable if and only if each subsystem is observable, and, in this case, there exists an output injection with which we obtain the desired solution.
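A numerical sketch of the first claim (one controllable agent replicated k = 3 times with no coupling; the agent dynamics are an illustrative choice): the block system built with Kronecker products has full-rank controllability matrix exactly because each subsystem does.

```python
import numpy as np

def ctrb(A, B):
    # Controllability matrix (B, AB, ..., A^{n-1}B)
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# One controllable agent...
A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0.], [1.]])
n, k = 2, 3

# ...replicated k times: Ẋ = (I_k ⊗ A)X + (I_k ⊗ B)U
Abig = np.kron(np.eye(k), A)
Bbig = np.kron(np.eye(k), B)

assert np.linalg.matrix_rank(ctrb(A, B)) == n          # subsystem controllable
assert np.linalg.matrix_rank(ctrb(Abig, Bbig)) == k * n  # block system too
```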

Consensus
Now we consider the case where the multi-agent system consists of k agents having identical linear dynamics as in 1, and we are interested in taking the output of the system to a reference value and keeping it there; we can ensure this if the system is controllable.
Roughly speaking, we can define consensus as a collection of processes such that each process starts with an initial value, each one is supposed to output the same value, and there is a validity condition that relates outputs to inputs. More concretely, the consensus problem is a canonical problem in the coordination of multi-agent systems. The objective is the following: given initial values (scalar or vector) of the agents, establish conditions under which, through local interactions and computations, the agents asymptotically agree upon a common value, that is to say, reach a consensus.
Definition 4.1. Consider the system 1. We say that consensus is achieved using local information if there exist a state feedback u_i = K Σ_{j∈N_i}(x_i − x_j) and an estimator W C Σ_{j∈N_i}(x_i − x_j) such that lim_{t→∞}(x_i − x_j) = 0 for all i, j. The closed-loop system obtained under this feedback and output injection is as follows:

Ẋ = ((I_k ⊗ A) + (L ⊗ (BK + W C))) X,

where X, A, B, C are as before and L is the Laplacian matrix of the communication graph. Following this notation we can conclude the following.
Corollary 4.1. The closed-loop system can be described in terms of the matrices A, B, C, the feedback K, the output injection W and the eigenvalues of L.
Proof. Following the properties of the Kronecker product we have (P ⊗ I_n)(L ⊗ (BK + W C))(P^t ⊗ I_n) = (P LP^t) ⊗ (BK + W C) = D ⊗ (BK + W C), and calling X̄ = (P ⊗ I_n)X, we have that X̄ satisfies Ẋ̄ = ((I_k ⊗ A) + (I_k ⊗ (BK + W C))(D ⊗ I_n)) X̄. Equivalently, the system decouples into the k independent subsystems ẋ̄_i = (A + λ_i(BK + W C)) x̄_i, where λ_1, . . . , λ_k are the eigenvalues of L.
Corollary 4.2. The system 1 is consensus stabilizable if and only if the systems A + λ_i(BK + W C), for all eigenvalues λ_i of L, are stable with the same K and W.
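Corollary 4.2 can be checked numerically for a concrete choice of data; the agent dynamics, the gain K (with W = 0 for simplicity), and the triangle communication graph below are all illustrative choices.

```python
import numpy as np

# Single-agent dynamics and a diffusive feedback gain (illustrative)
A = np.array([[ 0.,  1.],
              [-1., -1.]])
B = np.array([[0.],
              [1.]])
K = np.array([[-1., -1.]])        # state feedback gain; W = 0 here

# Laplacian of the triangle graph; eigenvalues 0, 3, 3
L = np.array([[ 2., -1., -1.],
              [-1.,  2., -1.],
              [-1., -1.,  2.]])

# Check stability of A + λ_i BK for every eigenvalue λ_i of L
for lam in np.linalg.eigvalsh(L):
    eigs = np.linalg.eigvals(A + lam * (B @ K))
    assert np.all(eigs.real < 0)
```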

Controllability of Multi-Agent Systems with External Feedback
Let us consider a group of k agents having identical dynamical mode. The dynamics of each agent is given by the linear dynamical system as in 1 with external control inputs. Given the protocol of 4.1, where K is the feedback gain matrix and W the output injection, and defining the matrix E of external inputs and the vector U_ext(t) of external controls, in the particular case where all agents of the multi-agent system have an identical linear dynamic mode, we have the following proposition.
Proposition 4.2. With these notations the system can be described as Ẋ = ((I_k ⊗ A) + (I_k ⊗ (BK + W C))(D ⊗ I_n)) X + E U_ext(t).
The expression of the multi-agent system as a linear system permits us to adapt the structural controllability concept given for linear dynamical systems [Lin, 1974] to the multi-agent setting.
Definition 4.2. The multi-agent system 9 is said to be structurally controllable if one can change the non-zero entries of the matrix A to some particular real values, close to the initial ones, such that system 9 is controllable in the classical sense.
It is easy to prove the following.
Proposition 4.3. The multi-agent system 9 is structurally controllable if and only if rank (A(ε) + λ_i(BK + W C) − αI_n  E(ε)) = n, ∀ 0 ≤ i ≤ k, ∀ α ∈ C, and for some small parameters ε_i ≠ 0, where A(ε) and E(ε) are matrices depending on the parameters ε = (ε_i) (one parameter for each nonzero entry of the matrices A and E).
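The rank condition of Proposition 4.3 is a Hautus-type test; the following sketch checks it (for the decoupled case BK + W C = 0, with an illustrative zero/nonzero pattern for A and E) under random perturbations of the nonzero entries.

```python
import numpy as np

rng = np.random.default_rng(1)

# Zero/nonzero pattern of A and E (illustrative structure)
A_pattern = np.array([[0., 1.],
                      [0., 0.]])
E_pattern = np.array([[0.],
                      [1.]])

def pbh_rank_ok(A, E):
    # Hautus test: rank (A - αI | E) = n for every eigenvalue α of A
    # (for other α the first block already has full rank)
    n = A.shape[0]
    return all(
        np.linalg.matrix_rank(np.hstack([A - a * np.eye(n), E])) == n
        for a in np.linalg.eigvals(A)
    )

# Small random perturbations of the nonzero entries preserve controllability
for _ in range(5):
    eps = 1 + 0.1 * rng.standard_normal()
    assert pbh_rank_ok(A_pattern * eps, E_pattern * eps)
```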
Suppose now that the system 5 is not controllable but it is possible to introduce some external controls. Then we ask for the minimal number of controls that are necessary to make the system controllable. This number is called the exact controllability.
Definition 4.3. The exact controllability of the system 5 is the minimal number of columns of the matrix E making the system 9 controllable.
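A known characterization from the exact-controllability literature states that this minimal number equals the maximum geometric multiplicity over the eigenvalues of the state matrix; the sketch below (a hypothetical helper, assuming that characterization, applied to the triangle-graph Laplacian as an illustrative state matrix) computes it.

```python
import numpy as np

def exact_controllability(A):
    # Minimal number of independent control columns needed to make
    # x' = Ax + Eu controllable: the maximum geometric multiplicity
    # over the eigenvalues of A (assumed characterization; hypothetical helper)
    n = A.shape[0]
    eigvals = set(np.round(np.linalg.eigvals(A), 6))
    return max(n - np.linalg.matrix_rank(A - lam * np.eye(n))
               for lam in eigvals)

# Triangle-graph Laplacian: eigenvalue 0 is simple, eigenvalue 3 is double,
# so one control column cannot suffice and two are needed
L = np.array([[ 2., -1., -1.],
              [-1.,  2., -1.],
              [-1., -1.,  2.]])
assert exact_controllability(L) == 2
```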

Conclusion
In this paper, we have analyzed the consensus controllability and observability properties of multi-agent linear systems in which all agents share the same dynamics.