
Mean square containment control problems of multi-agent systems under Markov switching topologies

Advances in Difference Equations 2015, 2015:157

https://doi.org/10.1186/s13662-015-0437-3

Received: 23 October 2014

Accepted: 3 March 2015

Published: 16 May 2015

Abstract

This paper investigates containment control for multi-agent systems under Markov switching topologies. By using graph theory and tools of stochastic analysis, sufficient conditions for mean square containment control are derived for second-order multi-agent systems. The obtained results are then extended to high-order multi-agent systems.

Keywords

multi-agent systems; containment control; mean square consensus

1 Introduction

Recently, cooperative control for multi-agent systems has attracted increasing attention due to its many applications in different fields. As is well known, consensus is the basic problem of cooperative control for multi-agent systems: every agent in a team tends to a common value. An interesting extension is the consensus problem with multiple leaders, which is called containment control of multi-agent systems.

Containment control for multi-agent systems means that all followers are driven into the convex hull spanned by the leaders. For example, when some robots are used to carry hazardous materials, a group of robots, called leaders, is needed to guide them along a designed route so that other areas are not contaminated. At present, many results on containment control have been obtained [1–12]. In [1], observer-based dynamic containment algorithms were proposed for high-order continuous-time multi-agent systems under a fixed topology, and the continuous-time results were then extended to discrete-time multi-agent systems. In [2], the containment control problem for double-integrator dynamics with multiple leaders was discussed, where the velocities of the leaders are unavailable. In [3], output feedback and state feedback algorithms were proposed for the containment control problem of discrete-time multi-agent systems, in which time delays in the communication networks are considered. By using the Lyapunov functional method and linear matrix inequality tools, containment control problems of second-order multi-agent systems with time delays were investigated in [4], where stationary leaders and dynamic leaders were considered, respectively. In [5], sampled-data containment algorithms for second-order multi-agent systems were given, where necessary and sufficient conditions were derived.

The above literature considers deterministic topologies. In fact, communication topologies of networks usually change randomly. By using graph theory and stochastic analysis, mean square consensus of discrete-time multi-agent systems under Markov switching topologies was discussed in [6]. The authors in [7] extended the results of [6] to leader-following consensus of discrete-time multi-agent systems under Markov switching topologies. Under randomly switching topologies, consensus conditions of continuous-time and discrete-time high-order multi-agent systems were given in [8], where random link failures between agents were also discussed. In [9], the convergence speed of first-order discrete-time multi-agent systems was studied, and the authors in [10] extended the results of [9] to second-order and high-order multi-agent systems, respectively. Moreover, target containment control for second-order multi-agent systems was discussed in [11], where the switching topologies were driven by a Markov process. In addition, mean square containment control problems of first-order and second-order multi-agent systems with communication noises were investigated in [12].

Inspired by the results in [1–12], this paper further investigates containment control for multi-agent systems under Markov switching topologies. Containment algorithms for discrete-time and continuous-time multi-agent systems are given, respectively. By using graph theory and stochastic analysis, sufficient conditions for mean square containment control of multi-agent systems are derived. The results for second-order multi-agent systems are then extended to high-order multi-agent systems.

2 Mean square containment control for discrete-time multi-agent systems

Before giving the main results, we introduce some basic graph theory. Suppose that there are N agents in the topology. Let \(\mathcal{G}=\{\mathcal{V},\mathcal{E},\mathcal{A}\}\) denote the graph corresponding to the communication topology, where \(\mathcal{V}=\{1,\ldots,N\}\) is the set of nodes, \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) is the set of edges, and \(\mathcal{A}=[a_{ij}]_{N\times N}\) is the adjacency matrix, with \(a_{ij}=1\) if \((i,j)\in\mathcal{E}\) and \(a_{ij}=0\) otherwise. The Laplacian matrix is defined as \(L=[l_{ij}]_{N\times N}\), where \(l_{ij}=-a_{ij}\) for \(i\neq j\) and \(l_{ii}=\sum_{j=1,j\neq i}^{N}a_{ij}\).
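As a quick numerical illustration of these definitions, the Laplacian can be built from the adjacency matrix; the 4-node directed topology below is a hypothetical example, not one from the paper.

```python
import numpy as np

def laplacian(adj):
    """Laplacian L = [l_ij] with l_ij = -a_ij for i != j and l_ii = sum_j a_ij."""
    return np.diag(adj.sum(axis=1)) - adj

# Hypothetical directed topology on N = 4 nodes (a_ij = 1 iff (i, j) is an edge).
adj = np.array([[0., 1., 0., 1.],
                [1., 0., 1., 0.],
                [0., 0., 0., 1.],
                [0., 0., 0., 0.]])
L = laplacian(adj)
assert np.allclose(L.sum(axis=1), 0)  # every Laplacian row sums to zero
```

The zero row sums reflect the defining property \(l_{ii}=-\sum_{j\neq i}l_{ij}\) used throughout the derivations below.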

Consider the dynamics of the ith follower of the second-order multi-agent system, described as follows:
$$ \begin{aligned} &x_{i}(k+1)= x_{i}(k)+v_{i}(k)T, \\ &v_{i}(k+1)= v_{i}(k)+u_{i}(k)T,\quad i=1, \ldots, M, \end{aligned} $$
(1)
where \(x_{i}(k)\), \(v_{i}(k)\), \(u_{i}(k)\) represent the position, velocity, and control input of the ith follower, respectively, and \(T>0\) is the sampling period.
The leader’s dynamic is denoted as follows:
$$ \begin{aligned} &x_{i}(k+1) = x_{i}(k)+v_{i}(k)T, \\ &v_{i}(k+1) = v_{i}(k),\quad i=M+1,\ldots, N. \end{aligned} $$
(2)
The containment algorithm is proposed as follows:
$$ u_{i}(k)=\sum_{j\in F\cup L} a_{ij}\bigl(x_{j}(k)-x_{i}(k)\bigr)+\sum_{j\in F\cup L} a_{ij}\bigl(v_{j}(k)-v_{i}(k)\bigr),\quad i=1, \ldots, M, $$
(3)
where \(F=\{1,\ldots, M\}\) denotes the set of followers and \(L=\{M+1,\ldots, N\}\) is the set of leaders.

Definition

([6, 12])

Under Markov switching topologies, the mean square containment control problem of the multi-agent system is said to be solved for any initial distribution if the followers are driven into the convex hull spanned by the leaders' states, that is, \(\lim_{k\rightarrow \infty}E(x_{i}(k)-x_{i}^{\ast})^{2}=0\) for \(i=1,\ldots, M\), where \(x_{i}^{\ast}\in \mathit{co}_{L}=\{\sum_{j=M+1}^{N}\alpha_{j}x_{j}(k): \alpha_{j}\geq0,\sum_{j=M+1}^{N}\alpha_{j}=1\}\).
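Numerically, this definition is tied to the matrix \((I_{M}-D_{1})^{-1}D_{2}\) built from the row-stochastic weight matrix D introduced below: its rows are nonnegative and sum to one, so the followers' limit states are convex combinations of the leader states. A small sketch with hypothetical weights (three followers, two leaders):

```python
import numpy as np

# Hypothetical row-stochastic D for M = 3 followers and N - M = 2 leaders;
# the leader rows are identity rows, matching the partition of D below.
D = np.array([[0.2, 0.3, 0.1, 0.4, 0.0],
              [0.3, 0.2, 0.2, 0.0, 0.3],
              [0.1, 0.2, 0.3, 0.2, 0.2],
              [0.0, 0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 1.0]])
M = 3
D1, D2 = D[:M, :M], D[:M, M:]

W = np.linalg.inv(np.eye(M) - D1) @ D2  # rows give the convex weights alpha_j
assert np.all(W > 0)                    # nonnegative convex-combination weights
assert np.allclose(W.sum(axis=1), 1.0)  # each row sums to one
```

The row sums equal one because \(D_{2}\mathbf{1}=(I_{M}-D_{1})\mathbf{1}\) for any row-stochastic D, which is exactly why the limit state in the theorems below lies in \(\mathit{co}_{L}\).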

Let \(D=[d_{ij}]_{N\times N}\) be a row-stochastic matrix, where \(d_{ij}>0\) if \((i,j)\in \mathcal{E}\) and \(d_{ij}=0\) otherwise. Partition D into the following form:
$$ D=\left [ \begin{array}{@{}c@{\quad}c@{}} D_{1}& D_{2} \\ 0 & I_{N-M} \end{array} \right ], $$
(4)
where \(D_{1}\in R^{M\times M}\), \(D_{2}\in R^{M\times(N-M)}\), and \(I_{N-M}\) is the identity matrix of dimension \(N-M\). Set \(\eta_{i}(k)=[x_{i}^{T}(k)\ v_{i}^{T}(k)]^{T}\), \(\xi_{i}(k)=\sum_{j=1}^{N}d_{ij}(\eta_{i}(k)-\eta_{j}(k))\) for \(i=1,\ldots,M\), \(\eta_{F}(k)=[\eta_{1}^{T}(k)\cdots \eta_{M}^{T}(k)]^{T}\), \(\eta_{L}(k)=[\eta_{M+1}^{T}(k)\cdots \eta_{N}^{T}(k)]^{T}\), and \(\xi(k)=[\xi_{1}^{T}(k)\cdots \xi_{M}^{T}(k)]^{T}\). Then we have
$$ \xi(k)=\bigl((I_{M}-D_{1})\otimes I_{2}\bigr)\eta_{F}(k)-(D_{2}\otimes I_{2})\eta_{L}(k). $$
(5)
Following the transform methods in [1, 3], we have
$$\begin{aligned} \eta_{F}(k+1) =& \left (I_{M}\otimes \left [ \begin{array}{@{}c@{\quad}c@{}} 1&T \\ 0 & 1 \end{array} \right ] \right )\eta_{F}(k)-(I_{M}-D_{1}) \otimes \left [ \begin{array}{@{}c@{\quad}c@{}}0 & 0 \\ T& T \end{array} \right ] \eta_{F}(k) \\ &{}+D_{2}\otimes \left [ \begin{array}{@{}c@{\quad}c@{}}0 &0 \\ T& T \end{array} \right ] \eta_{L}(k). \end{aligned}$$
(6)
Then (2) can be rewritten as follows:
$$ \eta_{L}(k+1)=\left (I_{N-M}\otimes \left [ \begin{array}{@{}c@{\quad}c@{}} 1 & T \\ 0 & 1 \end{array} \right ] \right )\eta_{L}(k). $$
(7)
Noting (5), we have
$$ \eta_{F}(k)= \bigl((I_{M}-D_{1})^{-1} \otimes I_{2} \bigr)\xi(k)+ \bigl((I_{M}-D_{1})^{-1}D_{2} \otimes I_{2} \bigr)\eta_{L}(k). $$
(8)
Taking together (6) with (8), we have
$$ \xi(k+1)=\Omega\xi(k), $$
(9)
where \(\Omega=I_{M}\otimes \bigl[{\scriptsize\begin{matrix} 1 & T\cr 0 & 1\end{matrix}} \bigr]-(I_{M}-D_{1})\otimes \bigl[ {\scriptsize\begin{matrix}0 &0 \cr T & T \end{matrix}} \bigr]\).
Observe
$$ (I_{M}-D_{1})\otimes \left [ \begin{array}{@{}c@{\quad}c@{}}0 & 0\\ T& T \end{array} \right ]=\left [ \begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}}0& 0 & \cdots& 0& 0 \\ (1-d_{11})T & (1-d_{11})T & \cdots& -d_{1M}T & -d_{1M}T \\ \vdots& \vdots&\ddots&\vdots& \vdots \\ 0 & 0 & \cdots& 0 & 0 \\ -d_{M1}T & -d_{M1}T & \cdots& (1-d_{MM})T & (1-d_{MM})T \end{array} \right ], $$
which implies
$$ \Omega=\bar{A}+\bar{B}T, $$
where \(\bar{A}=I_{M}\otimes I_{2}\),
$$\bar{B}=\left [ \begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}}0 & 1 & \cdots&0 &0 \\ -(1-d_{11}) & -(1-d_{11})& \cdots& d_{1M} & d_{1M} \\ \vdots& \vdots& \ddots& \vdots& \vdots \\ 0 & 0 &\cdots& 0 & 1 \\ d_{M1}& d_{M1}&\cdots& -(1-d_{MM})& -(1-d_{MM}) \end{array} \right ]. $$
Then there exists an orthogonal matrix W, such that \(W^{T}\bar {B}W=\tilde{B}\), where
$$ \tilde{B}=\left [ \begin{array}{@{}c@{\quad}c@{}} -(I_{M}-D_{1}) & -(I_{M}-D_{1}) \\ I_{M}& 0 \end{array} \right ]. $$
(10)
Setting \(\xi(k)=W\tilde{\xi}(k)\), (9) turns into the form
$$ \tilde{\xi}(k+1)=\tilde{\Omega}\tilde{\xi}(k), $$
(11)
where \(\tilde{\Omega}=\bar{A}+\tilde{B}T\).

Assumption 1

Assume that, for each follower, there exists a directed path from at least one leader to that follower.

Theorem 1

Suppose that Assumption  1 holds. Under the fixed directed topology and the algorithm (3), the containment control problem for system (1) can be solved, that is, the followers (1) are driven into the convex hull spanned by the leaders (2).

Proof

Since there exists a path from the leaders to each follower, by Lemma 2 in [1] all eigenvalues of \(D_{1}\) lie in the open unit disk. The eigenvalues of \(\bar{A}\) are all equal to 1 with algebraic multiplicity 2M. The characteristic polynomial of \(\tilde{B}\) satisfies
$$ \lambda_{i}^{2}+\mu_{i} \lambda_{i}+ \mu_{i}=0,\quad i=1,\ldots, M, $$
(12)
where \(\lambda_{i}\) is an eigenvalue of \(\tilde{B}\) and \(\mu_{i}\) is an eigenvalue of \(I_{M}-D_{1}\). From Lemma 2 in [1], we know that each \(\mu_{i}\) has a positive real part, and hence each \(\lambda_{i}\) has a negative real part. Following the methods in [6], the unit eigenvalues of \(\bar{A}\) are perturbed by \(\lambda_{i}T\), which have negative real parts, so for sufficiently small T, \(\rho(\Omega)<1\). Then \(\lim_{k\rightarrow\infty}\tilde{\xi}(k)=0\), which by (8) implies \(\lim_{k\rightarrow\infty}\eta_{F}(k)=\lim_{k\rightarrow\infty }((I_{M}-D_{1})^{-1}D_{2}\otimes I_{2})\eta_{L}(k)\). Then the containment control problem for system (1) is solved. □
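The spectral radius argument can be checked numerically. The sketch below uses hypothetical data (two followers, one leader, so \(D_{1}\) is the 2×2 follower block of a row-stochastic D) and the matrix Ω of (9):

```python
import numpy as np

T = 0.1                                    # a small sampling period
D1 = np.array([[0.4, 0.3],                 # hypothetical follower block of a
               [0.3, 0.4]])                # row-stochastic weight matrix D
M = D1.shape[0]

# Omega = I_M (x) [[1, T], [0, 1]] - (I_M - D_1) (x) [[0, 0], [T, T]], as in (9).
Omega = (np.kron(np.eye(M), np.array([[1.0, T], [0.0, 1.0]]))
         - np.kron(np.eye(M) - D1, np.array([[0.0, 0.0], [T, T]])))
rho = max(abs(np.linalg.eigvals(Omega)))
assert rho < 1                             # rho(Omega) < 1, as the proof asserts
```

For these numbers the eigenvalues of \(I_{M}-D_{1}\) are 0.3 and 0.9, and \(\rho(\Omega)\approx 0.99\), strictly inside the unit circle.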
In the following, we consider topologies driven by a Markov process. Under Markov switching topologies, (11) can be rewritten as follows:
$$ \tilde{\xi}(k+1)=\tilde{\Omega}_{\sigma(k)}\tilde{\xi}(k), $$
(13)
where \(\sigma(k)\) is a Markov switching process taking values in the set \(S=\{1,2,\ldots,s\}\).

Assumption 2

Under Markov switching topologies, for each follower there exists at least one path from one leader to that follower in the union of the topologies \(\{\mathcal{G}_{1},\ldots, \mathcal{G}_{s}\}\).

Lemma 1

Under Assumption  2, all the eigenvalues of \(I_{M}-D_{1}\) have positive real parts.

Proof

According to Lemma 2 in [1], we obtain Lemma 1 immediately. □

Theorem 2

Suppose that Assumption  2 holds. Under the containment algorithm (3), system (1) solves the containment control problem in the mean square sense.

Proof

The containment control problem of system (1) can be converted into the mean square consensus problem of system (13). According to Theorem 3 in [6], system (13) reaches mean square consensus if and only if \(\rho(\Lambda)<1\), where \(\Lambda= (\pi^{T}\otimes I_{4M^{2}})\operatorname{diag}(\tilde{\Omega}_{i}\otimes\tilde{\Omega}_{i})\) with \(\pi=[\pi_{ij}]_{s\times s}\) being the transition probability matrix. Then we have
$$\begin{aligned} \Lambda =& \bigl(\pi^{T}\otimes I_{4M^{2}}\bigr) \operatorname{diag}(\tilde{\Omega}_{i}\otimes\tilde{\Omega }_{i}) \\ =& \bigl(\pi^{T}\otimes I_{4M^{2}}\bigr)\operatorname{diag} \bigl((\bar{A}+\tilde{B}T)\otimes(\bar {A}+\tilde{B}T)\bigr) \\ =&\bigl(\pi^{T}\otimes I_{4M^{2}}\bigr)+T\bigl( \pi^{T}\otimes I_{4M^{2}}\bigr)\operatorname{diag}(\tilde {B}_{i}\otimes I_{2M}+I_{2M}\otimes \tilde{B}_{i}) \\ &{}+T^{2}\bigl(\pi^{T}\otimes I_{4M^{2}}\bigr) \operatorname{diag}(\tilde{B}_{i}\otimes \tilde{B}_{i}). \end{aligned}$$
(14)
For small sampling period T, the last term can be neglected in comparison with the former ones. Since the eigenvalues of the \(\tilde{B}_{i}\) have negative real parts and Assumption 2 holds, the eigenvalues of \(T(\pi^{T}\otimes I_{4M^{2}})\operatorname{diag}(\tilde{B}_{i}\otimes I_{2M}+I_{2M}\otimes\tilde{B}_{i})\) have negative real parts. Thus, following Theorem 4 in [6], the unit eigenvalues of \(\pi ^{T}\otimes I_{4M^{2}}\) are perturbed by the negative real parts of \(T(\pi^{T}\otimes I_{4M^{2}})\operatorname{diag}(\tilde{B}_{i}\otimes I_{2M}+I_{2M}\otimes\tilde{B}_{i})\), so the eigenvalues of Λ have modulus less than 1, that is, \(\rho(\Lambda)<1\). Hence system (13) solves the mean square consensus problem, and \(\lim_{k\rightarrow\infty}E(\tilde{\xi}(k))^{2}=0\). Therefore, we have \(\lim_{k\rightarrow\infty }E(((I_{M}-D_{1})\otimes I_{2})\eta_{F}(k)-(D_{2}\otimes I_{2})\eta_{L}(k))^{2}=0\), which implies \(\lim_{k\rightarrow\infty }E(\eta_{F}(k)-((I_{M}-D_{1})^{-1}D_{2}\otimes I_{2})\eta_{L}(k))^{2}=0\). Thus system (1) solves the containment control problem in the mean square sense. The proof is completed. □
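The condition \(\rho(\Lambda)<1\) is straightforward to check numerically. The sketch below uses hypothetical data: one follower and one leader (M = 1), two Markov modes with a uniform transition matrix π, where the second mode has no leader link and is therefore not stabilizing on its own.

```python
import numpy as np
from scipy.linalg import block_diag

T = 0.1
def omega(mu):
    # Per-mode error matrix for M = 1: [[1, T], [-mu*T, 1 - mu*T]],
    # where mu is the eigenvalue of I_M - D_1 in that mode.
    return np.array([[1.0, T], [-mu * T, 1.0 - mu * T]])

modes = [omega(0.5), omega(0.0)]          # mode 2: follower hears no leader
pi = np.array([[0.5, 0.5], [0.5, 0.5]])   # transition probability matrix

# Lambda = (pi^T (x) I_{4M^2}) diag(Omega_i (x) Omega_i), as in Theorem 2.
Lam = np.kron(pi.T, np.eye(4)) @ block_diag(*(np.kron(Om, Om) for Om in modes))
rho = max(abs(np.linalg.eigvals(Lam)))
assert rho < 1   # mean square containment despite the non-stabilizing mode
```

Note that mode 2 alone has spectral radius exactly 1; the Markov averaging across modes is what pulls \(\rho(\Lambda)\) below 1 for small T.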

Remark 1

We extend the results in [1] to the case of Markov switching topologies. The mean square consensus of multi-agent systems was investigated in [6], whereas the mean square containment control problem is discussed in this paper.

Remark 2

We can extend the results of Theorem 2 to the mean square containment control problem of high-order discrete-time multi-agent systems. In order to save space, we omit it here.

3 Mean square containment control for continuous-time multi-agent systems

In this section, we discuss containment control for continuous-time multi-agent systems under Markov switching topologies. The dynamics of each follower are described by
$$ \begin{aligned} &\dot{x}_{i}(t) = v_{i}(t), \\ &\dot{v}_{i}(t) = u_{i}(t),\quad i=1,\ldots, M, \end{aligned} $$
(15)
where \(x_{i}(t)\), \(v_{i}(t)\), \(u_{i}(t)\) denote the position, velocity, and control input of the ith follower, respectively. The leader's dynamics can be described as follows:
$$ \begin{aligned} &\dot{x}_{i}(t) = v_{i}(t), \\ &\dot{v}_{i}(t) = 0,\quad i=M+1,\ldots, N, \end{aligned} $$
(16)
where \(x_{i}(t)\), \(v_{i}(t)\) denote the ith leader’s position and velocity, respectively. The containment algorithm is proposed as follows:
$$ u_{i}(t)=\sum_{j\in F\cup L}a_{ij} \bigl(x_{j}(t)-x_{i}(t)\bigr)+\sum _{j\in F\cup L}a_{ij}\bigl(v_{j}(t)-v_{i}(t) \bigr), $$
(17)
where F and L are the follower and leader sets defined in Section 2, and \(L=[l_{ij}]_{N\times N}\) is the Laplacian matrix, which can be divided into four parts:
$$ L=\left [ \begin{array}{@{}c@{\quad}c@{}} L_{1}& L_{2} \\ 0 & 0 \end{array} \right ], $$
(18)
where \(L_{1}\in R^{M\times M}\), \(L_{2}\in R^{M\times(N-M)}\). Set \(\zeta_{i}(t)=[x_{i}^{T}(t)\ v_{i}^{T}(t)]^{T}\), \(\zeta_{F}(t)=[\zeta_{1}^{T}(t)\cdots \zeta_{M}^{T}(t)]^{T}\), \(\zeta_{L}(t)=[\zeta_{M+1}^{T}(t)\cdots \zeta_{N}^{T}(t)]^{T}\), \(\xi_{i}(t)=\sum_{j\in F\cup L}a_{ij}(\zeta_{i}(t)-\zeta_{j}(t))\), \(\xi(t)=[\xi_{1}^{T}(t)\cdots \xi_{M}^{T}(t)]^{T}\). Then we obtain \(\xi(t)=(L_{1}\otimes I_{2})\zeta_{F}(t)+(L_{2}\otimes I_{2})\zeta_{L}(t)\). Combining (15)-(17), we have
$$\begin{aligned}& \begin{aligned}[b] \dot{\zeta}_{F} ={}&\left (I_{M} \otimes \left [ \begin{array}{@{}c@{\quad}c@{}}0 & 1 \\ 0 & 0 \end{array} \right ] \right ) \zeta_{F}(t)-\left (L_{1}\otimes \left [ \begin{array}{@{}c@{\quad}c@{}}0 & 0 \\ 1 & 1 \end{array} \right ] \right )\zeta_{F}(t) \\ &{}-\left (L_{2}\otimes \left [ \begin{array}{@{}c@{\quad}c@{}}0 & 0 \\ 1 & 1 \end{array} \right ] \right )\zeta_{L}(t), \end{aligned} \\& \begin{aligned} \dot{\zeta}_{L}(t)= \left (I_{N-M} \otimes \left [ \begin{array}{@{}c@{\quad}c@{}}0 & 1 \\ 0 & 0 \end{array} \right ] \right ) \zeta_{L}(t). \end{aligned} \end{aligned}$$
(19)
Taking the derivative of \(\xi(t)\), we have
$$ \dot{\xi}(t)=(L_{1}\otimes I_{2})\dot{ \zeta}_{F}+(L_{2}\otimes I_{2})\dot{ \zeta}_{L}. $$
(20)
In view of \(\zeta_{F}(t)=(L_{1}^{-1}\otimes I_{2})\xi(t)-(L_{1}^{-1}L_{2}\otimes I_{2})\zeta_{L}(t)\), (20) can be written as follows:
$$ \dot{\xi}(t)=\Psi\xi(t), $$
(21)
where \(\Psi= \bigl(I_{M}\otimes \bigl[{\scriptsize\begin{matrix}0 &1 \cr 0 & 0\end{matrix}} \bigr] \bigr)- \bigl(L_{1}\otimes \bigl[{\scriptsize\begin{matrix}0 & 0 \cr 1 & 1\end{matrix}} \bigr] \bigr)\). There exists an orthogonal matrix W, such that
$$\begin{aligned} \tilde{\Psi} =&W^{T}\Psi W \\ =&\left [ \begin{array}{@{}c@{\quad}c@{}} -L_{1} & -L_{1}\\ I_{M}& 0 \end{array} \right ]. \end{aligned}$$
(22)
Set \(\xi(t)=W\tilde{\xi}(t)\). Then we have
$$ \dot{\tilde{\xi}}(t)=\tilde{\Psi}\tilde{\xi}(t). $$
(23)

Lemma 2

Under Assumption  2, all eigenvalues of \(L_{1}\) have positive real parts, where \(L= \bigl[{\scriptsize\begin{matrix} L_{1} &L_{2}\cr 0 & 0 \end{matrix}} \bigr]\) is the Laplacian matrix corresponding to the union of \(\{\mathcal{G}_{1},\ldots,\mathcal{G}_{s}\}\).

Proof

From Lemma 1 in [1], we obtain the results. □

From Lemma 2, we know all eigenvalues of \(L_{1}\) have positive real parts. For the characteristic polynomial of the matrix \(\tilde{\Psi}\), we have
$$ \lambda_{i}^{2}+\mu_{i}\lambda_{i} + \mu_{i}=0, $$
(24)
where \(\lambda_{i}\) is an eigenvalue of \(\tilde{\Psi}\) and \(\mu_{i}\) is an eigenvalue of \(L_{1}\), \(i=1,\ldots,M\). From Lemma 2, each \(\mu_{i}\) has a positive real part, so each \(\lambda_{i}\) has a negative real part; hence all eigenvalues of \(\tilde{\Psi}\) have negative real parts.
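The root-location claim in (24) is easy to spot check numerically; the sample values of μ below are hypothetical (real and positive):

```python
import numpy as np

# Both roots of lambda^2 + mu*lambda + mu = 0 have negative real parts
# whenever mu > 0: the roots sum to -mu < 0 and their product is mu > 0.
for mu in [0.3, 1.0, 3.7, 10.0]:
    roots = np.roots([1.0, mu, mu])
    assert all(r.real < 0 for r in roots)
```

For \(0<\mu<4\) the roots form a complex pair with real part \(-\mu/2\); for \(\mu\geq 4\) both roots are real and negative.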
Under Markov switching topologies, system (23) becomes
$$ \dot{\tilde{\xi}}(t)=\tilde{\Psi}_{\sigma(t)}\tilde{\xi}(t), $$
(25)
where \(\sigma(t)\) is a Markov switching process and takes values from the sets \(S=\{1,\ldots,s\}\). We have
$$ \operatorname{Pr}\bigl\{ \sigma(t+h)=j\mid \sigma(t)=i\bigr\} =\pi_{ij}h +o(h),\quad i \neq j, h>0, $$
(26)
where \(\pi_{ij}\) is the transition rate from mode i to mode j, and \(\pi_{ii}=-\sum_{j\neq i}\pi_{ij}\).

Theorem 3

Suppose Assumption  2 is satisfied. Under Markov switching topologies and the containment algorithm (17), system (15) can solve the mean square containment control problem.

Proof

Since the union communication topology satisfies Assumption 2, all eigenvalues of \(\tilde{\Psi}_{i}\) have negative real parts. Then there exists a positive definite matrix P such that \(\tilde{\Psi}_{i}^{T}P+P\tilde{\Psi}_{i}<0\) holds for each \(i\in S\). Choose the Lyapunov function as follows:
$$ V_{i}(t)=E\bigl[\tilde{\xi}^{T}(t)P\tilde{ \xi}(t)\mathbf {1}_{\sigma(t)=i}\bigr],\quad i\in S. $$
(27)
Then, calculating the derivative of \(V_{i}(t)\), we have
$$\begin{aligned} dV_{i}(t) =&E\bigl[\bigl(d\tilde{\xi}(t)\bigr)^{T}P\tilde{ \xi}(t)\mathbf {1}_{\sigma (t)=i}+\tilde{\xi}^{T}(t)P \,d\tilde{\xi}(t) \mathbf {1}_{\sigma(t)=i}\bigr]+\sum_{j=1}^{s} \pi _{ij}V_{j}(t)\,dt \\ =&E\bigl[\tilde{\xi}^{T}(t) \bigl(\tilde{\Psi}_{i}^{T}P+P \tilde{\Psi }_{i}\bigr)\tilde{\xi}(t)\mathbf {1}_{\sigma(t)=i}\bigr]\,dt+\sum_{j=1}^{s} \pi _{ij}V_{j}(t)\,dt \\ < &0. \end{aligned}$$
(28)
Then, following the method in [13], we obtain \(\lim_{t\rightarrow\infty} E(\tilde{\xi}(t))^{2}=0\). Thus, we have \(\lim_{t\rightarrow\infty} E(\zeta_{F}(t)+(L_{1}^{-1}L_{2}\otimes I_{2})\zeta_{L}(t))^{2}=0\). Then system (15) can solve the mean square containment control problem. The proof is completed. □
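The Lyapunov matrix P in the proof can be computed explicitly for a given mode. The sketch below uses a hypothetical symmetric \(L_{1}\) with eigenvalues 0.3 and 0.9 and solves \(\tilde{\Psi}^{T}P+P\tilde{\Psi}=-I\):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

L1 = np.array([[0.6, -0.3],           # hypothetical L_1 block; its eigenvalues
               [-0.3, 0.6]])          # 0.3 and 0.9 are positive, per Lemma 2
M = L1.shape[0]

# Psi_tilde = [[-L_1, -L_1], [I_M, 0]], as in (22); it must be Hurwitz.
Psi = np.block([[-L1, -L1],
                [np.eye(M), np.zeros((M, M))]])
assert all(ev.real < 0 for ev in np.linalg.eigvals(Psi))

# Solve Psi^T P + P Psi = -I; the solution P is symmetric positive definite.
P = solve_continuous_lyapunov(Psi.T, -np.eye(2 * M))
assert np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)
```

Existence of such a P for each mode is exactly what the negative real parts of the eigenvalues of \(\tilde{\Psi}_{i}\) guarantee.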

Remark 3

In [11], containment control for second-order multi-agent systems was discussed under random switching topologies. In this paper, we give another method to solve the mean square containment control problem, which differs from the one in [11].

Next, we extend Theorem 3 to high-order multi-agent systems. Consider the dynamics of each follower of the high-order multi-agent system as follows:
$$ \dot{x}_{i}(t)=Ax_{i}(t)+Bu_{i}(t), \quad i=1,\ldots,M, $$
(29)
where \(x_{i}(t)\), \(u_{i}(t)\) are the state and control input of the ith follower, respectively, and A and B are constant matrices.
The behavior of the leader’s dynamic is described as follows:
$$ \dot{x}_{i}(t)=Ax_{i}(t),\quad i=M+1, \ldots,N, $$
(30)
where \(x_{i}(t)\) is the ith leader's state and A is a constant matrix. The containment protocol is given as follows:
$$ u_{i}(t)=K\sum_{j\in F\cup L} a_{ij}\bigl(x_{j}(t)-x_{i}(t)\bigr), $$
(31)
where K is the gain matrix, and F, L are the same as in Section 2. Set \(\xi_{i}(t)=\sum_{j\in F\cup L}a_{ij}(x_{i}(t)-x_{j}(t))\), \(\xi(t)=[\xi_{1}^{T}(t)\cdots \xi_{M}^{T}(t)]^{T}\), \(X_{F}(t)=[x_{1}^{T}(t)\cdots x_{M}^{T}(t)]^{T}\), \(X_{L}(t)= [x_{M+1}^{T}(t)\cdots x_{N}^{T}(t)]^{T}\). By using methods similar to those in Theorem 3, we have
$$ \begin{aligned} &\dot{X}_{F}(t) = \bigl((I_{M}\otimes A)-(L_{1}\otimes BK) \bigr)X_{F}(t)-(L_{2}\otimes BK)X_{L}(t), \\ &\dot{X}_{L}(t) = (I_{N-M}\otimes A) X_{L}(t). \end{aligned} $$
(32)
In view of \(\xi_{i}(t)=\sum_{j\in F\cup L}a_{ij}(x_{i}(t)-x_{j}(t))\), we have
$$ \dot{\xi}(t)=\bigl((I_{M}\otimes A) -(L_{1} \otimes BK)\bigr) \xi(t). $$
(33)
Since switching topologies are driven by a Markov process, (33) can be rewritten as follows:
$$ \dot{\xi}(t)=F_{\sigma(t)}\xi(t), $$
(34)
where \(F_{i}=(I_{M}\otimes A) -(L_{1}\otimes BK)\) with \(L_{1}\) taken from the topology of mode i, and \(\sigma(t)\), π, and \(\pi_{ij}\) are the same as in Theorem 3.

Theorem 4

Assume that \((A, B)\) is stabilizable and Assumption  2 holds. Under the containment control protocol (31) with \(K=\theta B^{T}P\), the mean square containment control problem for system (29) can be solved under Markov switching topologies, where \(\theta\geq\frac{1}{\min\{\bar{\pi}_{i}\}\epsilon}\), ϵ is the minimum eigenvalue of the matrix \(L_{1}+L_{1}^{T}\), and P is a solution of (35).

Proof

Since \((A,B)\) is stabilizable, there exists a positive definite matrix P such that
$$ A^{T}P+PA-2PBB^{T}P< 0. $$
(35)
Then we choose the Lyapunov function as follows:
$$ V_{i}(t)=E \bigl[\xi^{T}(t) (I_{M} \otimes P)\xi(t)\mathbf {1}_{\sigma=i} \bigr], \quad i\in\{1,\ldots,s\}. $$
(36)
Taking the derivative of \(V_{i}(t)\), we have
$$ dV_{i}(t)=E\bigl[2\bigl(d\xi(t)\bigr)^{T}(I_{M} \otimes P)\xi(t)\mathbf {1}_{\sigma=i}\bigr]+\sum_{j=1}^{s} \pi_{ij}V_{j}(t)\,dt+o(dt). $$
(37)
Assuming that \(\sigma(t)\) starts from the invariant distribution \(\bar{\pi}=[\bar{\pi}_{1},\ldots,\bar{\pi}_{s}]^{T}\), we have
$$ \frac{dV(t)}{dt} \leq E\biggl[\xi^{T}(t) \biggl(I_{M} \otimes\bigl(A^{T}P+PA\bigr)-\frac{L_{1}+L_{1}^{T}}{\lambda _{\mathrm{min}}(L_{1}+L_{1}^{T})}\otimes PBB^{T}P \biggr)\xi(t)\biggr], $$
where \(\lambda_{\mathrm{min}}(L_{1}+L_{1}^{T})\) is the minimum eigenvalue of \(L_{1}+L_{1}^{T}\). Then we have
$$\begin{aligned}& \xi^{T}(t) \biggl(I_{M}\otimes\bigl(A^{T}P+PA \bigr)-\frac{L_{1}+L_{1}^{T}}{\lambda _{\mathrm{min}}(L_{1}+L_{1}^{T})}\otimes PBB^{T}P\biggr)\xi(t) \\& \quad= \sum_{j=1}^{M}\xi_{j}^{T}(t) \biggl(A^{T}P+PA-\frac{2\lambda _{j}(L_{1}+L_{1}^{T})}{\lambda_{\mathrm{min}}(L_{1}+L_{1}^{T})} PBB^{T}P\biggr) \xi_{j}(t) \\& \quad\leq \sum_{j=1}^{M} \xi_{j}^{T}(t) \bigl(A^{T}P+PA-2 PBB^{T}P \bigr)\xi_{j}(t) \\& \quad< 0. \end{aligned}$$
(38)
Then, following the method in Theorem 3, we obtain \(\lim_{t\rightarrow\infty}E(\xi(t))^{2}=0\), and hence \(\lim_{t\rightarrow\infty}E(X_{F}(t)+L_{1}^{-1}L_{2}X_{L}(t))^{2}=0\). Therefore, the mean square containment control problem for system (29) is solved. The proof is completed. □
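The gain design of Theorem 4 can be sketched numerically. The data below are assumed for illustration (not from the paper): a double-integrator follower, a symmetric \(L_{1}\) with eigenvalues 0.3 and 0.9, and a uniform invariant distribution; P is obtained by solving the Riccati equation \(A^{T}P+PA-2PBB^{T}P=-Q\), which in particular satisfies (35).

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])    # stabilizable pair (A, B)
B = np.array([[0.0], [1.0]])
L1 = np.array([[0.6, -0.3], [-0.3, 0.6]]) # hypothetical L_1

# A^T P + P A - 2 P B B^T P = -Q, i.e. a standard ARE with R = I/2.
Q = np.eye(2)
P = solve_continuous_are(A, B, Q, 0.5 * np.eye(1))

eps = min(np.linalg.eigvalsh(L1 + L1.T))  # epsilon of Theorem 4
pi_bar = np.array([0.5, 0.5])             # assumed invariant distribution
theta = 1.0 / (pi_bar.min() * eps) + 0.5  # any value above the bound works
K = theta * B.T @ P                       # gain K = theta * B^T P

# The fixed-mode closed loop I_M (x) A - L_1 (x) BK should be Hurwitz.
F = np.kron(np.eye(2), A) - np.kron(L1, B @ K)
assert all(ev.real < 0 for ev in np.linalg.eigvals(F))
```

The lower bound on θ ensures that each per-mode loop \(A-\theta\mu_{j}BB^{T}P\) inherits stability from (35), since \(\theta\mu_{j}\geq 1\) for every eigenvalue \(\mu_{j}\) of \(L_{1}\).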

Remark 4

In [8], the mean square consensus problem of high-order multi-agent systems was investigated, whereas the mean square containment control of high-order multi-agent systems has been discussed in this paper.

4 Conclusions

In this paper, we have investigated mean square containment control for discrete-time and continuous-time second-order multi-agent systems under Markov switching topologies. By using graph theory and tools of stochastic analysis, sufficient conditions for mean square containment control are derived. In addition, the results for second-order multi-agent systems are extended to high-order multi-agent systems. Containment control of multi-agent systems with communication noise is an interesting topic for future work.

Declarations

Acknowledgements

This work was supported by the Natural Science Fundamental Research Project of Jiangsu Colleges and Universities under Grant 14KJB120006, the Natural Science Foundation of Jiangsu Province under Grant BK20140045, the Scientific Research Foundation of Nanjing University of Information Science and Technology under Grant 2013X047, C-MEIC in the School of Information and Control of Nanjing University of Information Science and Technology, and the Jiangsu Collaborative Innovation Center on Atmospheric Environment and Equipment Technology of Nanjing University of Information Science and Technology.

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.

Authors’ Affiliations

(1)
School of Information and Control, Nanjing University of Information Science and Technology

References

1. Li, ZK, Ren, W, Liu, XD, Fu, MY: Distributed containment control of multi-agent systems with general linear dynamics in the presence of multiple leaders. Int. J. Robust Nonlinear Control 23, 534-547 (2013)
2. Li, JZ, Ren, W, Xu, SY: Distributed containment control with multiple dynamic leaders for double-integrator dynamics using only position measurements. IEEE Trans. Autom. Control 57, 1553-1559 (2012)
3. Ma, Q, Lewis, FL, Xu, SY: Cooperative containment of discrete-time linear multi-agent systems. Int. J. Robust Nonlinear Control 25, 1007-1018 (2015)
4. Liu, K, Xie, G, Wang, L: Containment control for second-order multi-agent systems with time-varying delays. Syst. Control Lett. 67, 24-31 (2014)
5. Liu, HY, Xie, GM, Wang, L: Necessary and sufficient conditions for containment control of networked multi-agent systems. Automatica 48, 1415-1422 (2012)
6. Zhang, Y, Tian, YP: Consentability and protocol design of multi-agent systems with stochastic switching topology. Automatica 45, 1195-1201 (2009)
7. Zhao, HY, Ren, W, Yuan, DM, Chen, J: Distributed discrete-time coordinated tracking with Markovian switching topologies. Syst. Control Lett. 61, 766-772 (2012)
8. You, KY, Li, ZK, Xie, LH: Consensus conditions for linear multi-agent systems over randomly switching topologies. Automatica 49, 3125-3132 (2013)
9. Zhou, J, Wang, Q: Convergence speed in distributed consensus over dynamically switching random networks. Automatica 45, 1455-1461 (2009)
10. Sun, FL, Guan, ZH, Zhan, XS: Consensus of second-order and high-order discrete-time multi-agent systems with random networks. Nonlinear Anal., Real World Appl. 13, 1979-1990 (2012)
11. Lou, Y, Hong, Y: Target containment control of multi-agent systems with random switching interconnection topologies. Automatica 48, 879-885 (2012)
12. Wang, YP, Cheng, L, Hou, ZG, Tian, M: Containment control of multi-agent systems in a noisy communication environment. Automatica 50, 1922-1928 (2014)
13. Zhu, Q, Yang, X, Wang, H: Stochastically asymptotic stability of delayed recurrent neural networks with both Markovian jump parameters and nonlinear disturbances. J. Franklin Inst. 347, 1489-1510 (2010)

Copyright

© Miao and Li; licensee Springer. 2015