Noise-tolerant continuous-time Zhang neural networks for time-varying Sylvester tensor equations
Advances in Difference Equations volume 2019, Article number: 465 (2019)
Abstract
In this paper, to solve the time-varying Sylvester tensor equations (TVSTEs) with noise, we will design three noise-tolerant continuous-time Zhang neural networks (NTCTZNNs), termed NTCTZNN1, NTCTZNN2, NTCTZNN3, respectively. The most important characteristic of these neural networks is that they make full use of the time-derivative information of the TVSTEs’ coefficients. Theoretical analyses show that no matter how large the unknown noise is, the residual error generated by NTCTZNN2 converges globally to zero. Meanwhile, as long as the design parameter is large enough, the residual errors generated by NTCTZNN1 and NTCTZNN3 can be arbitrarily small. For comparison, the gradient-based neural network (GNN) is also presented and analyzed to solve TVSTEs. Numerical examples and results demonstrate the efficacy and superiority of the proposed neural networks.
Introduction
As is well known, tensors are higher-order generalizations of matrices, which are common tools to construct the mathematical models of systems in high dimension. For example, a black and white image (including width and height) can be stored as a matrix, while an RGB image (including width, height and brightness) is often stored as a third-order tensor, and a color video image (including width, height, brightness and time) must be stored as a fourth-order tensor. An mth-order real tensor \(\mathcal{A}=(a_{i_{1}i_{2}\ldots i_{m}})\) (\(a_{i_{1}i_{2}\ldots i _{m}}\in \mathbb{R}\), \(1\leq i_{j}\leq I_{j}\), \(j\in \langle m\rangle =\{1,2,\ldots,m\}\)) is a multidimensional array with \(I_{1}I_{2}\cdots I _{m}\) entries. Clearly, an order one tensor is a vector, and an order two tensor is a matrix. Let \(\mathbb{R}^{I_{1}\times \cdots \times I _{n}}\) be the set of order n, dimension \(I_{1}\times \cdots \times I_{n}\) tensors over \(\mathbb{R}\). For any two tensors \(\mathcal{A} \in \mathbb{R}^{I_{1}\times \cdots \times I_{n}\times K_{1}\times \cdots \times K_{n}}\) and \(\mathcal{B}\in \mathbb{R}^{K_{1}\times \cdots \times K_{n}\times J_{1}\times \cdots \times J_{m}}\), the Einstein product \(\mathcal{A}*_{n}\mathcal{B}\) is defined by [1]
$$ (\mathcal{A}*_{n}\mathcal{B})_{i_{1}\cdots i_{n}j_{1}\cdots j_{m}}=\sum_{k_{1}\cdots k_{n}}a_{i_{1}\cdots i_{n}k_{1}\cdots k_{n}}b_{k_{1}\cdots k_{n}j_{1}\cdots j_{m}}, $$(1)
which indicates that \(\mathcal{A}*_{n}\mathcal{B}\in \mathbb{R}^{I _{1}\times \cdots \times I_{n}\times J_{1}\times \cdots \times J_{m}}\). It is obvious that, when \(m=n=1\), the Einstein product (1) reduces to a matrix product.
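As a concrete illustration (added here, not part of the original derivation), the Einstein product corresponds to NumPy's `tensordot` contraction over the trailing n axes of \(\mathcal{A}\) and the leading n axes of \(\mathcal{B}\); the dimensions below are arbitrary examples:

```python
import numpy as np

# Einstein product A *_n B: contract the last n axes of A with the
# first n axes of B (here n = 2, with small random tensors).
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3, 4, 5))   # R^{I1 x I2 x K1 x K2}
B = rng.standard_normal((4, 5, 6, 7))   # R^{K1 x K2 x J1 x J2}

C = np.tensordot(A, B, axes=2)          # A *_2 B
print(C.shape)                          # (2, 3, 6, 7), i.e. I1 x I2 x J1 x J2

# With n = 1 and matrices, the Einstein product is the matrix product.
M = rng.standard_normal((3, 4))
N = rng.standard_normal((4, 5))
assert np.allclose(np.tensordot(M, N, axes=1), M @ N)
```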
In practice, various kinds of tensor equations arise from physics, mechanics, Markov processes and partial differential equations. In this paper, we are interested in the following time-varying Sylvester tensor equations (TVSTEs) via the Einstein product:
$$ \mathcal{A}(t)*_{n}\mathcal{X}+\mathcal{X}*_{m}\mathcal{B}(t)=\mathcal{C}(t), \quad t\geq 0, $$(2)
where \(\mathcal{A}(t)\in \mathbb{R}^{I_{1}\times \cdots \times I_{n} \times I_{1}\times \cdots \times I_{n}}\), \(\mathcal{B}(t)\in \mathbb{R}^{K_{1}\times \cdots \times K_{m}\times K_{1}\times \cdots \times K_{m}}\), \(\mathcal{C}(t)\in \mathbb{R}^{I_{1}\times \cdots \times I_{n}\times K_{1}\times \cdots \times K_{m}}\) are input tensors, and \(\mathcal{X}\in \mathbb{R}^{I_{1}\times \cdots \times I_{n}\times K_{1}\times \cdots \times K_{m}}\) is an unknown tensor to be determined. TVSTEs can be used to discretize a linear partial differential equation by the finite element and finite difference methods. For example, for noncentrosymmetric materials in physics, the linear piezoelectric equation is expressed as [2]:
$$ \mathcal{A}*_{2}X=p, $$
where \(\mathcal{A}\in \mathbb{R}^{3\times 3\times 3}\) is a piezoelectric tensor, \(X\in \mathbb{R}^{3\times 3}\) is a stress tensor, and \(p\in \mathbb{R}^{3}\) is an electric charge density displacement. The two-dimensional (2D) Poisson problem is
$$ \nabla ^{2}v=f, $$(3)
where f is a given function, and the Laplace operator \(\nabla ^{2}v\) is defined as
$$ \nabla ^{2}v=\frac{\partial ^{2}v}{\partial x^{2}}+\frac{\partial ^{2}v}{\partial y^{2}}. $$
By the standard central difference approximations, Poisson’s equation (3) in two dimensions can be depicted as the following fourth-order tensor representation [3]:
$$ \mathcal{A}*_{2}X=C, $$
where \(\mathcal{A}\in \mathbb{R}^{n\times n\times n\times n}\), and \(X\in \mathbb{R}^{n\times n}\) and \(C\in \mathbb{R}^{n\times n}\) are the discretized functions v and f on the unit square mesh. Similarly, the three-dimensional (3D) discretized Poisson problem can be depicted as [3]:
$$ \mathcal{A}*_{3}\mathcal{X}=\mathcal{C}, $$
where \(\mathcal{A}\in \mathbb{R}^{n\times n\times n\times n\times n \times n}\), \(\mathcal{X}\) and \(\mathcal{C}\in \mathbb{R}^{n\times n \times n}\).
TVSTEs include the following special cases:
 (1).
If \(m=n=1\), TVSTEs reduce to the time-varying Sylvester matrix equations (TVSMEs) [4]:
$$ {A}(t){X}+{X} {B}(t)={C}(t), \quad t\geq 0, $$(4)where \(A(t)\), \(B(t)\), \(C(t)\), and X are matrices of compatible dimensions; and if \({A}(t)\), \({B}(t)\), and \({C}(t)\) are further time invariant, TVSMEs reduce to the following classic Sylvester matrix equations (SMEs):
$$ {A} {X}+X{B}={C}, $$(5)which again includes the well-known Lyapunov matrix equations and the Stein matrix equations as its special cases [5,6,7]. The SMEs serve as a basic model arising from control theory, system theory, stability analysis, etc.
 (2).
If \(\mathcal{A}(t)\), \(\mathcal{B}(t)\) and \(\mathcal{C}(t)\) are time invariant, TVSTEs reduce to the Sylvester tensor equations (STEs) [8]:
$$ \mathcal{A}*_{n}\mathcal{X}+ \mathcal{X}*_{m}\mathcal{B}=\mathcal{C}, $$(6)which arises from finite difference, finite element or spectral methods [9, 10] and plays an important role in the discretization of linear partial differential equations in high dimensions.
A very basic and important problem in the study of the above equations concerns their solutions. In the past several decades, many researchers have carried out their work to find analytical and numerical solutions of SMEs, TVSMEs and STEs. For example, Ding and Chen [11] presented a gradient-based iterative algorithm for SMEs. Zhang et al. [4] introduced a recurrent neural network with implicit dynamics for the approximate solution of TVSMEs. Wang and Xu [12] developed some iterative algorithms for solving some tensor equations, which were generalized by Huang and Ma [13] to solve STEs. When the above equations are inconsistent, Lv and Zhang [14] designed an iterative algorithm to find the least squares solutions of SMEs. Sun and Wang [7] extended the conjugate gradient method to get the least squares solution with the least Frobenius norm of the generalized periodic SMEs. Specifically, during the past ten years, Hajarian et al. have conducted intensive research on the iterative method for solving various matrix equations, such as the generalized conjugate direction algorithm for solving the general coupled matrix equations over symmetric matrices [15], the finite algorithm for solving the generalized nonhomogeneous Yakubovich-transpose matrix equation [16], and the symmetric solutions of general Sylvester matrix equations via the Lanczos version of the biconjugate residual algorithm [17]. A tensor form of the conjugate gradient method was given to solve inconsistent tensor equations [12, 13]. Furthermore, by virtue of the Moore–Penrose inverse and the \(\{1\}\)-inverse of the tensor, Sun et al. [8] obtained analytic solutions of some special linear tensor equations. Other iterative methods for matrix/tensor equations and their applications can be found in [18,19,20,21,22,23,24,25,26,27] and the references therein.
To the best of the authors’ knowledge, no research has been devoted to the solutions of the TVSTEs. In this paper, we are going to extend a special kind of recurrent neural networks, i.e., the continuous-time Zhang neural network (CTZNN), to solve TVSTEs. ZNN was proposed by Yunong Zhang in March 2001 [4], which is quite suitable to solve various time-varying problems, such as time-varying nonlinear optimization [28,29,30], time-varying convex quadratic programming [31], time-varying matrix pseudoinversion [32] and time-varying absolute value equations [33]. Motivated by [4, 12, 13], we design three noise-tolerant CTZNNs (NTCTZNN1-3) for solving TVSTEs (2). Convergence analysis shows that: (1) the residual error \(\Vert \mathcal{A}(t)*_{n}\mathcal{X}+\mathcal{X}*_{m}\mathcal{B}(t)- \mathcal{C}(t) \Vert \) generated by NTCTZNN1 and NTCTZNN3 can be arbitrarily small if their design parameters are large enough; (2) the residual error of NTCTZNN2 converges to zero globally no matter how large the unknown noise is, where the Frobenius norm \(\Vert \cdot \Vert \) of \(\mathcal{A}\) is defined as \(\Vert \mathcal{A} \Vert = ( \sum_{i_{1}\cdots i_{n}j_{1}\cdots j_{n}} \vert a_{i_{1}\cdots i_{n}j_{1} \cdots j_{n}} \vert ^{2} )^{1/2}\) for a tensor \(\mathcal{A}\in \mathbb{R}^{I_{1}\times \cdots \times I_{n}\times J_{1}\times \cdots \times J_{n}}\).
The remainder of this paper is organized as follows. In Sect. 2, we introduce some necessary notations that are essential to derive the main results of this paper. In Sect. 3, we propose three NTCTZNNs and a GNN method to solve TVSTEs, and establish their convergence. In Sect. 4, to verify the effectiveness of the NTCTZNNs in solving TVSTEs, we present two numerical examples. Finally, some brief conclusions are drawn in Sect. 5.
Preliminaries
In this section, we introduce some useful notations and recall some known results.
Definition 2.1

(1)
Let \(\mathcal{A}= (a_{i_{1}\cdots i_{n}j_{1}\cdots j_{m}} )\in \mathbb{R}^{I_{1}\times \cdots \times I_{n}\times J_{1}\times \cdots \times J_{m}}\), \(\mathcal{B}= (b_{j_{1}\cdots j_{m} i_{1}\cdots i _{n}} )\in \mathbb{R}^{J_{1}\times \cdots \times J_{m}\times I_{1} \times \cdots \times I_{n}}\), if \(a_{i_{1}\cdots i_{n}j_{1}\cdots j _{m}}=b_{j_{1}\cdots j_{m} i_{1}\cdots i_{n}}\), then the tensor \(\mathcal{B}\) is called the transpose of \(\mathcal{A}\), denoted by \(\mathcal{A}^{\top }\).

(2)
A tensor \(\mathcal{D}= (d_{i_{1}\cdots i_{n}j_{1}\cdots j_{n}} )\in \mathbb{R}^{I_{1}\times \cdots \times I_{n}\times J_{1} \times \cdots \times J_{n}}\) is a diagonal tensor if \(d_{i_{1}\cdots i_{n}j_{1}\cdots j_{n}}=0\) in the case that the indices \(i_{1}\cdots i_{n}\) are different from \(j_{1}\cdots j_{n}\). Furthermore, in this case if all the diagonal entries \(d_{i_{1}\cdots i_{n}i_{1}\cdots i_{n}}=1\), then \(\mathcal{D}\) is called a unit tensor, denoted by \(\mathcal{I}\). In particular, if all the entries of a tensor are zero, we call this tensor a zero tensor, denoted by \(\mathcal{O}\).

(3)
The trace of a tensor \(\mathcal{A}\in \mathbb{R}^{I_{1}\times \cdots \times I_{n}\times I_{1}\times \cdots \times I_{n}}\) is defined as \(\operatorname{tr}(\mathcal{A})=\sum_{i_{1}\cdots i_{n}i_{1}\cdots i_{n}}a _{i_{1}\cdots i_{n}i_{1}\cdots i_{n}}\).

(4)
Let \(\mathcal{A},\mathcal{B}\in \mathbb{R}^{I_{1}\times \cdots \times I_{n}\times I_{1}\times \cdots \times I_{n}}\); the inner product of \(\mathcal{A}\) and \(\mathcal{B}\) is defined by
$$ \langle \mathcal{A},\mathcal{B}\rangle = \operatorname{tr}\bigl(\mathcal{B}^{ \top }*_{n}\mathcal{A} \bigr). $$(7)
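A small numeric sanity check (an illustration added here, with arbitrary random tensors): for real tensors the inner product (7) equals the sum of entrywise products, so \(\langle \mathcal{A},\mathcal{A}\rangle \) recovers the squared Frobenius norm used throughout the paper:

```python
import numpy as np

# Inner product <A, B> = tr(B^T *_n A) for n = 2; for real tensors this
# equals the sum of entrywise products, and <A, A> = ||A||_F^2.
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3, 2, 3))   # R^{I1 x I2 x I1 x I2}
B = rng.standard_normal((2, 3, 2, 3))

# Transposition swaps the first n and last n index groups.
Bt = B.transpose(2, 3, 0, 1)
# Trace of the Einstein product, taken over the paired index groups.
inner = np.trace(np.tensordot(Bt, A, axes=2).reshape(6, 6))

assert np.isclose(inner, np.sum(A * B))
assert np.isclose(np.sum(A * A), np.linalg.norm(A) ** 2)
```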
In the following, we set \(I=I_{1}I_{2}\cdots I_{n}\), \(J=J_{1}J_{2} \cdots J_{n}\), \(K=K_{1}K_{2}\cdots K_{m}\). Motivated by Definition 2.3 in [12], we give the following definition.
Definition 2.2
Define the transformation \(\varPhi _{IK}: \mathbb{R}^{I_{1}\times \cdots \times I_{n}\times K_{1}\times \cdots \times K_{m}}\rightarrow \mathbb{R}^{I\times K}\) with \(\varPhi _{IK}( \mathcal{A})=A\) defined componentwise as
$$ (A)_{st}=a_{i_{1}\cdots i_{n}k_{1}\cdots k_{m}}, $$(8)
where \(\mathcal{A}\in \mathbb{R}^{I_{1}\times \cdots \times I_{n} \times K_{1}\times \cdots \times K_{m}}\), \(A\in \mathbb{R}^{I\times K}\), \(s=i_{n}+\sum_{p=1}^{n-1}((i_{p}-1)\prod_{q=p+1}^{n}I_{q})\) and \(t=k_{m}+\sum_{p=1}^{m-1}((k_{p}-1)\prod_{q=p+1}^{m}K_{q})\). Note we use the convention \(\sum_{p=1}^{0}a_{p}=0\).
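Under this lexicographic index convention, \(\varPhi _{IK}\) amounts to a row-major (C-order) reshape; the sketch below, with hypothetical dimensions, spot-checks the index formula of Definition 2.2 in its 0-based form:

```python
import numpy as np

# Phi_IK as a row-major reshape of a fifth-order tensor into an I x K matrix.
rng = np.random.default_rng(2)
I1, I2, K1, K2, K3 = 2, 3, 2, 2, 2
T = rng.standard_normal((I1, I2, K1, K2, K3))

A = T.reshape(I1 * I2, K1 * K2 * K3)    # Phi_IK(T), shape (6, 8)

# Spot-check one entry against the (0-based) index formula of Definition 2.2.
i1, i2, k1, k2, k3 = 1, 2, 0, 1, 1
s = i2 + i1 * I2                        # row index
t = k3 + k2 * K3 + k1 * K2 * K3         # column index
assert A[s, t] == T[i1, i2, k1, k2, k3]
```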
Example 2.1
Let \(\mathcal{A}=(a_{i_{1}i_{2}k_{1}k_{2}k_{3}}) \in \mathbb{R}^{2\times 2\times 2\times 2\times 2}\) be a tensor such that
By Definition 2.2, we have
Remark 2.1
Given the positive integers \(I_{1},\ldots,I _{n}\), \(K_{1},\ldots,K_{m}\), we can define an inverse function \(\varPhi _{IK}^{-1}: \mathbb{R}^{I\times K}\rightarrow \mathbb{R}^{I_{1} \times \cdots \times I_{n}\times K_{1}\times \cdots \times K_{m}}\) of the transformation \(\varPhi _{IK}\) defined in Definition 2.2 as follows:
with \((A)_{st}\rightarrow (\mathcal{A})_{i_{1}\cdots i_{n}k_{1}\cdots k_{m}}\), where the jth column of the matrix A consists of the jth element in the set \(\{\mathcal{A}(:,\ldots,:,k_{1},\ldots,k_{m})\mid \forall k_{1},\ldots,k_{m}\}\). Here we sort all the elements of this set in lexicographic order, that is, from \((1,\ldots,1)\) to \((K_{1},\ldots,K_{m})\).
It is well known that, using the Kronecker product and the vectorization operator, one can transform a linear matrix equation into a linear system of equations. Similarly, the following proposition indicates that the transformation \(\varPhi _{IK}\) can transform a linear tensor equation into a linear matrix equation.
Proposition 2.1
([12])
Let \(\varPhi _{IJ}\), \(\varPhi _{JK}\), \(\varPhi _{IK}\) be defined as in Definition 2.2. Then
$$ \varPhi _{IK}(\mathcal{A}*_{n}\mathcal{X})=\varPhi _{IJ}(\mathcal{A})\varPhi _{JK}(\mathcal{X}), $$(9)
where the tensors \(\mathcal{A}\in \mathbb{R}^{I_{1}\times \cdots \times I_{n}\times J_{1}\times \cdots \times J_{n}}\), \(\mathcal{X} \in \mathbb{R}^{J_{1}\times \cdots \times J_{n}\times K_{1}\times \cdots \times K_{m}}\), \(\mathcal{C}\in \mathbb{R}^{I_{1}\times \cdots \times I_{n}\times K_{1}\times \cdots \times K_{m}}\).
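Proposition 2.1 can be checked numerically. The sketch below (with hypothetical sizes) relies on the fact that a C-order reshape realizes \(\varPhi \), so the Einstein product flattens to an ordinary matrix product:

```python
import numpy as np

# Numeric check of Proposition 2.1:
# Phi_IK(A *_n X) = Phi_IJ(A) Phi_JK(X), here with n = 2 and m = 2.
rng = np.random.default_rng(3)
A = rng.standard_normal((2, 3, 4, 5))       # R^{I1 x I2 x J1 x J2}
X = rng.standard_normal((4, 5, 6, 7))       # R^{J1 x J2 x K1 x K2}

lhs = np.tensordot(A, X, axes=2).reshape(6, 42)   # Phi_IK(A *_2 X)
rhs = A.reshape(6, 20) @ X.reshape(20, 42)        # Phi_IJ(A) Phi_JK(X)
assert np.allclose(lhs, rhs)
```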
The following lemma is a direct generalization of the derivative principle of a matrix.
Lemma 2.1
Let \(\mathcal{A}(t)\in \mathbb{R}^{I_{1}\times \cdots \times I_{n}\times J_{1}\times \cdots \times J_{n}}\), \(\mathcal{X}(t)\in \mathbb{R}^{J_{1}\times \cdots \times J_{n}\times K_{1}\times \cdots \times K_{m}}\). We have
$$ \frac{\mathrm{d} (\mathcal{A}(t)*_{n}\mathcal{X}(t) )}{\mathrm{d}t}=\dot{\mathcal{A}}(t)*_{n}\mathcal{X}(t)+\mathcal{A}(t)*_{n}\dot{\mathcal{X}}(t). $$(10)
Proof
Assume that \(\varPhi _{IJ}(\mathcal{A}(t))=A(t)\), \(\varPhi _{JK}(\mathcal{X}(t))=X(t)\). Then, by Definition 2.2, one has \(A(t)\in \mathbb{R}^{I\times J}\), \(X(t)\in \mathbb{R}^{J\times K}\). By the derivative principle of a matrix, we have
where s, v and w are defined as in Definition 2.2, and the second-to-last equality comes from Proposition 2.1. This completes the proof. □
NTCTZNNs and GNN for TVSTEs
In this section, we will present three noise-tolerant continuous-time Zhang neural networks (NTCTZNNs) and a gradient-based neural network (GNN) for TVSTEs, and we analyze their convergence properties.
Firstly, following Zhang et al.’s design method [4], we define a tensor-valued indefinite error function as follows:
$$ \mathcal{E}(t)=\mathcal{A}(t)*_{n}\mathcal{X}(t)+\mathcal{X}(t)*_{m}\mathcal{B}(t)-\mathcal{C}(t), $$(11)
where every entry denoted by \(e_{i_{1}\cdots i_{n}k_{1}\cdots k_{m}}(t) \in \mathbb{R}\) may be negative or unbounded.
Then, let us recall the Zhang neural network (ZNN) design formula (also termed Zhang et al.’s design formula)
$$ \dot{\mathcal{E}}(t)=-\gamma F\bigl(\mathcal{E}(t)\bigr), $$
where \(\gamma >0\) is a design parameter, \(F(\cdot )\) is a tensor activation function defined on every entry of the error function \(\mathcal{E}(t)\), and \(\dot{\mathcal{E}}(t)\) is the componentwise derivative of the error function \(\mathcal{E}(t)\). By (10) in Lemma 2.1, substituting the error function \(\mathcal{E}(t)\) into the above Zhang neural network (ZNN) design formula, we get a continuous-time ZNN (CTZNN):
$$ \mathcal{A}(t)*_{n}\dot{\mathcal{X}}(t)+\dot{\mathcal{X}}(t)*_{m}\mathcal{B}(t)=-\dot{\mathcal{A}}(t)*_{n}\mathcal{X}(t)-\mathcal{X}(t)*_{m}\dot{\mathcal{B}}(t)+\dot{\mathcal{C}}(t)-\gamma F\bigl(\mathcal{E}(t)\bigr), $$(12)
where \(\dot{\mathcal{A}}(t)\), \(\dot{\mathcal{B}}(t)\), \(\dot{\mathcal{C}}(t)\) denote the timederivatives of tensors \(\mathcal{A}(t)\), \(\mathcal{B}(t)\), \(\mathcal{C}(t)\), respectively.
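To make the CTZNN dynamics concrete, here is a minimal sketch for the matrix case \(m=n=1\) of (12) with the linear activation \(F(\mathcal{E})=\mathcal{E}\). The time-varying coefficients and the known solution \(X^{*}(t)\) below are hypothetical choices, not from the paper, and the coefficient derivatives are finite-differenced for brevity:

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma = 100.0
A = lambda t: np.array([[2.0 + np.sin(t), 0.0], [0.0, 2.0 + np.cos(t)]])
B = lambda t: np.array([[2.0, 0.0], [0.0, 2.0]])
Xs = lambda t: np.array([[np.sin(t), np.cos(t)], [-np.cos(t), np.sin(t)]])
C = lambda t: A(t) @ Xs(t) + Xs(t) @ B(t)   # so X*(t) solves A X + X B = C

def dX(t, x, h=1e-6):
    X = x.reshape(2, 2, order="F")
    E = A(t) @ X + X @ B(t) - C(t)
    # Finite-difference the time-derivatives of the coefficients.
    dA = (A(t + h) - A(t - h)) / (2 * h)
    dB = (B(t + h) - B(t - h)) / (2 * h)
    dC = (C(t + h) - C(t - h)) / (2 * h)
    rhs = -dA @ X - X @ dB + dC - gamma * E
    # Solve the implicit step A Xdot + Xdot B = rhs via vectorization.
    M = np.kron(np.eye(2), A(t)) + np.kron(B(t).T, np.eye(2))
    return np.linalg.solve(M, rhs.flatten(order="F"))

sol = solve_ivp(dX, (0.0, 5.0), np.zeros(4), rtol=1e-8, atol=1e-8)
X_end = sol.y[:, -1].reshape(2, 2, order="F")
print(np.linalg.norm(X_end - Xs(5.0)))   # tracking error, near zero
```

The Kronecker step mirrors Proposition 2.1: flattening turns the two-sided product into one linear solve per time step.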
Remark 3.1
Let \(f(\cdot )\) be the entry of the tensor-valued function \(F(\cdot )\). The function \(f(\cdot )\) can be set as any odd and strictly monotone increasing function. There are six basic types of activation functions in the literature, i.e., the linear activation function, the power activation function, the bipolar sigmoid activation function, the power-sigmoid activation function, the sign-bi-power activation function and the power-sum activation function [35]. Note that different activation functions result in different numerical performance of CTZNN (12). Generally speaking, the performance of CTZNN with the power-sigmoid activation function is better than that of CTZNN with the linear activation function, and CTZNN with the sign-bi-power activation function often converges in finite time.
CTZNN (12) is suitable to solve TVSTEs without noise. However, noise is ubiquitous and should not be ignored in real life. In the following, we only consider constant noise for simplicity, which is denoted by \(\eta (t)=\eta \). Set \(\bar{\eta }=(\eta )\in \mathbb{R}^{I_{1}\times \cdots \times I_{n}\times K_{1}\times \cdots \times K_{m}}\), whose entries all equal η. Based on the results about CTZNNs in the literature [36,37,38], we are going to propose three noise-tolerant CTZNNs for TVSTEs.
(1) Setting \(e(t)=\mathcal{E}(t)\) in the following improved Zhang design formula:
$$ \dot{e}(t)=-\gamma e(t)+\eta (t), $$(13)
we get the first noise-tolerant CTZNN (NTCTZNN1) for TVSTEs:
whose convergence is given in the following theorem.
Theorem 3.1
NTCTZNN1 (14) converges to the theoretical solution of TVSTEs with the limit of the residual error being \(\sqrt{IK}\eta /\gamma \).
Proof
Obviously, NTCTZNN1 (14) can be written as
which can be decoupled into the following IK differential equations:
which has a closed-form solution:
Furthermore, we can have
So
This completes the proof. □
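The scalar dynamics underlying Theorem 3.1 can be checked directly. The sketch below assumes the decoupled entrywise ODE \(\dot{e}=-\gamma e+\eta \) implied by the proof, with \(\gamma =1000\), \(\eta =1\) and \(IK=16\) mirroring Example 4.1:

```python
import numpy as np

# Closed form of e' = -gamma*e + eta:
#   e(t) = (e(0) - eta/gamma) * exp(-gamma*t) + eta/gamma  ->  eta/gamma.
gamma, eta, e0 = 1000.0, 1.0, 5.0
e = lambda t: (e0 - eta / gamma) * np.exp(-gamma * t) + eta / gamma

assert np.isclose(e(1.0), eta / gamma)      # settled at eta/gamma = 1e-3
# With IK = 16 identical entries, the residual norm limit is
# sqrt(IK)*eta/gamma = 0.004, the value reported for gamma = 1000 later.
assert np.isclose(np.sqrt(16) * eta / gamma, 0.004)
```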
(2) Setting \(e(t)=\mathcal{E}(t)\) in the following improved Zhang design formula [36]:
$$ \dot{e}(t)=-\gamma _{1}e(t)-\gamma _{2} \int _{0}^{t}e(\tau )\,\mathrm{d}\tau +\eta (t), $$(15)
we get the second noise-tolerant CTZNN (NTCTZNN2) for TVSTEs:
Theorem 3.2
No matter how large the unknown noise η is, NTCTZNN2 (16) converges to the theoretical solution of TVSTEs globally.
Proof
Obviously, NTCTZNN2 (16) can be written as
which can be decoupled into IK differential equations:
Taking Laplace transformation on both sides of (17), one has
where \(\varepsilon _{i_{1}\cdots i_{n}k_{1}\cdots k_{m}}(s)\) is the Laplace transform of \(e_{i_{1}\cdots i_{n}k_{1}\cdots k_{m}}(t)\). From (18), we have
Two poles of its transfer function are
which are located in the left half-plane for any \(\gamma _{1}, \gamma _{2}>0\). Thus the system (17) is stable and the final value theorem holds. That is,
This completes the proof. □
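A scalar sketch of the NTCTZNN2 error dynamics (17), assuming the decoupled form \(\dot{e}=-\gamma _{1}e-\gamma _{2}\int _{0}^{t}e\,\mathrm{d}\tau +\eta \) consistent with the pole locations above: even with a large constant noise, the error still vanishes, matching Theorem 3.2:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Differentiating once (eta constant) gives e'' + g1*e' + g2*e = 0,
# so e -> 0 regardless of the size of eta.
g1, g2, eta, e0 = 10.0, 10.0, 100.0, 5.0

def rhs(t, y):
    e, s = y                 # s accumulates the integral of e
    return [-g1 * e - g2 * s + eta, e]

sol = solve_ivp(rhs, (0.0, 10.0), [e0, 0.0], rtol=1e-9, atol=1e-9)
print(abs(sol.y[0, -1]))     # residual entry near zero despite eta = 100
```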
Remark 3.2
If \(\gamma _{2}=0\), then NTCTZNN2 (16) reduces to NTCTZNN1 (14). Though NTCTZNN2 (16) is more complicated than NTCTZNN1 (14) when \(\gamma _{2}>0\), Theorems 3.1 and 3.2 indicate that NTCTZNN2 (16) is more stable than NTCTZNN1 (14) in this case.
Remark 3.3
In practical computation, to avoid the integral manipulation, it is better to transform the integro-differential equation NTCTZNN2 (16) into the following differential equations:
(3) Setting \(e(t)=\mathcal{E}(t)\) in the following improved Zhang design formula:
where \(F(\cdot )\), \(G(\cdot )\) are two activation-function arrays, we get the third noise-tolerant CTZNN (NTCTZNN3) for TVSTEs:
Obviously, NTCTZNN3 (21) can be written as
which can be decoupled into IK differential equations:
Define an intermediate variable \(y_{i_{1}\cdots i_{n}k_{1}\cdots k _{m}}(t)\) as follows:
To establish the convergence of NTCTZNN3 (21), we make the following assumption about the constant noise η̄.
Assumption 3.1
The constant noise \(\bar{\eta }=(\eta ) \in \mathbb{R}^{I_{1}\times \cdots \times I_{n}\times K_{1}\times \cdots \times K_{m}}\) satisfies
where \(\alpha \in (0,1)\), and \(g(\cdot )\) is the entry of the activation function \(G(\cdot )\).
Theorem 3.3
Under Assumption 3.1, NTCTZNN3 (21) converges to the theoretical solution of TVSTEs with the limit of the residual error being \(\sqrt{IK}g^{-1}(\eta /\gamma _{2})\).
Proof
Taking the time-derivative of (23) on both sides, one has
Then substituting (24) into (22), we have
For the differential equation (25), let us define a Lyapunov function candidate
whose time-derivative is
Since \(g(\cdot )\) is an odd and monotone increasing function, we have \(y_{i_{1}\cdots i_{n}k_{1}\cdots k_{m}}(t)\times g(y_{i_{1}\cdots i_{n}k_{1} \cdots k_{m}}(t))>0\) for \(y_{i_{1}\cdots i_{n}k_{1}\cdots k_{m}}(t) \neq 0\), and \(y_{i_{1}\cdots i_{n}k_{1}\cdots k_{m}}(t)g(y_{i_{1} \cdots i_{n}k_{1}\cdots k_{m}}(t))=0\) if and only if \(y_{i_{1}\cdots i_{n}k_{1}\cdots k_{m}}(t)=0\). This indicates the negative definiteness of \(\dot{v}_{i_{1}\cdots i_{n}k_{1}\cdots k_{m}}(t)\). Then, by Lyapunov stability theory, equilibrium \(g^{-1}(\eta /\gamma _{2})\) of (25) is globally asymptotically stable. Then, by the definition of \(y_{i_{1}\cdots i_{n}k_{1}\cdots k_{m}}(t)\) in (23), one has
Since \(f(\cdot )\) is also an odd and monotone increasing function, one has
This, together with (26), \(\eta \geq 0\) and \(\gamma _{1},\gamma _{2}>0\), implies that
So
This completes the proof. □
Remark 3.4
Obviously, when \(F(\cdot )\) and \(G(\cdot )\) are identity mappings, NTCTZNN3 (21) reduces to NTCTZNN2 (16).
At the end of this section, we develop a gradient-based neural network (GNN) to solve TVSTEs for comparison. Define a scalar-valued norm-based energy function:
$$ \xi (\mathcal{X})=\frac{1}{2} \bigl\Vert \mathcal{A}(t)*_{n}\mathcal{X}+\mathcal{X}*_{m}\mathcal{B}(t)-\mathcal{C}(t) \bigr\Vert ^{2}. $$
Then, based on Theorem 3.1 in [13], the gradient of the energy function \(\xi (\mathcal{X})\) is
$$ \nabla \xi (\mathcal{X})=\mathcal{A}^{\top }(t)*_{n}\mathcal{E}(t)+\mathcal{E}(t)*_{m}\mathcal{B}^{\top }(t), $$
where \(\mathcal{E}(t)=\mathcal{A}(t)*_{n}\mathcal{X}+\mathcal{X}*_{m} \mathcal{B}(t)-\mathcal{C}(t)\). Thus, we get the following GNN:
$$ \dot{\mathcal{X}}(t)=-\gamma \bigl(\mathcal{A}^{\top }(t)*_{n}\mathcal{E}(t)+\mathcal{E}(t)*_{m}\mathcal{B}^{\top }(t) \bigr), $$(27)
where \(\gamma >0\) is a design parameter.
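The gradient formula behind GNN (27) can be verified by finite differences in the matrix case \(m=n=1\) (an added sketch; the random data are hypothetical):

```python
import numpy as np

# For xi(X) = 0.5*||A X + X B - C||_F^2 the gradient is A^T E + E B^T.
rng = np.random.default_rng(4)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))
X = rng.standard_normal((3, 3))

E = A @ X + X @ B - C
grad = A.T @ E + E @ B.T

# Central-difference check of one entry of the gradient.
h = 1e-6
Xp, Xm = X.copy(), X.copy()
Xp[1, 2] += h
Xm[1, 2] -= h
xi = lambda Y: 0.5 * np.linalg.norm(A @ Y + Y @ B - C) ** 2
assert np.isclose((xi(Xp) - xi(Xm)) / (2 * h), grad[1, 2], rtol=1e-4)
```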
Remark 3.5
The main difference between GNN (27) and NTCTZNN1-3 is that the former does not make use of the time-derivative information \(\dot{\mathcal{A}}(t)\), \(\dot{\mathcal{B}}(t)\) and \(\dot{\mathcal{C}}(t)\) to enhance its efficiency.
The following theorem establishes the convergence of GNN (27) for static Sylvester tensor equations, i.e., \(\dot{\mathcal{A}}=\mathcal{O}\), \(\dot{\mathcal{B}}=\mathcal{O}\), \(\dot{\mathcal{C}}=\mathcal{O}\); its proof is motivated by Theorems 1 and 2 in [38].
Theorem 3.4
As for the convergence of GNN (27), we have the following conclusions:
 (1)
GNN (27) exponentially converges to the theoretical solution of static Sylvester tensor equations without noise;
 (2)
the computational error of GNN (27) for static Sylvester tensor equations with noise is upper bounded. Furthermore, if the design parameter γ tends to positive infinity, the steady-state solution error diminishes to zero.
Proof
Set \(\varPhi _{II}(\mathcal{A})=A\), \(\varPhi _{IK}(\mathcal{X}(t))=X(t)\), \(\varPhi _{KK}(\mathcal{B})=B\), \(\varPhi _{IK}(\mathcal{C})=C\), \(\varPhi _{IK}( \mathcal{E})=E\), \(\varPhi _{IK}(\bar{\eta })=\theta \). Then TVSTEs can be written as
i.e.,
So GNN (27) can be vectorized as
Assume \(\mathcal{X}^{*}\) is one solution of TVSTEs, and set \(\varPhi _{IK}(\mathcal{X}^{*})=X^{*}\). Then
From the above equations, one has
i.e.,
where \(M=A\oplus B^{\top }\). Setting \(y(t)=\operatorname{vec}(X(t))- \operatorname{vec}(X^{*})\), we have
Suppose α is the minimum eigenvalue of \(M^{\top }M\), which is assumed to be positive definite.
(1) For the static Sylvester tensor equations without noise,
Thus,
which implies that the neural state \(\mathcal{X}(t)\) converges to the theoretical solution \(\mathcal{X}^{*}\) with the exponential rate \(\alpha \gamma /2\).
(2) For the static Sylvester tensor equations with noise,
(i) If \(\alpha \gamma \vert y_{i}(t) \vert - \vert (\operatorname{vec}(\theta ))_{i} \vert \geq 0\), \(\forall i, t\), then according to the Lyapunov stability theory [39], the time-varying vector \(y(t)\) converges to zero as time evolves. Thus \(\mathcal{X}(t)\) converges to the theoretical solution \(\mathcal{X}^{*}\).
(ii) If \(\alpha \gamma \vert y_{i}(t) \vert - \vert (\operatorname{vec}(\theta ))_{i} \vert \leq 0\), \(\exists i, t\), then the function \(v(y(t),t)\) may be increasing. However, from the inequality \(\alpha \gamma \vert y_{i}(t) \vert - \vert (\operatorname{vec}(\theta ))_{i} \vert \leq 0\), it is easy to deduce that
Overall, one has
This completes the proof. □
Numerical results
In this section, two examples are presented to show the efficiency of the proposed NTCTZNN1-3 and GNN for solving TVSTEs. All experiments are performed on a ThinkPad laptop with an Intel Core 2 CPU at 2.10 GHz and 4.00 GB RAM, and written in Matlab R2014a. All first-order ordinary differential equations encountered are solved by the built-in function ode45.
Example 4.1
Consider the time-varying Sylvester tensor equations (\(m=n=2\), \(I_{1}=I_{2}=K_{1}=K_{2}=2\)):
where
We set the noise \(\eta =1\) and the initial state tensor \(\mathcal{X} _{0}=\mathcal{I}\).
Firstly, we use the GNN with \(\gamma =500\) or \(\gamma =1000\) to solve problem (28), and the generated residual errors \(\Vert \mathcal{E}(t) \Vert \) are shown in Fig. 1. The final residual errors generated by the GNN with \(\gamma =500\) and the GNN with \(\gamma =1000\) are 0.0047 and 0.0024, respectively. Furthermore, the numbers of iterations of the GNN with \(\gamma =500\) and the GNN with \(\gamma =1000\) are 136,041 and 272,077, respectively.
Secondly, we use NTCTZNN1 to solve problem (28). Figure 2 shows the residual error \(\Vert \mathcal{E}(t) \Vert \) generated by NTCTZNN1, and the final residual errors generated by NTCTZNN1 with \(\gamma =500\) and NTCTZNN1 with \(\gamma =1000\) are 0.0080 and 0.0040, respectively. From Fig. 2 we can see that the performance of NTCTZNN1 with \(\gamma =1000\) is better than that of NTCTZNN1 with \(\gamma =500\), which is in accordance with Theorem 3.1. In addition, the numbers of iterations of NTCTZNN1 with \(\gamma =500\) and NTCTZNN1 with \(\gamma =1000\) are 6541 and 12,561, respectively, which are only about 4% of those generated by GNN. Therefore, the computational cost of NTCTZNN1 is reduced greatly, though it is a little less accurate than GNN.
Thirdly, we use NTCTZNN2 to solve problem (28). The residual error \(\Vert \mathcal{E}(t) \Vert \) generated by NTCTZNN2 with \(\gamma _{1}= \gamma _{2}=10\) is displayed in Fig. 3, and the final residual error is \(1.5972\times 10^{-4}\). In addition, the number of iterations of NTCTZNN2 with \(\gamma _{1}=\gamma _{2}=10\) is 373, which is much less than that of NTCTZNN1.
Generally speaking, a large design parameter can enhance the efficiency of NTCTZNN. However, we also observed that NTCTZNN with a large design parameter often takes more CPU time, because the larger the design parameter is, the smaller the step size of ode45 is. In fact, the CPU times of NTCTZNN2 with \(\gamma _{1}=\gamma _{2}=10\) and NTCTZNN2 with \(\gamma _{1}=\gamma _{2}=100\) are 1.4219 s and 1.8594 s, respectively. Comparing Figs. 2 and 3, we find that: (1) NTCTZNN2 with \(\gamma _{1}= \gamma _{2}=10\) is more efficient than NTCTZNN1 with \(\gamma =1000\), because the final residual error generated by NTCTZNN2 with \(\gamma _{1}=\gamma _{2}=10\) is about \(10^{-4}\), while the final residual error generated by NTCTZNN1 with \(\gamma =1000\) is about \(10^{-3}\); (2) the residual error generated by NTCTZNN1 becomes stable quickly, but the residual error generated by NTCTZNN2 keeps shrinking during the tested time period \([0,10]\), which is in accordance with Theorem 3.2. Overall, NTCTZNN2 obtains a more accurate solution without increasing the computational cost. The neural states \(\mathcal{X}(t)(1,1,1,1)\), \(\mathcal{X}(t)(1,2,1,2)\), \(\mathcal{X}(t)(2,1,2,1)\), \(\mathcal{X}(t)(2,2,2,2)\) computed by NTCTZNN2 are plotted in Fig. 4, which shows that the neural states converge to the corresponding entries of the theoretical solution. (Here the theoretical solution \(\mathcal{X}^{*}(t)\) is denoted by ∗-dotted blue curves, and the neural-network solutions are denoted by +-dotted red curves.)
Fourthly, we use NTCTZNN3 with \(\gamma _{1}=\gamma _{2}=10\) to solve problem (28), and the activation function is set as the sign-bi-power activation function [37]:
where \(\tilde{\eta }\in (0,1)\) and \(\operatorname{sig}^{\tilde{\eta }}(x)\) is defined as
We set \(\tilde{\eta }=0.5\), \(k=1\). The residual error \(\Vert \mathcal{E}(t) \Vert \) generated by NTCTZNN3 with \(\gamma _{1}=\gamma _{2}=2\) is displayed in Fig. 5, and the final residual error is \(9.8868\times 10^{-5}\). Furthermore, its number of iterations is 1097. Comparing the accuracy and the number of iterations, we can find that NTCTZNN2 is more efficient than NTCTZNN3.
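Since the display defining \(\operatorname{sig}^{\tilde{\eta }}(x)\) did not survive extraction, the sketch below assumes the usual convention \(\operatorname{sig}^{r}(x)=\vert x\vert ^{r}\operatorname{sign}(x)\), with \(k=1\) and \(\tilde{\eta }=0.5\) as above:

```python
import numpy as np

def sig(x, r):
    # sig^r(x) = |x|^r * sign(x), applied entrywise.
    return np.sign(x) * np.abs(x) ** r

def sign_bi_power(x, eta=0.5, k=1.0):
    # Sign-bi-power activation: average of sig^eta and sig^{1/eta}.
    return (k / 2.0) * (sig(x, eta) + sig(x, 1.0 / eta))

x = np.array([-4.0, -1.0, 0.0, 1.0, 4.0])
print(sign_bi_power(x))   # [-9. -1.  0.  1.  9.], odd and strictly increasing
```

Near zero, the \(\operatorname{sig}^{\tilde{\eta }}\) term dominates and steepens the dynamics, which is what yields the finite-time behavior noted in Remark 3.1.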
Overall, taking the two criteria, i.e., accuracy and number of iterations, into consideration, we can rank the proposed four neural networks as follows:
$$ \text{NTCTZNN2}\succ \text{NTCTZNN3}\succ \text{NTCTZNN1}\succ \text{GNN}, $$
in which \(a\succ b\) means that a is more efficient than b.
NTCTZNN1-3 and GNN are customized to solve TVSTEs, and they can also be exploited to solve static Sylvester tensor equations, which can be viewed as a special case of TVSTEs (i.e., \(\dot{\mathcal{A}}= \dot{\mathcal{B}}=\dot{\mathcal{C}}=\mathcal{O}\)). For simplicity, we only apply NTCTZNN3 to solve the static Sylvester tensor equations, as shown in the following example.
Example 4.2
Let us consider a static Sylvester tensor equation (\(m=n=2\), \(I_{1}=I_{2}=K_{1}=K_{2}=2\)):
where
We set the noise \(\eta =1\) and the initial state tensor \(\mathcal{X} _{0}=\mathcal{I}\). The numerical results generated by NTCTZNN3 with \(\gamma _{1}=\gamma _{2}=10\) are plotted in Fig. 6, and the final residual error is \(2.1473\times 10^{-4}\).
As shown in Fig. 6, the neural state computed by NTCTZNN3 converges to the theoretical solution. In addition, the sequence \(\{ \Vert \mathcal{E}(t) \Vert \}\) of the residual errors converges to zero. These demonstrate the effectiveness of NTCTZNN3 for the static Sylvester tensor equations. Furthermore, the convergence property of GNN in this setting also needs to be investigated.
Conclusion
By following Zhang et al.’s design method, we have proposed three noise-tolerant continuous-time Zhang neural networks (NTCTZNNs) and a gradient-based neural network (GNN) to solve the time-varying Sylvester tensor equations with noise, and have established their various convergence results. These complement some existing results. Numerical results substantiate the efficacy and superiority of the proposed NTCTZNNs.
It is possible to extend the ideas in this paper to other types of tensor equations, such as time-varying periodic Sylvester tensor equations or time-varying coupled Sylvester tensor equations. Furthermore, we will apply the designed neural networks to realize the path-tracking control of different robot manipulators in the future.
References
 1.
Einstein, A.: The foundation of the general theory of relativity. In: Kox, A.J., Klein, M.J., Schulmann, R. (eds.) The Collected Papers of Albert Einstein, vol. 6. Princeton University Press, Princeton (2007)
 2.
Haussühl, S.: Physical Properties of Crystals. Wiley, Weinheim (2007)
 3.
Brazell, M., Li, N., Navasca, C., Tamon, C.: Solving multilinear systems via tensor inversion. SIAM J. Matrix Anal. Appl. 34, 542–570 (2013)
 4.
Zhang, Y.N., Jiang, D.C., Wang, J.: A recurrent neural network for solving Sylvester equation with timevarying coefficients. IEEE Trans. Neural Netw. 13(5), 1053–1063 (2002)
 5.
Zhou, B., Duan, G.R., Lin, Z.: A parametric periodic Lyapunov equation with application in semi-global stabilization of discrete-time periodic systems subject to actuator saturation. Automatica 47, 316–325 (2011)
 6.
Wang, Q.W., He, Z.H.: Systems of coupled generalized Sylvester matrix equations. Automatica 50(11), 2840–2844 (2014)
 7.
Sun, M., Wang, Y.J.: The conjugate gradient methods for solving the generalized periodic Sylvester matrix equations. J. Appl. Math. Comput. 60, 413–434 (2019)
 8.
Sun, L.Z., Zheng, B.D., Bu, C.J., Wei, Y.M.: Moore–Penrose inverse of tensors via Einstein product. Linear Multilinear Algebra 64, 686–698 (2016)
 9.
Li, B.W., Sun, Y.S., Zhang, D.W.: Chebyshev collocation spectral methods for coupled radiation and conduction in a concentric spherical participating medium. J. Heat Transf. 131, 062701 (2009)
 10.
Li, B.W., Tian, S., Sun, Y.S., Mao, Z.: Schur-decomposition for 3D matrix equations and its application in solving radiative discrete ordinates equations discretized by Chebyshev collocation spectral method. J. Comput. Phys. 229, 1198–1212 (2010)
 11.
Ding, F., Chen, T.W.: Gradient based iterative algorithms for solving a class of matrix equations. IEEE Trans. Autom. Control 50(8), 1216–1221 (2005)
 12.
Wang, Q.W., Xu, X.J.: Iterative algorithms for solving some tensor equations. Linear Multilinear Algebra 67, 1325–1349 (2019)
 13.
Huang, B.H., Ma, C.F.: An iterative algorithm to solve the generalized Sylvester tensor equations. Linear Multilinear Algebra (2018). https://doi.org/10.1080/03081087.2018.1536732
 14.
Lv, L.L., Zhang, Z., Zhang, L., Wang, W.S.: An iterative algorithm for periodic Sylvester matrix equations. J. Ind. Manag. Optim. 14(1), 413–425 (2018)
 15.
Hajarian, M.: Generalized conjugate direction algorithm for solving the general coupled matrix equations over symmetric matrices. Numer. Algorithms 73(3), 591–609 (2016)
 16.
Hajarian, M.: New finite algorithm for solving the generalized nonhomogeneous Yakubovichtranspose matrix equation. Asian J. Control 19(1), 164–172 (2017)
 17.
Hajarian, M.: Computing symmetric solutions of general Sylvester matrix equations via Lanczos version of biconjugate residual algorithm. Comput. Math. Appl. 76(4), 686–700 (2018)
 18.
Lv, L.L., Zhang, Z.: On the periodic Sylvester equations and their applications in periodic Luenberger observers design. J. Franklin Inst. 353(5), 1005–1018 (2016)
 19.
Lv, L.L., Zhang, Z.: Finite iterative solutions to periodic Sylvester matrix equations. J. Franklin Inst. 354(5), 2358–2370 (2017)
 20.
Lv, L.L., Zhang, Z.: A parametric poles assignment algorithm for secondorder linear periodic systems. J. Franklin Inst. 354(8), 8057–8071 (2017)
 21.
Lv, L.L., Zhang, Z., Zhang, L.: A periodic observers synthesis approach for LDP systems based oniteration. IEEE Access 6, 8539–8546 (2018)
 22.
Lv, L.L., Zhang, Z., Zhang, L., Liu, X.X.: Gradient based approach for generalized discretetime periodic coupled Sylvester matrix equations. J. Franklin Inst. 355(15), 7691–7705 (2018)
 23.
Hajarian, M.: Matrix iterative methods for solving the Sylvestertranspose and periodic Sylvester matrix equations. J. Franklin Inst. 350(10), 3328–3341 (2013)
 24.
Hajarian, M.: Generalized reflexive and antireflexive solutions of the coupled Sylvester matrix equations via CD algorithm. J. Vib. Control 24(2), 343–356 (2016)
 25.
Hajarian, M.: Least squares solution of the linear operator equation. J. Optim. Theory Appl. 170(1), 205–219 (2016)
 26.
Hajarian, M.: Solving the general Sylvester discretetime periodic matrix equations via the gradient based iterative method. Appl. Math. Lett. 52, 87–95 (2016)
 27.
Sun, M., Wang, Y.J., Liu, J.: Two modified leastsquares iterative algorithms for the Lyapunov matrix equations. Adv. Differ. Equ. 2019, 305 (2019)
 28.
Guo, D.S., Lin, X.J., Su, Z.Z., Sun, S.B., Huang, Z.J.: Design and analysis of two discretetime ZD algorithms for timevarying nonlinear minimization. Numer. Algorithms 77(1), 23–36 (2018)
 29.
Sun, M., Tian, M.Y., Wang, Y.J.: Discretetime Zhang neural networks for timevarying nonlinear optimization. Discrete Dyn. Nat. Soc. 4745759, 1–14 (2019)
 30.
Sun, M., Wang, Y.J.: General fivestep discretetime Zhang neural network for timevarying nonlinear optimization. Bull. Malays. Math. Soc. (2019). https://doi.org/10.1007/s40840019007704
 31.
Zhang, Y.N., Li, Z.: Zhang neural network for online solution of timevarying convex quadratic program subject to timevarying linearequality constraints. Phys. Lett. A 373, 1639–1643 (2009)
 32.
Jin, L., Zhang, Y.N.: Discretetime Zhang neural network of \(\mathcal{O}(\tau ^{3})\) pattern for timevarying matrix pseudoinversion with application to manipulator motion generation. Neurocomputing 142, 165–173 (2014)
 33.
Sun, M., Liu, J.: General sixstep discretetime Zhang neural network for timevarying tensor absolute value equations. Discrete Dyn. Nat. Soc. (2019). Accepted
 34.
Brazell, M., Li, N., Navasca, C., Tamon, C.: Solving multilinear systems via tensor inversion. SIAM J. Matrix Anal. Appl. 34, 542–570 (2013)
 35.
Guo, D.S., Zhang, Y.N.: Zhang neural network versus gradientbased neural network for timevarying linear matrix equation solving. Neurocomputing 74, 3708–3712 (2011)
 36.
Jin, L., Zhang, Y.N.: Integrationenhanced Zhang neural network for realtimevarying matrix inversion in the presence of various kinds of noises. IEEE Trans. Neural Netw. Learn. Syst. 27(12), 2615–2627 (2016)
 37.
Li, S., Chen, S., Liu, B.: Accelerating a recurrent neural network to finitetime convergence for solving timevarying Sylvester equation by using a signbipower activation function. Neural Process. Lett. 37, 1–17 (2015)
 38.
Yi, C.F., Chen, Y.H., Lu, Z.H.: Improved gradientbased neural networks for online solution of Lyapunov matrix equation. Inf. Process. Lett. 111, 780–786 (2011)
 39.
Ge, S.S., Lee, T.H., Harris, C.J.: Adaptive Neural Network Control of Robotic Manipulators. World Scientific, London (1998)
Acknowledgements
The authors thank the two anonymous reviewers for their valuable comments and suggestions, which helped improve the paper.
Funding
This work is supported by the National Natural Science Foundation of China and of Shandong Province (Nos. 11671228, 11601475, ZR2016AL05), the Doctoral Initiation Fund of Zaozhuang University, and the Qingtan Scholar Project of Zaozhuang University.
Author information
Contributions
The first author provided the problem and gave the proof of the main results, and the second author finished the numerical experiment. All authors read and approved the final manuscript.
Corresponding author
Correspondence to Sun Min.
Ethics declarations
Competing interests
The authors declare that there are no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Min, S., Jing, L.: Noise-tolerant continuous-time Zhang neural networks for time-varying Sylvester tensor equations. Adv. Differ. Equ. 2019, 465 (2019). doi:10.1186/s13662-019-2406-8
Keywords
- Time-varying Sylvester tensor equations
- Noise-tolerant continuous-time Zhang neural network
- Gradient-based neural network
- Global convergence