Mean-square exponential input-to-state stability of stochastic quaternion-valued neural networks with time-varying delays

In this paper, we consider the stability problem for a class of stochastic quaternion-valued neural networks with time-varying delays. Without decomposing the quaternion-valued systems into equivalent real-valued systems, we obtain sufficient conditions for the mean-square exponential input-to-state stability of the quaternion-valued stochastic neural networks by using Lyapunov functionals and stochastic analysis techniques. Our results are completely new. Finally, a numerical example is given to illustrate the feasibility of our results.


Introduction
As is well known, research on the dynamics of neural network models has achieved fruitful results, with applications in pattern recognition, automatic control, signal processing and artificial intelligence. However, most neural network models proposed and discussed in the literature are deterministic, since such models are simple and easy to analyze. In fact, any actual system is subject to a variety of random factors. In particular, in real nervous systems and in the implementation of artificial neural networks, noise is unavoidable [1,2] and should be taken into consideration in modeling. A stochastic neural network is an artificial neural network used as a tool of artificial intelligence. Therefore, it is of practical importance to study stochastic neural networks. The authors of [3] studied the stability of stochastic neural networks in 1996. Subsequently, many scholars carried out extensive research and made considerable progress [4][5][6][7]. Due to the finite switching speed of neurons and amplifiers, time delays inevitably exist in biological and artificial neural network models. In recent years, the stability of delayed stochastic neural networks has become a hot topic among many scholars [8][9][10][11][12][13][14][15]. It is well known that external inputs can influence the dynamic behaviors of neural networks in practical applications. Therefore, it is significant to study the input-to-state stability problem in the field of stochastic neural networks [16][17][18][19].
Besides, quaternion-valued neural networks have become one of the most popular research topics, due to their storage capacity advantage over real-valued and complex-valued neural networks. They can be applied in robotics, attitude control of satellites, computer graphics, ensemble control, color night vision and image compression [20,21]. The skew field of quaternions is defined by

Q := {q = q^R + iq^I + jq^J + kq^K},

where q^R, q^I, q^J, q^K are real numbers, and the three imaginary units i, j and k obey Hamilton's multiplication rules:

i^2 = j^2 = k^2 = ijk = -1,  ij = -ji = k,  jk = -kj = i,  ki = -ik = j.

Since quaternion-valued neural networks were proposed, their study has received much attention from many scholars, and some results have been obtained on the stability ([22][23][24][25][26][27]), dissipativity ([28,29]), input-to-state stability [30] and anti-periodic solutions [31] of quaternion-valued neural networks. Very recently, many scholars considered the problem of robust stability for stochastic complex-valued neural networks [32,33]. Subsequently, some scholars considered the problem of stability for stochastic quaternion-valued neural networks [34,35]. However, to the best of our knowledge, till now there is no result on the mean-square exponential input-to-state stability analysis of stochastic quaternion-valued neural networks by a direct method. So it is a challenging and important problem in theory and applications. Inspired by the previous research, and in order to fill the gap in the research field of quaternion-valued stochastic neural networks, the work of this paper comes from two main motivations. (1) The stability criterion is mean-square exponential input-to-state stability, which is more general than the traditional mean-square exponential stability. In the past decade, many authors have studied the input-to-state stability of stochastic delayed neural networks [16][17][18][19].
(2) Until recently, only a little literature [34,35] had studied the mean-square stability of quaternion-valued stochastic neural networks; thus it is worth studying the mean-square exponential input-to-state stability of quaternion-valued stochastic neural networks by a direct method.
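Hamilton's multiplication rules stated above can be made concrete in code. The following minimal sketch (the class and method names are our own) implements the quaternion product, the conjugate z* = z^R - iz^I - jz^J - kz^K and the norm ||z||_Q used throughout the paper:

```python
import math

class Quaternion:
    """Quaternion q = qR + i*qI + j*qJ + k*qK with Hamilton's product rules."""

    def __init__(self, qR, qI, qJ, qK):
        self.qR, self.qI, self.qJ, self.qK = qR, qI, qJ, qK

    def __mul__(self, o):
        # Hamilton's rules: i^2 = j^2 = k^2 = ijk = -1,
        # ij = -ji = k, jk = -kj = i, ki = -ik = j.
        return Quaternion(
            self.qR * o.qR - self.qI * o.qI - self.qJ * o.qJ - self.qK * o.qK,
            self.qR * o.qI + self.qI * o.qR + self.qJ * o.qK - self.qK * o.qJ,
            self.qR * o.qJ - self.qI * o.qK + self.qJ * o.qR + self.qK * o.qI,
            self.qR * o.qK + self.qI * o.qJ - self.qJ * o.qI + self.qK * o.qR,
        )

    def conj(self):
        # z* = zR - i zI - j zJ - k zK
        return Quaternion(self.qR, -self.qI, -self.qJ, -self.qK)

    def norm(self):
        # ||z||_Q = sqrt(z z*) = sqrt(zR^2 + zI^2 + zJ^2 + zK^2)
        return math.sqrt(self.qR**2 + self.qI**2 + self.qJ**2 + self.qK**2)
```

For instance, with i = Quaternion(0, 1, 0, 0) and j = Quaternion(0, 0, 1, 0), the product i*j gives k while j*i gives -k, reflecting the non-commutativity that prevents a naive componentwise treatment of system coefficients.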
Motivated by the above statement, in this paper we consider the following stochastic quaternion-valued neural network:

dz_l(t) = [ -a_l(t)z_l(t) + Σ_{k=1}^n b_lk(t) f_k(z_k(t)) + Σ_{k=1}^n c_lk(t) g_k(z_k(t - θ_lk(t))) + U_l(t) ] dt + Σ_{k=1}^n σ_lk(z_k(t - η_lk(t))) dB_k(t),   (1.1)

where l ∈ {1, 2, ..., n} =: N, n is the number of neurons in layers; z_l(t) ∈ Q is the state of the lth neuron at time t; a_l(t) > 0 is the self-feedback connection weight; b_lk(t), c_lk(t) ∈ Q are, respectively, the connection weight and the delay connection weight from neuron k to neuron l; θ_lk(t) and η_lk(t) are the transmission delays; f_k, g_k : Q → Q are the activation functions; U(t) = (U_1(t), U_2(t), ..., U_n(t)) belongs to L^∞, where L^∞ denotes the class of essentially bounded functions U from R^+ to Q^n with ||U||_∞ = ess sup_{t≥0} ||U(t)||_{Q^n} < ∞; B(t) = (B_1(t), B_2(t), ..., B_n(t))^T is an n-dimensional Brownian motion defined on a complete probability space; σ_lk : Q → Q is a Borel measurable function; and σ = (σ_lk)_{n×n} is the diffusion coefficient matrix.

For every z ∈ Q, the conjugate of z is defined as z* = z^R - iz^I - jz^J - kz^K, and the norm of z is defined as ||z||_Q = sqrt(zz*) = sqrt((z^R)^2 + (z^I)^2 + (z^J)^2 + (z^K)^2). For every z = (z_1, z_2, ..., z_n)^T ∈ Q^n, we define ||z||_{Q^n} = max_{l∈N} {||z_l||_Q}. For convenience, for a bounded quaternion-valued function h on R we denote h^+ = sup_{t∈R} ||h(t)||_Q, and for a bounded real-valued function a we denote a^- = inf_{t∈R} a(t).

The initial conditions of system (1.1) are of the form

z_l(s) = φ_l(s),  s ∈ [-τ, 0],  l ∈ N,

where φ_l ∈ C([-τ, 0], Q) and τ denotes the maximum of the delays.

Compared with previous results, our work has the following advantages. Firstly, this is the first time this problem is studied, and it fills a gap in the field of stochastic quaternion-valued neural networks. Secondly, the quaternion-valued stochastic neural network (1.1) contains real-valued and complex-valued stochastic neural networks as special cases, so the main results of this paper are new and more general than those for the existing quaternion-valued neural network models in the literature. Thirdly, unlike some previous studies of quaternion-valued stochastic neural networks, we do not decompose the systems under consideration into real-valued systems, but rather study the quaternion-valued stochastic systems directly.
Finally, the method of this paper can be used to study the mean-square exponential input-to-state stability of other types of quaternion-valued stochastic neural networks.
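For readers who wish to experiment with system (1.1) numerically, the following Euler-Maruyama sketch simulates a one-neuron (n = 1) instance, with quaternions stored as length-4 real vectors. All coefficient choices (a = 2, b = c = 0.3 + 0.1i, f = g = componentwise tanh, σ(z) = 0.1z, U = 0, a common delay τ for both θ and η, and a single real Brownian motion driving all four parts) are our own illustrative assumptions, not values from the paper:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as length-4 arrays [R, I, J, K]."""
    r = np.empty(4)
    r[0] = p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]
    r[1] = p[0]*q[1] + p[1]*q[0] + p[2]*q[3] - p[3]*q[2]
    r[2] = p[0]*q[2] - p[1]*q[3] + p[2]*q[0] + p[3]*q[1]
    r[3] = p[0]*q[3] + p[1]*q[2] - p[2]*q[1] + p[3]*q[0]
    return r

def simulate(T=5.0, dt=0.001, tau=0.1, seed=0):
    """Euler-Maruyama sketch of one neuron of system (1.1) with n = 1.

    Assumed illustrative coefficients: a = 2, b = c = 0.3 + 0.1i,
    f = g = componentwise tanh, sigma(z) = 0.1 z, U(t) = 0.
    """
    rng = np.random.default_rng(seed)
    d = int(round(tau / dt))                  # delay in steps
    steps = int(round(T / dt))
    b = np.array([0.3, 0.1, 0.0, 0.0])        # connection weight (assumed)
    c = np.array([0.3, 0.1, 0.0, 0.0])        # delayed weight (assumed)
    # constant initial data on [-tau, 0]
    hist = np.tile(np.array([1.0, -0.5, 0.5, 0.2]), (d + 1, 1))
    z = hist[-1].copy()
    buf = [row.copy() for row in hist]
    for _ in range(steps):
        zd = buf[-(d + 1)]                    # delayed state z(t - tau)
        drift = (-2.0 * z + qmul(b, np.tanh(z))
                 + qmul(c, np.tanh(zd)))      # U(t) = 0
        dB = rng.normal(0.0, np.sqrt(dt))     # scalar Brownian increment
        z = z + drift * dt + 0.1 * zd * dB    # sigma(z) = 0.1 z
        buf.append(z.copy())
        buf = buf[-(d + 1):]
    return z

final = simulate()
```

With these strongly contractive assumed parameters, the trajectory shrinks toward the origin, in line with the mean-square exponential stability that Theorem 3.1 predicts when U ≡ 0.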
This paper is organized as follows: In Sect. 2, we introduce some definitions and state some preliminary results which are needed in later sections. In Sect. 3, we establish some sufficient conditions for the mean-square exponential input-to-state stability of system (1.1). In Sect. 4, we give an example to demonstrate the feasibility of our results. Finally, we draw a conclusion in Sect. 5.

Preliminaries and basic knowledge
In this section, we introduce the quaternion version Itô formula and the definition of the mean-square exponential input-to-state stability.
Let (Ω, F, {F_t}_{t≥0}, P) be a complete probability space with a natural filtration {F_t}_{t≥0} satisfying the usual conditions (i.e., it is right continuous, and F_0 contains all P-null sets). Denote by L^2_{F_0}([-τ, 0]; Q^n) the family of all F_0-measurable, C([-τ, 0]; Q^n)-valued random variables φ such that sup_{s∈[-τ,0]} E||φ(s)||^2_{Q^n} < ∞, where E denotes the expectation of the stochastic process.

Definition 2.1
Consider an n-dimensional quaternion-valued stochastic differential equation

dz(t) = f(t, z(t), z(t - θ(t))) dt + g(t, z(t), z(t - η(t))) dB(t),

where θ(t) and η(t) are bounded time-varying delays. Let C^{1,2}(R^+ × Q^n, R^+) denote the family of all nonnegative functions V(t, z) on R^+ × Q^n that are once continuously differentiable in t and twice continuously differentiable in z^R, z^I, z^J and z^K. Then, for V ∈ C^{1,2}(R^+ × Q^n, R^+), the quaternion version of the Itô formula takes the following form:

dV(t, z(t)) = LV(t, z(t)) dt + V_z(t, z(t)) g(t) dB(t),

where f(t) = f(t, z(t), z(t - θ(t))), g(t) = g(t, z(t), z(t - η(t))), and the operator LV(t, z) is defined as

LV(t, z) = V_t(t, z) + V_z(t, z) f(t) + (1/2) trace[ g^T(t) V_zz(t, z) g(t) ].
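The role of the operator L can be seen concretely in the scalar real-valued case. For dz = -a z dt + σ dB(t) and V(z) = z^2, the formula above gives LV = -2a z^2 + σ^2, so m(t) = E z(t)^2 obeys m'(t) = -2a m(t) + σ^2, i.e. m(t) = (m(0) - σ^2/2a) e^{-2at} + σ^2/2a. The following sketch (all parameter values are our own illustrative choices) compares a Monte Carlo estimate against this closed form:

```python
import numpy as np

def mc_second_moment(a=1.0, sigma=0.5, z0=1.0, T=2.0, dt=0.001,
                     paths=20000, seed=1):
    """Monte Carlo estimate of E z(T)^2 for dz = -a z dt + sigma dB."""
    rng = np.random.default_rng(seed)
    z = np.full(paths, z0)
    for _ in range(int(round(T / dt))):
        # Euler-Maruyama step for every path at once
        z = z - a * z * dt + sigma * rng.normal(0.0, np.sqrt(dt), paths)
    return np.mean(z ** 2)

def exact_second_moment(a=1.0, sigma=0.5, z0=1.0, T=2.0):
    """Closed form of m(t) = E z(t)^2 from m' = -2a m + sigma^2."""
    s = sigma ** 2 / (2 * a)
    return (z0 ** 2 - s) * np.exp(-2 * a * T) + s

est, exact = mc_second_moment(), exact_second_moment()
```

The agreement of the two values numerically confirms that the dt-term of the Itô formula, averaged over paths, is exactly the generator LV.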

Definition 2.2
The trivial solution of system (1.1) is said to be mean-square exponentially input-to-state stable if there exist constants λ > 0, M_1 > 0 and M_2 > 0 such that

E||z(t)||^2_{Q^n} ≤ M_1 sup_{s∈[-τ,0]} E||φ(s)||^2_{Q^n} e^{-λt} + M_2 ||U||^2_∞,  t ≥ 0.

Throughout the rest of the paper, we assume the following:

(H_1) There exist positive constants L^f_k, L^g_k, L^σ_lk such that for any x, y ∈ Q,

||f_k(x) - f_k(y)||_Q ≤ L^f_k ||x - y||_Q,  ||g_k(x) - g_k(y)||_Q ≤ L^g_k ||x - y||_Q,  ||σ_lk(x) - σ_lk(y)||_Q ≤ L^σ_lk ||x - y||_Q.

(H_2) For l ∈ N, there exist positive constants λ and ξ_k satisfying a condition formulated in terms of γ = sup_{t∈R} {θ_lk(t)} and β = sup_{t∈R} {η_lk(t)}.
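As a concrete instance of (H_1), the activation f(z) = tanh(z^R) + i tanh(z^I) + j tanh(z^J) + k tanh(z^K), a common choice in quaternion-valued networks (our own illustration, not a choice made in this paper), satisfies the Lipschitz condition with L^f = 1, since |tanh'(u)| ≤ 1 in each real coordinate. A quick randomized check:

```python
import numpy as np

def act(z):
    """Componentwise tanh activation on a quaternion stored as [R, I, J, K]."""
    return np.tanh(z)

def qnorm(z):
    """Quaternion norm ||z||_Q = sqrt(zR^2 + zI^2 + zJ^2 + zK^2)."""
    return np.linalg.norm(z)

rng = np.random.default_rng(42)
# Since |tanh'(u)| <= 1 coordinatewise, the mean value theorem gives
# ||f(x) - f(y)||_Q <= ||x - y||_Q, i.e. (H1) holds with L^f = 1.
ok = all(
    qnorm(act(x) - act(y)) <= qnorm(x - y) + 1e-12
    for x, y in (rng.normal(size=(2, 4)) * 3 for _ in range(1000))
)
```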

Mean-square exponential input-to-state stability
In this section, we will consider the mean-square exponential input-to-state stability of system (1.1).

Theorem 3.1 Suppose that Assumptions (H 1 )-(H 2 ) are satisfied. Then, for any initial value, the trivial solution z(t) of system (1.1) is mean-square exponentially input-to-state stable.
Proof Let σ(t) = (σ_lk(t))_{n×n}, where σ_lk(t) = σ_lk(z_k(t - η_lk(t))). We consider the Lyapunov functional

V(t, z(t)) = e^{λt} Σ_{l=1}^n ξ_l z_l*(t) z_l(t).

Then, by the Itô formula, we have the following stochastic differential:

dV(t, z(t)) = LV(t, z(t)) dt + V_z(t, z(t)) σ(t) dB(t),

where V_z(t, z(t)) = (∂V(t, z(t))/∂z_1, ..., ∂V(t, z(t))/∂z_n), and L is the weak infinitesimal operator, such that

LV(t, z(t)) = λ e^{λt} Σ_{l=1}^n ξ_l z_l*(t) z_l(t)
  + e^{λt} Σ_{l=1}^n ξ_l ( -a_l(t)z_l(t) + Σ_{k=1}^n b_lk(t) f_k(z_k(t)) + Σ_{k=1}^n c_lk(t) g_k(z_k(t - θ_lk(t))) + U_l(t) )* z_l(t)
  + e^{λt} Σ_{l=1}^n ξ_l z_l*(t) ( -a_l(t)z_l(t) + Σ_{k=1}^n b_lk(t) f_k(z_k(t)) + Σ_{k=1}^n c_lk(t) g_k(z_k(t - θ_lk(t))) + U_l(t) )
  + e^{λt} Σ_{l=1}^n ξ_l Σ_{k=1}^n σ_lk*(t) σ_lk(t).

From (H_1) and (H_2), we easily derive that

LV(t, z(t)) ≤ M e^{λt} ||U||^2_∞

for some constant M > 0. Now, similar to the previous literature, we define the stopping time (or Markov time) ρ_k := inf{s ≥ 0 : ||z(s)||_{Q^n} ≥ k}, and by the Dynkin formula we have

E V(t ∧ ρ_k, z(t ∧ ρ_k)) = E V(0, z(0)) + E ∫_0^{t∧ρ_k} LV(s, z(s)) ds ≤ E V(0, z(0)) + (M/λ) e^{λt} ||U||^2_∞.   (3.3)

Letting k → ∞ on both sides of (3.3), from the monotone convergence theorem, we can get

E V(t, z(t)) ≤ E V(0, z(0)) + (M/λ) e^{λt} ||U||^2_∞.   (3.4)

On the other hand, it follows from the definition of V(t, z(t)) that

E V(t, z(t)) ≥ e^{λt} min_{l∈N} {ξ_l} E||z(t)||^2_{Q^n}.   (3.5)

Combining (3.4) and (3.5), the following holds:

E||z(t)||^2_{Q^n} ≤ (min_{l∈N} {ξ_l})^{-1} ( E V(0, z(0)) e^{-λt} + (M/λ) ||U||^2_∞ ),

which together with Definition 2.2 verifies that the trivial solution of system (1.1) is mean-square exponentially input-to-state stable. The proof is complete.
Remark 3.1 In the proof of Theorem 3.1, by using stochastic analysis theory and the Itô formula, we obtain the mean-square exponential input-to-state stability of system (1.1) directly.
Remark 3.2 Theorem 3.1 provides a stability criterion for the considered stochastic network models by employing a non-decomposition method.

Illustrative example
In this section, we give an example to illustrate the feasibility and effectiveness of our results obtained in Sect. 3.
Example 4.1 Let n = 3. Consider the following quaternion-valued stochastic neural network:

dz_l(t) = [ -a_l(t)z_l(t) + Σ_{k=1}^3 b_lk(t) f_k(z_k(t)) + Σ_{k=1}^3 c_lk(t) g_k(z_k(t - θ_lk(t))) + U_l(t) ] dt + Σ_{k=1}^3 σ_lk(z_k(t - η_lk(t))) dB_k(t),   (4.1)

where l = 1, 2, 3, and the coefficients are as follows: Through simple calculations, we can check that all conditions of Theorem 3.1 are satisfied. So we know that the trivial solution of system (4.1) is mean-square exponentially input-to-state stable (see Figs. 1-4).
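Since the concrete coefficient values of Example 4.1 are not reproduced above, the following sketch uses an assumed one-neuron caricature of system (4.1) (every parameter value here is our own illustrative assumption) to illustrate the bound of Definition 2.2 numerically: with a bounded input, a Monte Carlo estimate of E||z(t)||^2_Q decays from its initial value and settles near a level governed by ||U||_∞.

```python
import numpy as np

def ms_norm(a=2.0, u=0.5, sig=0.1, T=4.0, dt=0.002, paths=4000, seed=7):
    """Estimate E||z(t)||_Q^2 for a one-neuron caricature of (4.1):
    dz = (-a z + U) dt + sig * z dB, with the quaternion stored as
    [R, I, J, K] and a constant input u on the real part (all assumed)."""
    rng = np.random.default_rng(seed)
    z = np.tile(np.array([1.0, -1.0, 0.5, 0.5]), (paths, 1))
    U = np.array([u, 0.0, 0.0, 0.0])
    ms0 = np.mean(np.sum(z ** 2, axis=1))     # E||z(0)||_Q^2
    for _ in range(int(round(T / dt))):
        dB = rng.normal(0.0, np.sqrt(dt), (paths, 1))
        z = z + (-a * z + U) * dt + sig * z * dB
    return ms0, np.mean(np.sum(z ** 2, axis=1))

ms_initial, ms_final = ms_norm()
```

Here the mean-square norm drops from 2.5 toward a small residual level of order (||U||_∞ / a)^2, consistent with the e^{-λt} plus ||U||^2_∞ structure of the bound in Definition 2.2.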
Remark 4.1 By using the Simulink toolbox in MATLAB, Figs. 1-8 show the time evolution of the four parts of z 1 and z 2 , respectively. When U l (t) = 0, our results yield the traditional mean-square exponential stability of the considered stochastic neural networks. Remark 4.2 Quaternion-valued stochastic systems include real-valued stochastic systems as special cases. In fact, in Example 4.1, if all the coefficients and all the activation functions are functions from R to R, then the state z l (t) ≡ z R l (t) ∈ R; in this case, system (4.1) is a real-valued stochastic system. Then, by an argument similar to the proof of Theorem 3.1 under the same corresponding conditions, one can show that a result similar to Theorem 3.1 is still valid (see [16][17][18][19]).

Conclusion
In this paper, we have considered the problem of mean-square exponential input-to-state stability for quaternion-valued stochastic neural networks by a direct method. By constructing an appropriate Lyapunov functional and employing stochastic analysis theory and the Itô formula, a novel sufficient condition has been derived to ensure the mean-square exponential input-to-state stability of the considered stochastic neural networks. In order to demonstrate the usefulness of the presented results, a numerical example has been given. This paper improves and extends known results in the literature [34,35], and proposes a good research framework for studying the mean-square exponential input-to-state stability of quaternion-valued stochastic neural networks with time-varying delays. We expect to extend this work to other types of stochastic neural networks.