
On the averaging principle for SDEs driven by G-Brownian motion with non-Lipschitz coefficients

Abstract

In this paper, we aim to develop the averaging principle for stochastic differential equations driven by G-Brownian motion (G-SDEs for short) with non-Lipschitz coefficients. By the properties of G-Brownian motion and stochastic inequality, we prove that the solution of the averaged G-SDEs converges to that of the standard one in the mean-square sense and also in capacity. Finally, two examples are presented to illustrate our theory.

Introduction

The averaging principle for a dynamical system is important in problems of mechanics, control, and many other areas. It is well known that many problems in the theory of differential systems can be solved effectively by the averaging principle. The first rigorous results were obtained by Bogoliubov and Mitropolsky [3], and further developments were given by Hale [9]. With the development of stochastic analysis, many authors began to study the averaging principle for differential systems with perturbations and extended the averaging theory to stochastic differential equations (SDEs). We refer the reader to Bao et al. [2], Chen et al. [5], Golec and Ladde [8], Khasminskii [11, 12], Liptser and Stoyanov [14], Liu et al. [15], Stoyanov and Bainov [23], Wu and Yin [25], Xu et al. [28, 29], Xu and Miao [26, 27], Luo et al. [16], and the references therein.

On the other hand, motivated by potential applications to uncertainty problems, risk measures, and superhedging in finance, the theory of nonlinear expectation has been developed. Peng [20] established a framework of G-expectation theory and G-Brownian motion. Denis et al. [6] obtained some basic and important properties of several typical Banach spaces of functions of G-Brownian motion paths induced by G-expectation. Since then, the theory of G-SDEs has drawn increasing attention and has been studied by many authors. For example, Gao et al. [7] investigated the existence of solutions and large deviations for G-SDEs. Hu et al. [10] studied the regularity of solutions of backward SDEs driven by G-Brownian motion. Luo and Wang [17] studied the sample solutions of G-SDEs and obtained a new kind of comparison theorem. In the G-framework, Zhang and Chen [31] considered the quasi-sure exponential stability of semi-linear G-SDEs. By means of the G-Lyapunov function method, Li et al. [13], Ren et al. [22], and Yin et al. [30] established moment stability and quasi-sure stability for nonlinear G-SDEs.

Compared with classical Brownian motion, the structure of G-Brownian motion is considerably more complex: it is defined not on a probability space but on a G-expectation space. A natural question therefore arises: is there an averaging principle for SDEs driven by G-Brownian motion? In this paper, we investigate the averaging principle for the nonlinear G-SDE

$$\begin{aligned} dx(t)=f\bigl(t,x(t)\bigr)\,dt+h\bigl(t,x(t)\bigr)\,d \langle B\rangle _{t}+g\bigl(t,x(t)\bigr)\,dB_{t}, \end{aligned}$$
(1.1)

where \(B_{t}\) is a one-dimensional G-Brownian motion and \(\langle B\rangle _{t}\) is its quadratic variation process. Our main objective is to show that the solution of the averaged equation converges to that of the standard equation both in mean square and in capacity.

It is worth noting that most existing research on the averaging principle for SDEs requires the coefficients to be globally Lipschitz continuous. In fact, the global Lipschitz condition imposed in [5, 8, 11, 14, 16, 23, 25, 28] is quite strong for practical applications. In recent years, many scholars have sought weaker conditions under which the averaging principle for stochastic systems holds. Recently, some results on the averaging principle for stochastic systems (see [18, 24, 29]) have been obtained under the Yamada–Watanabe condition: for any \(x,y\in R^{n}\) and \(t\ge 0\),

$$\begin{aligned} \bigl\vert f(t,x)-f(t,y) \bigr\vert ^{2}\vee \bigl\vert g(t,x)-g(t,y) \bigr\vert ^{2} \le & k\bigl( \vert x-y \vert \bigr), \end{aligned}$$
(1.2)

where \(k(\cdot )\) is a continuous increasing concave function from \(R^{+}\) to \(R^{+}\) such that \(k(0)=0\) and \(\int _{0^{+}}\frac{1}{k(x)}\,dx=\infty \). However, this condition is somewhat restrictive because it requires that the control function \(k(x)\) of the modulus of continuity of the coefficients be concave, and this restriction excludes Eq. (4.4) of Example 4.3. In fact, \(f(t,x)\) of (4.4) does not satisfy condition (1.2) because \(\log x^{-1}<(\log x^{-1})^{2}\) for \(x\le \eta \). In this paper, we instead use the non-Lipschitz condition which arose in the study of Brownian motion on the group of diffeomorphisms of the circle [1] to study the averaging principle for Eq. (1.1). Compared with (1.2), the coefficients f, h, and g of Eq. (1.1) are not assumed to be controlled by concave functions. Thus, the conditions here are weaker than those of [18, 24, 29], and some results in [18, 24, 29] are generalized and improved.
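For intuition, the modulus discussed above can be probed numerically. The sketch below is ours, not from the paper; the function `u` is a hypothetical stand-in for the modulus \(x\log x^{-1}\). It checks that \(u\) is not Lipschitz at the origin, while the Osgood integral \(\int _{\epsilon }^{e^{-1}}\frac{dx}{x\log x^{-1}}=\log \log \epsilon ^{-1}\) still diverges as \(\epsilon \downarrow 0\), which is what uniqueness-type arguments exploit.

```python
import math

def u(x):
    # Hypothetical non-Lipschitz modulus u(x) = x * log(1/x) near 0
    # (an illustration of the discussion above, not the paper's exact f).
    return x * math.log(1.0 / x) if 0 < x < 1 else 0.0

# Difference quotients u(x)/x = log(1/x) blow up as x -> 0+,
# so u is not Lipschitz at the origin:
quotients = [u(10.0 ** -n) / 10.0 ** -n for n in range(1, 8)]
assert all(b > a for a, b in zip(quotients, quotients[1:]))

# Yet the Osgood integral int_eps^{1/e} dx / u(x) = log(log(1/eps))
# grows without bound as eps -> 0+ (closed form used directly):
osgood = [math.log(math.log(1.0 / eps)) for eps in (1e-2, 1e-4, 1e-8, 1e-16)]
assert all(b > a for a, b in zip(osgood, osgood[1:]))
```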

The rest of this paper is organized as follows. In Sect. 2, we introduce some preliminaries about G-Brownian motion. In Sect. 3, we establish the stochastic averaging principle of Eq. (1.1). By the Burkholder–Davis–Gundy inequality and some useful lemmas to be established, we prove that the solution of the averaged equation will converge to that of the standard equation in the sense of mean square and capacity. Finally, two illustrative examples are given in Sect. 4.

Preliminaries

Let us first recall some basic definitions and lemmas about G-Brownian motion. For more details, see, e.g., [6, 7, 20].

Let Ω be a given nonempty set, and let \(\mathcal{H}\) be a linear space of real-valued functions defined on Ω. We assume that \(\mathcal{H}\) satisfies that \(c\in \mathcal{H}\) for any constant c and \(\vert X\vert \in \mathcal{H}\) for all \(X\in \mathcal{H}\).

Definition 2.1

A sublinear expectation \(\mathbb{E}\) is a functional \(\mathbb{E}: \mathcal{H}\to R\) satisfying

(i) Monotonicity: \(\mathbb{E}[X]\ge \mathbb{E}[Y]\) if \(X\ge Y\).

(ii) Constant preserving: \(\mathbb{E}[C]=C\) for all \(C\in R\).

(iii) Sub-additivity: \(\mathbb{E}[X+Y]\le \mathbb{E}[X]+\mathbb{E}[Y]\).

(iv) Positive homogeneity: \(\mathbb{E}[\lambda X]=\lambda \mathbb{E}[X]\) for all \(\lambda \ge 0\).

The triple \((\Omega ,\mathcal{H},\mathbb{E})\) is called a sublinear expectation space.

Definition 2.2

A random variable X on a sublinear expectation space \((\Omega ,\mathcal{H},\mathbb{E})\) is called G-normal distributed if

$$\begin{aligned} {aX+b\bar{X}} \stackrel{d}{=} \sqrt{a^{2}+b^{2}}X\quad \text{for }a,b\ge 0, \end{aligned}$$

where \(\bar{X}\) is an independent copy of X, and \(y\stackrel{d}{=} z\) means that y and z are identically distributed.

Definition 2.3

A process \(\{B_{t}\}_{t\ge 0}\) on a sublinear expectation space \((\Omega ,\mathcal{H},\mathbb{E})\) is called a G-Brownian motion if the following conditions are satisfied:

(i) \(B_{0}=0\).

(ii) For any \(t,s\ge 0\), the increment \(B_{t+s}-B_{t}\) is G-normal distributed.

(iii) For any \(n\ge 1\), \(0=t_{0}\le t_{1}\le \cdots \le t_{n}< \infty \), the increment \(B_{t_{n}}-B_{t_{n-1}}\) is independent of \(B_{t_{1}},B_{t_{2}},\ldots , B_{t_{n-1}}\).

Now, let \(\Omega =C_{0}(R^{+})\) be the space of all real-valued continuous paths \((w_{t})_{t\ge 0}\) with \(w_{0}=0\) equipped with the distance

$$ \rho \bigl(w^{1},w^{2}\bigr)=\sum _{k=1}^{\infty }2^{-k} \Bigl(\Bigl(\max _{t\in [0,k]} \bigl\vert w_{t}^{1}-w_{t}^{2} \bigr\vert \Bigr) \wedge 1 \Bigr),\quad w^{1},w^{2}\in \Omega . $$
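The metric ρ above can be transcribed directly, truncating the series to finitely many terms; the helper name `path_distance` and the grid approximation of the supremum are ours.

```python
import math

def path_distance(w1, w2, K=20, grid=1000):
    # Truncated version of the metric rho above:
    # rho(w1, w2) = sum_{k>=1} 2^{-k} * (max_{t in [0,k]} |w1(t) - w2(t)|  wedge  1),
    # with the sup over [0, k] approximated on a finite grid.
    total = 0.0
    for k in range(1, K + 1):
        m = max(abs(w1(k * i / grid) - w2(k * i / grid)) for i in range(grid + 1))
        total += 2.0 ** -k * min(m, 1.0)
    return total

zero = lambda t: 0.0
assert path_distance(zero, zero) == 0.0
# Each summand is at most 2^{-k}, so rho is always strictly below 1:
d = path_distance(math.sin, zero)
assert 0.0 < d < 1.0
```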

Consider the canonical process \(B_{t}(w)=w_{t}\), \(t\ge 0\). For any \(T\ge 0\), we define

$$ L_{ip}(\Omega _{T}):=\bigl\{ \varphi (B_{t_{1}},B_{t_{2}}, \ldots , B_{t_{n}}): n\in \mathbb{N}, t_{1},t_{2}, \ldots ,t_{n}\in [0,T], \varphi \in C_{b,Lip} \bigl(R^{n}\bigr) \bigr\} $$

and \(L_{ip}(\Omega ):=\bigcup_{n=1}^{\infty }L_{ip}(\Omega _{n})\), where \(C_{b,Lip}(R^{n})\) is the space of all bounded, real-valued, and Lipschitz continuous functions on \(R^{n}\). Peng [20] defined the sublinear expectation \(\hat{\mathbb{E}}\) on \((\Omega ,L_{ip}(\Omega ))\) so that the canonical process \(B_{t}\) is a G-Brownian motion. This sublinear expectation is known as a G-expectation. For each \(p\ge 1\), \(L_{G}^{p}(\Omega )\) denotes the completion of \(L_{ip}(\Omega )\) under the norm \(\Vert \cdot \Vert _{p}=(\hat{\mathbb{E}}\vert \cdot \vert ^{p})^{\frac{1}{p}}\).

For a given pair of \(T>0\) and \(p\ge 1\), define

$$ M_{G}^{p,0}(0,T)= \Biggl\{ \eta :\eta _{t}(\omega )=\sum_{i=0}^{N-1}\xi _{i}( \omega )I_{[t_{i},t_{i+1})}(t), \xi _{i}\in L_{G}^{p}( \Omega _{t_{i}}), 0=t_{0}< \cdots < t_{N}=T,N\ge 1 \Biggr\} . $$

Denote by \(M_{G}^{p}(0,T)\) the completion of \(M_{G}^{p,0}(0,T)\) under the norm

$$ \Vert \eta \Vert _{M_{G}^{p}(0,T)}:= \biggl(\hat{\mathbb{E}} \int _{0}^{T} \vert \eta _{t} \vert ^{p}\,dt \biggr)^{\frac{1}{p}}. $$

Definition 2.4

For each \(\eta \in M_{G}^{p,0}(0,T)\), the Bochner integral and Itô integral are defined by

$$ \int _{0}^{T}\eta _{t}\,dt=\sum _{i=0}^{N-1}\xi _{i}(t_{i+1}-t_{i}) \quad \text{and}\quad \int _{0}^{T}\eta _{t} \,dB_{t}=\sum_{i=0}^{N-1}\xi _{i}(B_{t_{i+1}}-B_{t_{i}}), $$

respectively.

Definition 2.5

([20])

Let \(\pi _{t}^{N}=\{t_{0}^{N},t_{1}^{N},\ldots ,t_{N}^{N}\}\), \(N=1,2,\ldots \) , be a sequence of partitions of \([0,t]\). For the G-Brownian motion, we define the quadratic variation process of \(B_{t}\) by

$$ \langle B\rangle _{t}=\lim_{u(\pi _{t}^{N})\to 0}\sum _{i=0}^{N-1} (B_{t_{i+1}^{N}}-B_{t_{i}^{N}} )^{2}=B_{t}^{2}-2 \int _{0}^{t}B_{s}\,dB_{s}, $$

where \(u(\pi _{t}^{N})=\max_{0\le i\le N-1}\vert t_{i+1}^{N}-t_{i}^{N}\vert \to 0\) as \(N\to \infty \).
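In the classical special case where the lower and upper variances coincide, G-Brownian motion reduces to standard Brownian motion, and the identity \(\langle B\rangle _{t}=B_{t}^{2}-2\int _{0}^{t}B_{s}\,dB_{s}\) can be checked pathwise on a simulated trajectory; with left-point Itô sums the identity is exact even at the level of partial sums. A verification sketch of ours:

```python
import random

random.seed(0)
N, T = 10_000, 1.0
dt = T / N

# Sample a standard Brownian path (the classical special case of
# G-Brownian motion with lower and upper variance both equal to 1).
B = [0.0]
for _ in range(N):
    B.append(B[-1] + random.gauss(0.0, dt ** 0.5))

# Quadratic variation as a sum of squared increments over the partition ...
qv = sum((B[i + 1] - B[i]) ** 2 for i in range(N))

# ... coincides with B_T^2 - 2 * (left-point Ito sum of B dB), exactly,
# by the algebraic identity b^2 - a^2 = 2a(b - a) + (b - a)^2:
ito = sum(B[i] * (B[i + 1] - B[i]) for i in range(N))
assert abs(qv - (B[N] ** 2 - 2.0 * ito)) < 1e-9

# For unit variance the quadratic variation concentrates near t = T:
assert abs(qv - T) < 0.1
```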

Definition 2.6

Define a mapping \(Q_{0,T}: M_{G}^{1,0}(0,T)\to L_{G}^{1}(\Omega _{T})\):

$$ Q_{0,T}(\eta )= \int _{0}^{T}\eta _{t}\,d\langle B \rangle _{t}:=\sum_{i=0}^{N-1} \xi _{i}\bigl(\langle B\rangle _{t_{i+1}}-\langle B\rangle _{t_{i}}\bigr). $$

Then \(Q_{0,T}\) can be uniquely extended to \(M_{G}^{1}(0,T)\). We still denote this mapping by

$$ Q_{0,T}(\eta )= \int _{0}^{T}\eta _{t}\,d\langle B \rangle _{t},\quad \eta \in M_{G}^{1}(0,T). $$

Lemma 2.7

([20])

Let \(p\ge 1\) and \(\eta _{t}\in M_{G}^{p}(0,T)\). Then we have

$$\begin{aligned} \hat{\mathbb{E}} \biggl( \int _{0}^{T} \vert \eta _{t} \vert ^{p}\,dt \biggr)\le \int _{0}^{T} \hat{\mathbb{E}} \vert \eta _{t} \vert ^{p}\,dt \end{aligned}$$

and

$$\begin{aligned} \hat{\mathbb{E}} \biggl(\sup_{0\le t\le T} \biggl\vert \int _{0}^{t}\eta _{s}\,d \langle B \rangle _{s} \biggr\vert ^{p} \biggr)\le \bar{\sigma }^{2p} T^{p-1} \int _{0}^{T} \hat{\mathbb{E}} \vert \eta _{t} \vert ^{p}\,dt, \end{aligned}$$

where \(\bar{\sigma }^{2}=\hat{\mathbb{E}}(B_{1}^{2})\).

Lemma 2.8

([20])

Let \(p\ge 1\) and \(\eta _{t}\in M_{G}^{2}(0,T)\). Then we have

$$\begin{aligned} \hat{\mathbb{E}} \biggl(\sup_{0\le t\le T} \biggl\vert \int _{0}^{t}\eta _{s} \,dB_{s} \biggr\vert ^{p} \biggr)\le C_{p}\hat{ \mathbb{E}} \biggl( \biggl\vert \int _{0}^{T} \vert \eta _{t} \vert ^{2}\,d \langle B\rangle _{t} \biggr\vert ^{\frac{p}{2}} \biggr). \end{aligned}$$

Let \(\mathcal{B}(\Omega )\) be a Borel σ-algebra of Ω. It was proved in [6] that there exists a weakly compact family \(\mathcal{P}\) of probability measures defined on \((\Omega ,\mathcal{B}(\Omega ))\) such that

$$ \hat{\mathbb{E}}(X)=\sup_{\mathbb{P}\in \mathcal{P}}E_{\mathbb{P}}(X), \quad \forall X\in L_{ip}(\Omega ). $$

Definition 2.9

The capacity \(\hat{\mathbb{C}}(\cdot )\) associated with \(\mathcal{P}\) is defined by \(\hat{\mathbb{C}}(A)=\sup_{\mathbb{P}\in \mathcal{P}}\mathbb{P}(A)\), \(A\in \mathcal{B}(\Omega )\). A set \(A\subset \Omega \) is called polar if \(\hat{\mathbb{C}}(A)=0\). A property is said to hold quasi-surely (q.s.) if it holds outside a polar set.

Lemma 2.10

([20])

Let \(X\in L_{G}^{p}\) and \(\hat{\mathbb{E}}\vert X\vert ^{p}<\infty\) (\(p>0\)). Then, for any \(\delta >0\), we have

$$\begin{aligned} \hat{\mathbb{C}}\bigl( \vert X \vert >\delta \bigr)\le \frac{\hat{\mathbb{E}} \vert X \vert ^{p}}{\delta ^{p}}. \end{aligned}$$
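Under any single \(\mathbb{P}\in \mathcal{P}\) (and hence for the empirical measure of a finite sample), Lemma 2.10 reduces to the classical Chebyshev–Markov bound; the capacity itself takes a supremum of such bounds over \(\mathcal{P}\). A Monte Carlo sanity check of ours for the single-measure case:

```python
import random

random.seed(1)
# Chebyshev/Markov bound under a single measure: for the empirical measure
# of a finite sample, P(|X| > delta) <= E|X|^p / delta^p holds exactly,
# since each indicator 1{|x| > delta} is at most |x|^p / delta^p.
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]
delta, p = 1.5, 2
tail = sum(1 for x in samples if abs(x) > delta) / len(samples)
moment_bound = sum(abs(x) ** p for x in samples) / len(samples) / delta ** p
assert tail <= moment_bound
```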

Stochastic averaging principle

In this section, we study the averaging principle of G-SDEs. Let us consider the standard form of Eq. (1.1):

$$\begin{aligned} x_{\varepsilon }(t) =&x_{\varepsilon }(0)+\varepsilon \int _{0}^{t}f\bigl(s,x_{ \varepsilon }(s)\bigr) \,ds+\sqrt{\varepsilon } \int _{0}^{t}h\bigl(s,x_{\varepsilon }(s)\bigr)\,d \langle B\rangle _{s} \\ &{}+\sqrt{\varepsilon } \int _{0}^{t}g\bigl(s,x_{\varepsilon }(s)\bigr) \,dB_{s}, \quad \textit{q.s.}, \end{aligned}$$
(3.1)

with the initial condition \(x_{\varepsilon }(0)=x_{0}\in R^{n}\). Here, \(f,h,g: R_{+}\times R^{n}\to R^{n}\) are given functions, and \(\varepsilon \in (0,\varepsilon _{0}]\) is a small positive parameter with \(\varepsilon _{0}\) a fixed number.

In this paper, the following hypotheses are imposed on the coefficients f, h, and g.

Assumption 3.1

For any \(x,y\in R^{n}\) and \(t\ge 0\),

$$\begin{aligned} \bigl\vert f(t,x)-f(t,y) \bigr\vert + \bigl\vert h(t,x)-h(t,y) \bigr\vert \le & L \vert x-y \vert k_{1}\bigl( \vert x-y \vert \bigr) \end{aligned}$$
(3.2)

and

$$\begin{aligned} \bigl\vert g(t,x)-g(t,y) \bigr\vert ^{2} \le & L \vert x-y \vert ^{2}k_{2}\bigl( \vert x-y \vert \bigr), \end{aligned}$$
(3.3)

where L is a positive constant and \(k_{i}(\cdot )\) are two positive continuous functions bounded on \([1,\infty )\) such that

$$\begin{aligned} \lim_{x\downarrow 0}\frac{k_{i}(x)}{\log x^{-1}}=\zeta _{i}< \infty , \quad i=1,2. \end{aligned}$$
(3.4)

Assumption 3.2

There exists a positive constant K such that

$$\begin{aligned} \sup_{t\ge 0} \bigl\{ \bigl\vert f(t,0) \bigr\vert \vee \bigl\vert h(t,0) \bigr\vert \vee \bigl\vert g(t,0) \bigr\vert \bigr\} \le & K. \end{aligned}$$
(3.5)

Let \(\bar{f},\bar{h},\bar{g}: R^{n}\to R^{n}\) be functions satisfying (3.2), (3.3), and (3.5) with respect to x. We also assume that the following condition holds.

Assumption 3.3

There exists a positive bounded function ψ on \(R_{+}\) such that \(\lim_{T_{1}\to \infty }\psi (T_{1})=0\) and

$$\begin{aligned}& \frac{1}{T_{1}} \int _{0}^{T_{1}} \bigl\vert f(t,x)-\bar{f}(x) \bigr\vert ^{2}\,dt+ \frac{1}{T_{1}} \int _{0}^{T_{1}} \bigl\vert h(t,x)-\bar{h}(x) \bigr\vert ^{2}\,dt \\& \quad {} + \frac{1}{T_{1}} \int _{0}^{T_{1}} \bigl\vert g(t,x)-\bar{g}(x) \bigr\vert ^{2}\,dt \le \psi (T_{1}) \bigl(1+ \vert x \vert ^{2}\bigr) \end{aligned}$$

for \(T_{1}>0\), \(x\in R^{n}\).

Then the averaged form of Eq. (3.1) reads

$$\begin{aligned} y_{\varepsilon }(t) =&y_{\varepsilon }(0)+\varepsilon \int _{0}^{t} \bar{f}\bigl(y_{\varepsilon }(s) \bigr)\,ds+\sqrt{\varepsilon } \int _{0}^{t}\bar{h}\bigl(y_{ \varepsilon }(s)\bigr) \,d\langle B\rangle _{s} \\ &{}+\sqrt{\varepsilon } \int _{0}^{t} \bar{g}\bigl(y_{\varepsilon }(s) \bigr)\,dB_{s}, \quad \textit{q.s.}, \end{aligned}$$
(3.6)

where \(y_{\varepsilon }(0)=x_{0}\).
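To see the statement of the forthcoming Theorem 3.5 in action, one can compare (3.1) and (3.6) numerically. The toy instance below is ours: a deterministic special case with \(h=g=0\) and \(f(t,x)=\sin ^{2}(t)\,x\), whose time average is \(\bar{f}(x)=x/2\). On the timescale \(\bar{L}\varepsilon ^{\frac{1}{2}-\beta }\) the sup-distance between the two solutions shrinks as ε decreases.

```python
import math

def sup_distance(eps, T, dt=1e-3, x0=1.0):
    # Euler scheme for the toy pair (illustrative, not the paper's example):
    #   x' = eps * sin(t)^2 * x   (standard equation, h = g = 0)
    #   y' = eps * (1/2)   * y    (averaged equation, f_bar(x) = x/2)
    x = y = x0
    t = 0.0
    worst = 0.0
    while t < T:
        x += eps * math.sin(t) ** 2 * x * dt
        y += eps * 0.5 * y * dt
        t += dt
        worst = max(worst, abs(x - y))
    return worst

# Timescale of Theorem 3.5 with beta = 3/4: T = eps^(1/2 - beta) = eps^(-1/4).
d_coarse = sup_distance(1e-2, (1e-2) ** -0.25)
d_fine = sup_distance(1e-4, (1e-4) ** -0.25)
assert d_fine < d_coarse < 0.05
```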

Remark 3.4

Under Assumptions 3.1–3.2, it is easy to conclude that the standard SDE driven by G-Brownian motion (3.1) and the averaged equation (3.6) each have a unique solution (see Qiao [21]).

Now, we present our main results which are used for revealing the relationship between the processes \(x_{\varepsilon }(t)\) and \(y_{\varepsilon }(t)\).

Theorem 3.5

Let Assumptions 3.1–3.3 hold. For a given arbitrarily small number \(\delta _{1}>0\) and arbitrary constants \(\bar{L}>0\), \(\beta \in (\frac{1}{2},1)\), there exists a number \(\varepsilon _{1}\in (0,\varepsilon _{0}]\) such that, for all \(\varepsilon \in (0,\varepsilon _{1}]\),

$$\begin{aligned} \hat{\mathbb{E}} \Bigl(\sup_{t\in [0,\bar{L}\varepsilon ^{\frac{1}{2}-\beta }]} \bigl\vert x_{ \varepsilon }(t)-y_{\varepsilon }(t) \bigr\vert ^{2} \Bigr)\le \delta _{1}. \end{aligned}$$
(3.7)

In order to prove our main result, we need to introduce the following lemmas.

Lemma 3.6

([19])

Let \(p\ge 2\) and \(a,b>0\). Then, for \(\epsilon >0\),

$$\begin{aligned} a^{p-1}b\le \frac{\epsilon (p-1)}{p}a^{p}+ \frac{1}{p\epsilon ^{p-1}}b^{p} \end{aligned}$$
(3.8)

and

$$\begin{aligned} a^{p-2}b^{2}\le \frac{\epsilon (p-2)}{p}a^{p}+ \frac{2}{p\epsilon ^{\frac{p-2}{2}}}b^{p}. \end{aligned}$$
(3.9)
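Both Young-type bounds (3.8)–(3.9) can be sanity-checked on random inputs. This is a quick numerical verification of ours, not part of the proof:

```python
import random

random.seed(2)
# Randomized check of the Young-type inequalities (3.8) and (3.9):
#   a^{p-1} b   <= eps (p-1)/p a^p + b^p / (p eps^{p-1})
#   a^{p-2} b^2 <= eps (p-2)/p a^p + 2 b^p / (p eps^{(p-2)/2})
for _ in range(1_000):
    p = random.uniform(2.0, 6.0)
    a = random.uniform(0.01, 10.0)
    b = random.uniform(0.01, 10.0)
    eps = random.uniform(0.01, 10.0)
    lhs1 = a ** (p - 1) * b
    rhs1 = eps * (p - 1) / p * a ** p + b ** p / (p * eps ** (p - 1))
    assert lhs1 <= rhs1 * (1.0 + 1e-9)   # small slack for float rounding
    lhs2 = a ** (p - 2) * b ** 2
    rhs2 = eps * (p - 2) / p * a ** p + 2.0 * b ** p / (p * eps ** ((p - 2) / 2))
    assert lhs2 <= rhs2 * (1.0 + 1e-9)
```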

Lemma 3.7

([4])

Let ρ be a concave nondecreasing continuous function on \(R_{+}\) such that \(\rho (0)=0\). Then \(\gamma (x)=\rho ^{r}(x^{\frac{1}{s}})\) is also a concave nondecreasing continuous function on \(R_{+}\) with \(\gamma (0)=0\) for all \(s\ge r\ge 1\).

In what follows, \(C>0\) is a constant which can change its value from line to line.

Lemma 3.8

Let Assumptions 3.1 and 3.2 hold. Then, for every \(p\ge 2\),

$$\begin{aligned} \hat{\mathbb{E}} \bigl\vert y_{\varepsilon }(t) \bigr\vert ^{p}< \infty \quad \textit{for all }t \ge 0. \end{aligned}$$
(3.10)

Proof

By the G-Itô formula [20], we have

$$\begin{aligned} \bigl\vert y_{\varepsilon }(t) \bigr\vert ^{p} =& \bigl\vert y_{\varepsilon }(0) \bigr\vert ^{p}+p\varepsilon \int _{0}^{t} \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p-2} \bigl(y_{\varepsilon }(s)\bigr)^{\top } \bar{f}\bigl(y_{\varepsilon }(s)\bigr)\,ds \\ &{}+p\sqrt{\varepsilon } \int _{0}^{t} \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p-2}\bigl(y_{ \varepsilon }(s)\bigr)^{\top }\bar{g} \bigl(y_{\varepsilon }(s)\bigr)\,dB_{s}+ \int _{0}^{t} \biggl(p\sqrt{\varepsilon } \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p-2}\bigl(y_{\varepsilon }(s) \bigr)^{ \top }\bar{h}\bigl(y_{\varepsilon }(s)\bigr) \\ &{}+\frac{p(p-1)}{2} \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p-2} \bigl\vert \sqrt{\varepsilon } \bar{g}\bigl(y_{\varepsilon }(s) \bigr) \bigr\vert ^{2} \biggr)\,d\langle B\rangle _{s}, \quad \textit{q.s.} \end{aligned}$$
(3.11)

Let \(T>0\) be arbitrary. For any \(t_{1}\in [0,T]\), taking the supremum over \([0,t_{1}]\) and then the G-expectation on both sides of (3.11), one gets

$$\begin{aligned} \hat{\mathbb{E}}\sup_{{0} \le t\le t_{1}} \bigl\vert y_{\varepsilon }(t) \bigr\vert ^{p} \le & \vert x_{0} \vert ^{p}+p\varepsilon \hat{\mathbb{E}} \int _{0}^{t_{1}} \bigl\vert y_{ \varepsilon }(s) \bigr\vert ^{p-2}\bigl(y_{\varepsilon }(s)\bigr)^{\top } \bar{f} \bigl(y_{ \varepsilon }(s)\bigr)\,ds \\ &{}+p\sqrt{\varepsilon }\hat{\mathbb{E}}\sup_{{0} \le t\le t_{1}} \int _{0}^{t} \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p-2}\bigl(y_{\varepsilon }(s)\bigr)^{\top } \bar{g} \bigl(y_{\varepsilon }(s)\bigr)\,dB_{s} \\ &{}+\hat{\mathbb{E}}\sup_{{0} \le t\le t_{1}} \int _{0}^{t} \biggl(p\sqrt{\varepsilon } \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p-2}\bigl(y_{\varepsilon }(s) \bigr)^{ \top } \bar{h}\bigl(y_{\varepsilon }(s)\bigr) \\ &{}+\frac{p(p-1)}{2} \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p-2} \bigl\vert \sqrt{\varepsilon } \bar{g}\bigl(y_{\varepsilon }(s) \bigr) \bigr\vert ^{2} \biggr)\,d\langle B\rangle _{s}= \vert x_{0} \vert ^{p}+ \sum _{i=1}^{3}I_{i}. \end{aligned}$$
(3.12)

By (3.8) with \(\epsilon =1\), we have

$$\begin{aligned} I_{1} \le & p\varepsilon \hat{\mathbb{E}} \int _{0}^{t_{1}} \bigl\vert y_{ \varepsilon }(s) \bigr\vert ^{p-1} \bigl\vert \bar{f}\bigl(y_{\varepsilon }(s)\bigr) \bigr\vert \,ds \le \varepsilon \hat{\mathbb{E}} \int _{0}^{t_{1}} \bigl((p-1) \bigl\vert y_{ \varepsilon }(s) \bigr\vert ^{p}+ \bigl\vert \bar{f} \bigl(y_{\varepsilon }(s)\bigr) \bigr\vert ^{p} \bigr)\,ds. \end{aligned}$$

Further, using the basic inequality \(\vert a+b\vert ^{p}\le 2^{p-1}(\vert a\vert ^{p}+\vert b\vert ^{p})\) and Assumptions 3.1–3.2, we get

$$\begin{aligned} \bigl\vert \bar{f}\bigl(y_{\varepsilon }(s)\bigr) \bigr\vert ^{p} \le & 2^{p-1} \bigl( \bigl\vert \bar{f}\bigl(y_{ \varepsilon }(s)\bigr)- \bar{f}(0) \bigr\vert ^{p}+ \bigl\vert \bar{f}(0) \bigr\vert ^{p} \bigr)\le 2^{p-1} \bigl(L^{p} \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p}k_{1}^{p} \bigl( \bigl\vert y_{\varepsilon }(s) \bigr\vert \bigr)+K^{p} \bigr). \end{aligned}$$

Then

$$\begin{aligned} I_{1} \le & \varepsilon (p-1)\hat{\mathbb{E}} \int _{0}^{t_{1}} \bigl\vert y_{ \varepsilon }(s) \bigr\vert ^{p}\,ds + \varepsilon 2^{p-1}L^{p}\hat{ \mathbb{E}} \int _{0}^{t_{1}} \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p}k_{1}^{p}\bigl( \bigl\vert y_{\varepsilon }(s) \bigr\vert \bigr)\,ds + \varepsilon 2^{p-1}K^{p}T. \end{aligned}$$
(3.13)

For the term \(I_{2}\), by Lemmas 2.7–2.8 and the Young inequality, we have

$$\begin{aligned} I_{2} \le &\hat{\mathbb{E}}\sup_{{0} \le t\le t_{1}} \biggl\vert \int _{0}^{t}p\sqrt{\varepsilon } \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p-2}\bigl(y_{ \varepsilon }(s) \bigr)^{\top } \bar{g}\bigl(y_{\varepsilon }(s)\bigr)\,dB_{s} \biggr\vert \\ \le &pC\hat{\mathbb{E}} \biggl[ \int _{0}^{t_{1}}\varepsilon \bigl\vert y_{ \varepsilon }(s) \bigr\vert ^{2p-2} \bigl\vert \bar{g} \bigl(y_{\varepsilon }(s)\bigr) \bigr\vert ^{2}\,d\langle B \rangle _{s} \biggr]^{\frac{1}{2}} \\ \le &pC\hat{\mathbb{E}} \biggl[\sup_{0 \le s\le t_{1}} \bigl\vert y_{ \varepsilon }(s) \bigr\vert ^{p} \int _{0}^{t_{1}}\varepsilon \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p-2} \bigl\vert \bar{g} \bigl(y_{\varepsilon }(s)\bigr) \bigr\vert ^{2}\,d\langle B\rangle _{s} \biggr]^{ \frac{1}{2}} \\ \le &\frac{1}{2}\hat{\mathbb{E}}\sup_{0 \le s\le t_{1}} \bigl\vert y_{ \varepsilon }(s) \bigr\vert ^{p}+pC\varepsilon \hat{\mathbb{E}} \int _{0}^{t_{1}} \bigl\vert y_{ \varepsilon }(s) \bigr\vert ^{p-2} \bigl\vert \bar{g}\bigl(y_{\varepsilon }(s)\bigr) \bigr\vert ^{2}\,d\langle B \rangle _{s} \\ \le &\frac{1}{2}\hat{\mathbb{E}}\sup_{0 \le s\le t_{1}} \bigl\vert y_{ \varepsilon }(s) \bigr\vert ^{p}+pC\bar{\sigma }^{2} \varepsilon \hat{\mathbb{E}} \int _{0}^{t_{1}} \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p-2} \bigl\vert \bar{g}\bigl(y_{\varepsilon }(s)\bigr) \bigr\vert ^{2}\,ds. \end{aligned}$$

By (3.9) with \(\epsilon =1\), we have

$$\begin{aligned} I_{2} \le &\frac{1}{2}\hat{\mathbb{E}}\sup_{0 \le s\le t_{1}} \bigl\vert y_{ \varepsilon }(s) \bigr\vert ^{p}+(p-2)C\bar{\sigma }^{2}\sqrt{\varepsilon } \hat{\mathbb{E}} \int _{0}^{t_{1}} \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p}\,ds+ 2C \bar{\sigma }^{2}\sqrt{\varepsilon } \hat{\mathbb{E}} \int _{0}^{t_{1}} \bigl\vert \bar{g} \bigl(y_{\varepsilon }(s)\bigr) \bigr\vert ^{p}\,ds \\ \le &\frac{1}{2}\hat{\mathbb{E}}\sup_{0 \le s\le t_{1}} \bigl\vert y_{ \varepsilon }(s) \bigr\vert ^{p}+(p-2)C\bar{\sigma }^{2} \sqrt{\varepsilon } \hat{\mathbb{E}} \int _{0}^{t_{1}} \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p}\,ds \\ & {} + 2C\bar{\sigma }^{2}\sqrt{\varepsilon }\hat{\mathbb{E}} \int _{0}^{t_{1}}\bigl[2 \bigl\vert \bar{g} \bigl(y_{\varepsilon }(s)\bigr)-\bar{g}(0) \bigr\vert ^{2}+2 \bigl\vert \bar{g}(0) \bigr\vert ^{2}\bigr]^{ \frac{p}{2}}\,ds. \end{aligned}$$

Using the basic inequality \(\vert a+b\vert ^{\frac{p}{2}}\le 2^{\frac{p}{2}-1}(\vert a\vert ^{\frac{p}{2}}+\vert b\vert ^{ \frac{p}{2}})\) and Assumptions 3.1–3.2, we get

$$\begin{aligned} I_{2} \le &\frac{1}{2}\hat{\mathbb{E}}\sup _{0 \le s\le t_{1}} \bigl\vert y_{ \varepsilon }(s) \bigr\vert ^{p}+(p-2)C\bar{\sigma }^{2}\sqrt{\varepsilon } \hat{ \mathbb{E}} \int _{0}^{t_{1}} \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p}\,ds \\ &{}+ 2C\bar{\sigma }^{2}\sqrt{\varepsilon }\hat{\mathbb{E}} \int _{0}^{t_{1}}\bigl[2L^{2} \bigl\vert y_{ \varepsilon }(s) \bigr\vert ^{2}k_{2}\bigl( \bigl\vert y_{\varepsilon }(s) \bigr\vert \bigr)+2K^{2}\bigr]^{\frac{p}{2}}\,ds \\ \le &\frac{1}{2}\hat{\mathbb{E}}\sup_{0 \le s\le t_{1}} \bigl\vert y_{ \varepsilon }(s) \bigr\vert ^{p}+(p-2)C\bar{\sigma }^{2} \sqrt{\varepsilon } \hat{\mathbb{E}} \int _{0}^{t_{1}} \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p}\,ds \\ &{}+ 2^{p}C\bar{\sigma }^{2}L^{p}\sqrt{ \varepsilon }\hat{\mathbb{E}} \int _{0}^{t_{1}} \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p}k_{2}^{\frac{p}{2}}\bigl( \bigl\vert y_{ \varepsilon }(s) \bigr\vert \bigr)\,ds+ 2^{p}C\bar{\sigma }^{2}\sqrt{\varepsilon }K^{p}T. \end{aligned}$$
(3.14)

Similarly, we have

$$\begin{aligned} I_{3} \le &\bar{\sigma }^{2}\hat{\mathbb{E}} \int _{0}^{t_{1}} \biggl(p \sqrt{\varepsilon } \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p-1} \bigl\vert \bar{h} \bigl(y_{\varepsilon }(s)\bigr) \bigr\vert + \frac{p(p-1)}{2}\varepsilon \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p-2} \bigl\vert \bar{g} \bigl(y_{ \varepsilon }(s)\bigr) \bigr\vert ^{2} \biggr)\,ds \\ \le &\biggl[(p-1)\sqrt{\varepsilon }+\frac{(p-1)(p-2)}{2}\varepsilon \biggr] \bar{\sigma }^{2}\hat{\mathbb{E}} \int _{0}^{t_{1}} \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p}\,ds \\ &{}+ 2^{p-1}\bar{\sigma }^{2}L^{p}\sqrt{ \varepsilon }\hat{\mathbb{E}} \int _{0}^{t_{1}} \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p}k_{1}^{p}\bigl( \bigl\vert y_{\varepsilon }(s) \bigr\vert \bigr)\,ds \\ &{}+(p-1)\varepsilon 2^{p-1}\bar{\sigma }^{2}L^{p} \hat{\mathbb{E}} \int _{0}^{t_{1}} \bigl\vert y_{ \varepsilon }(s) \bigr\vert ^{p}k_{2}^{\frac{p}{2}}\bigl( \bigl\vert y_{\varepsilon }(s) \bigr\vert \bigr)\,ds \\ &{}+\bigl[ \sqrt{\varepsilon }+(p-1) \varepsilon \bigr]2^{p-1}K^{p}T\bar{\sigma }^{2}. \end{aligned}$$
(3.15)

Inserting (3.13)–(3.15) into (3.12) yields

$$\begin{aligned} \hat{\mathbb{E}}\sup_{{0} \le t\le t_{1}} \bigl\vert y_{\varepsilon }(t) \bigr\vert ^{p} \le &C_{1}+C_{2}\hat{\mathbb{E}} \int _{0}^{t_{1}} \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p}\,ds+ 2^{p}\bar{\sigma }^{2}L^{p} \varepsilon _{0}\hat{\mathbb{E}} \int _{0}^{t_{1}} \bigl\vert y_{ \varepsilon }(s) \bigr\vert ^{p}k_{1}^{p}\bigl( \bigl\vert y_{\varepsilon }(s) \bigr\vert \bigr)\,ds \\ &{}+2^{p}(C+p)\bar{\sigma }^{2}L^{p}\varepsilon _{0}\hat{\mathbb{E}} \int _{0}^{t_{1}} \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p}k_{2}^{\frac{p}{2}}\bigl( \bigl\vert y_{ \varepsilon }(s) \bigr\vert \bigr)\,ds, \end{aligned}$$
(3.16)

where \(C_{1}=2\vert x_{0}\vert ^{p}+(C+p)2^{p}K^{p}T\bar{\sigma }^{2}\varepsilon _{0}\) and \(C_{2}=2p(1+C+p)\bar{\sigma }^{2}\varepsilon _{0}\). By condition (3.4), we can find an \(\eta \in (0,e^{-1})\) such that

$$ \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p}k_{1}^{p} \bigl( \bigl\vert y_{\varepsilon }(s) \bigr\vert \bigr)\le \rho ^{p}_{ \eta }\bigl( \bigl\vert y_{\varepsilon }(s) \bigr\vert \bigr) \quad \text{and}\quad \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p}k_{2}^{ \frac{p}{2}}\bigl( \bigl\vert y_{\varepsilon }(s) \bigr\vert \bigr)\le \rho ^{\frac{p}{2}}_{\eta } \bigl( \bigl\vert y_{ \varepsilon }(s) \bigr\vert ^{2}\bigr), $$

where \(\rho _{\eta }: R_{+}\to R_{+}\) is a concave function given by

$$\begin{aligned} \rho _{\eta }(x):= \textstyle\begin{cases} x\log x^{-1} , & x\le \eta , \\ \eta \log \eta ^{-1}+(\log \eta ^{-1}-1)(x-\eta ), & x>\eta . \end{cases}\displaystyle \end{aligned}$$
(3.17)
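The piecewise function \(\rho _{\eta }\) can be checked directly: it vanishes at 0, is continuous at the gluing point \(x=\eta \) (the affine branch matches both the value and the derivative \(\log \eta ^{-1}-1\) of \(x\log x^{-1}\) at η), and passes a midpoint-concavity test, consistent with Lemma 3.7. A small verification sketch of ours:

```python
import math

ETA = 0.1  # any eta in (0, e^{-1}) works; 0.1 < 1/e

def rho(x):
    # rho_eta from (3.17): x log(1/x) for x <= eta, affine continuation after.
    if x <= 0.0:
        return 0.0
    if x <= ETA:
        return x * math.log(1.0 / x)
    return ETA * math.log(1.0 / ETA) + (math.log(1.0 / ETA) - 1.0) * (x - ETA)

assert rho(0.0) == 0.0
# Continuity at the gluing point x = eta:
assert abs(rho(ETA - 1e-9) - rho(ETA + 1e-9)) < 1e-6
# Midpoint concavity on a grid: rho((u+v)/2) >= (rho(u)+rho(v))/2:
pts = [i / 1000.0 for i in range(1, 2001)]
for a, b in zip(pts, pts[2:]):
    assert rho((a + b) / 2.0) >= (rho(a) + rho(b)) / 2.0 - 1e-12
# Nondecreasing, since (x log(1/x))' = log(1/x) - 1 > 0 on (0, e^{-1})
# and the affine slope log(1/eta) - 1 is positive:
assert all(rho(b) >= rho(a) for a, b in zip(pts, pts[1:]))
```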

Hence, we have

$$\begin{aligned} \hat{\mathbb{E}}\sup_{{0} \le t\le t_{1}} \bigl\vert y_{\varepsilon }(t) \bigr\vert ^{p} \le &C_{1}+C_{2}\hat{\mathbb{E}} \int _{0}^{t_{1}} \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p}\,ds+ 2^{p}\bar{\sigma }^{2}L^{p} \varepsilon _{0}\hat{\mathbb{E}} \int _{0}^{t_{1}} \rho ^{p}_{\eta } \bigl( \bigl\vert y_{\varepsilon }(s) \bigr\vert \bigr)\,ds \\ & {} +2^{p}(C+p)\bar{\sigma }^{2}L^{p}\varepsilon _{0}\hat{\mathbb{E}} \int _{0}^{t_{1}}\rho ^{\frac{p}{2}}_{\eta } \bigl( \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{2}\bigr)\,ds \\ \le &C_{1}+C_{2}\hat{\mathbb{E}} \int _{0}^{t_{1}} \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p}\,ds+ 2^{p}\bar{\sigma }^{2}L^{p} \varepsilon _{0}\hat{\mathbb{E}} \int _{0}^{t_{1}} \rho ^{p}_{\eta } \bigl(\bigl( \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p} \bigr)^{\frac{1}{p}} \bigr)\,ds \\ & {} +2^{p}(C+p)\bar{\sigma }^{2}L^{p}\varepsilon _{0}\hat{\mathbb{E}} \int _{0}^{t_{1}}\rho ^{\frac{p}{2}}_{\eta } \bigl(\bigl( \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p} \bigr)^{ \frac{2}{p}} \bigr)\,ds. \end{aligned}$$

Since \(\rho _{\eta }(\cdot )\) is a concave nondecreasing continuous function on \(R_{+}\), Lemma 3.7 implies that \(\rho ^{p}_{\eta }(x^{\frac{1}{p}})\) and \(\rho ^{\frac{p}{2}}_{\eta }(x^{\frac{2}{p}})\) are also concave nondecreasing continuous functions on \(R_{+}\). Then, by the Jensen inequality, we obtain

$$\begin{aligned} \hat{\mathbb{E}}\sup_{{0} \le t\le t_{1}} \bigl\vert y_{\varepsilon }(t) \bigr\vert ^{p} \le &C_{1}+C_{2}\hat{\mathbb{E}} \int _{0}^{t_{1}} \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p}\,ds+ 2^{p}\bar{\sigma }^{2}L^{p} \varepsilon _{0} \int _{0}^{t_{1}}\rho ^{p}_{ \eta } \bigl(\bigl(\hat{\mathbb{E}} \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p}\bigr)^{\frac{1}{p}} \bigr)\,ds \\ & {} +2^{p}(C+p)\bar{\sigma }^{2}L^{p}\varepsilon _{0} \int _{0}^{t_{1}} \rho ^{\frac{p}{2}}_{\eta } \bigl(\bigl(\hat{\mathbb{E}} \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{p}\bigr)^{ \frac{2}{p}} \bigr)\,ds. \end{aligned}$$

By the properties of concave functions, we can find positive constants \(a_{1}\), \(b_{1}\), \(a_{2}\), \(b_{2}\) such that \(\rho ^{p}_{\eta }(x^{\frac{1}{p}})\le a_{1}+b_{1}x\) and \(\rho ^{ \frac{p}{2}}_{\eta }(x^{\frac{2}{p}})\le a_{2}+b_{2}x \). Consequently,

$$\begin{aligned} \hat{\mathbb{E}}\sup_{{0} \le t\le t_{1}} \bigl\vert y_{\varepsilon }(t) \bigr\vert ^{p} \le &C_{3}+C_{4} \int _{0}^{t_{1}}\hat{\mathbb{E}}\sup _{0\le s\le t} \bigl\vert y_{ \varepsilon }(s) \bigr\vert ^{p}\,dt, \end{aligned}$$

where \(C_{3}=C_{1}+2^{p}L^{p}\varepsilon _{0}a_{1}T\bar{\sigma }^{2}+2^{p}(C+p)L^{p} \varepsilon _{0}a_{2}T\bar{\sigma }^{2}\), \(C_{4}=C_{2}+2^{p} \varepsilon _{0}b_{1}L^{p}\bar{\sigma }^{2}+2^{p}(C+p)L^{p} \varepsilon _{0}b_{2}\bar{\sigma }^{2}\). Then the Gronwall inequality gives

$$\begin{aligned} \hat{\mathbb{E}}\sup_{{0} \le t\le T} \bigl\vert y_{\varepsilon }(t) \bigr\vert ^{p} \le C_{3}e^{C_{4}T}. \end{aligned}$$

The proof is therefore complete. □

Proof of Theorem 3.5

From (3.1) and (3.6), we have

$$\begin{aligned} x_{\varepsilon }(t)-y_{\varepsilon }(t) =&\varepsilon \int _{0}^{t}\bigl[f\bigl(s,x_{ \varepsilon }(s) \bigr)-\bar{f}\bigl(y_{\varepsilon }(s)\bigr)\bigr]\,ds+\sqrt{\varepsilon } \int _{0}^{t}\bigl[h\bigl(s,x_{\varepsilon }(s) \bigr)-\bar{h}\bigl(y_{\varepsilon }(s)\bigr)\bigr]\,d \langle B\rangle _{s} \\ & {} +\sqrt{\varepsilon } \int _{0}^{t}\bigl[g\bigl(s,x_{\varepsilon }(s) \bigr)-\bar{g}\bigl(y_{ \varepsilon }(s)\bigr)\bigr]\,dB_{s}\quad \textit{q.s.} \end{aligned}$$

By the G-Itô formula [20], we have

$$\begin{aligned} \bigl\vert x_{\varepsilon }(t)-y_{\varepsilon }(t) \bigr\vert ^{2} =& \int _{0}^{t}2 \varepsilon \bigl[x_{\varepsilon }(s)-y_{\varepsilon }(s) \bigr]^{\top }\bigl[f\bigl(s,x_{ \varepsilon }(s)\bigr)-\bar{f} \bigl(y_{\varepsilon }(s)\bigr)\bigr]\,ds \\ &{}+ \int _{0}^{t}2\sqrt{\varepsilon } \bigl[x_{\varepsilon }(s)-y_{\varepsilon }(s)\bigr]^{ \top } \bigl[g \bigl(s,x_{\varepsilon }(s)\bigr)-\bar{g}\bigl(y_{\varepsilon }(s)\bigr)\bigr] \,dB_{s} \\ &{}+ \int _{0}^{t} \bigl(2\sqrt{\varepsilon } \bigl[x_{\varepsilon }(s)-y_{ \varepsilon }(s)\bigr]^{\top }\bigl[h \bigl(s,x_{\varepsilon }(s)\bigr)-\bar{h}\bigl(y_{ \varepsilon }(s)\bigr) \bigr] \\ &{}+\varepsilon \bigl\vert g\bigl(s,x_{\varepsilon }(s)\bigr)-\bar{g} \bigl(y_{\varepsilon }(s)\bigr) \bigr\vert ^{2} \bigr)\,d\langle B \rangle _{s} \quad \textit{q.s.} \end{aligned}$$
(3.18)

Taking the supremum over \(t\in [0,u]\) and then the G-expectation on both sides of (3.18), it follows that, for any \(u\in R_{+}\),

$$\begin{aligned} \hat{\mathbb{E}}\sup_{0\le t\le u} \bigl\vert x_{\varepsilon }(t)-y_{\varepsilon }(t) \bigr\vert ^{2} \le &2 \varepsilon \hat{\mathbb{E}} \int _{0}^{u} \bigl\vert x_{\varepsilon }(s)-y_{ \varepsilon }(s) \bigr\vert \bigl\vert f\bigl(s,x_{\varepsilon }(s)\bigr)-\bar{f} \bigl(y_{\varepsilon }(s)\bigr) \bigr\vert \,ds \\ &{}+2\sqrt{\varepsilon }\hat{\mathbb{E}}\sup_{0\le t\le u} \int _{0}^{t}\bigl[x_{ \varepsilon }(s)-y_{\varepsilon }(s) \bigr]^{\top } \bigl[g\bigl(s,x_{\varepsilon }(s)\bigr)- \bar{g} \bigl(y_{\varepsilon }(s)\bigr)\bigr]\,dB_{s} \\ &{}+\hat{\mathbb{E}}\sup_{0\le t\le u} \int _{0}^{t} \bigl(2\sqrt{ \varepsilon }\bigl[ x_{\varepsilon }(s)-y_{\varepsilon }(s)\bigr]^{\top }\bigl[h \bigl(s,x_{ \varepsilon }(s)\bigr)-\bar{h}\bigl(y_{\varepsilon }(s)\bigr)\bigr] \\ &{}+\varepsilon \bigl\vert g\bigl(s,x_{\varepsilon }(s)\bigr)-\bar{g} \bigl(y_{\varepsilon }(s)\bigr) \bigr\vert ^{2} \bigr)\,d\langle B \rangle _{s}=\sum_{i=1}^{3}Q_{i}. \end{aligned}$$
(3.19)

By Assumption 3.1 and the basic inequality \(2ab\le a^{2}+b^{2}\), we have

$$\begin{aligned} Q_{1} \le &2\varepsilon \hat{\mathbb{E}} \int _{0}^{u} \bigl\vert x_{\varepsilon }(s)-y_{ \varepsilon }(s) \bigr\vert \bigl\vert f\bigl(s,x_{\varepsilon }(s)\bigr)-f \bigl(s,y_{\varepsilon }(s)\bigr) \bigr\vert \,ds \\ &{}+2\varepsilon \hat{\mathbb{E}} \int _{0}^{u} \bigl\vert x_{\varepsilon }(s)-y_{ \varepsilon }(s) \bigr\vert \bigl\vert f\bigl(s,y_{\varepsilon }(s)\bigr)-\bar{f} \bigl(y_{\varepsilon }(s)\bigr) \bigr\vert \,ds \\ \le &2L\varepsilon \hat{\mathbb{E}} \int _{0}^{u} \bigl\vert x_{\varepsilon }(s)-y_{ \varepsilon }(s) \bigr\vert ^{2}k_{1}\bigl( \bigl\vert x_{\varepsilon }(s)-y_{\varepsilon }(s) \bigr\vert \bigr)\,ds+ \varepsilon \hat{\mathbb{E}} \int _{0}^{u} \bigl\vert x_{\varepsilon }(s)-y_{ \varepsilon }(s) \bigr\vert ^{2}\,ds \\ &{}+\varepsilon \hat{\mathbb{E}} \int _{0}^{u} \bigl\vert f\bigl(s,y_{\varepsilon }(s) \bigr)- \bar{f}\bigl(y_{\varepsilon }(s)\bigr) \bigr\vert ^{2}\,ds. \end{aligned}$$

Then Assumption 3.3 implies that

$$\begin{aligned} Q_{1} \le &2L\varepsilon \hat{\mathbb{E}} \int _{0}^{u} \bigl\vert x_{ \varepsilon }(s)-y_{\varepsilon }(s) \bigr\vert ^{2}k_{1}\bigl( \bigl\vert x_{\varepsilon }(s)-y_{ \varepsilon }(s) \bigr\vert \bigr)\,ds \\ &{}+\varepsilon \hat{\mathbb{E}} \int _{0}^{u} \bigl\vert x_{\varepsilon }(s)-y_{ \varepsilon }(s) \bigr\vert ^{2}\,ds +\varepsilon u \psi (u) \Bigl(1+\hat{\mathbb{E}} \sup_{0\le s\le u} \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{2} \Bigr). \end{aligned}$$
(3.20)

By Lemmas 2.7–2.8 and the Young inequality, we get

$$\begin{aligned} Q_{2} \le &C\sqrt{\varepsilon }\hat{\mathbb{E}} \biggl[ \int _{0}^{u} \bigl\vert x_{ \varepsilon }(s)-y_{\varepsilon }(s) \bigr\vert ^{2} \bigl\vert g\bigl(s,x_{\varepsilon }(s) \bigr)- \bar{g}\bigl(y_{\varepsilon }(s)\bigr) \bigr\vert ^{2}\,d \langle B\rangle _{s} \biggr]^{ \frac{1}{2}} \\ \le &C\sqrt{\varepsilon }\hat{\mathbb{E}}\biggl[\sup_{0 \le s\le u} \bigl\vert x_{ \varepsilon }(s)-y_{\varepsilon }(s) \bigr\vert ^{2} \int _{0}^{u} \bigl\vert g\bigl(s,x_{ \varepsilon }(s) \bigr)-\bar{g}\bigl(y_{\varepsilon }(s)\bigr) \bigr\vert ^{2}\,d \langle B\rangle _{s}\biggr]^{ \frac{1}{2}} \\ \le &\frac{1}{2}\hat{\mathbb{E}}\sup_{0 \le s\le u} \bigl\vert x_{ \varepsilon }(s)-y_{\varepsilon }(s) \bigr\vert ^{2}+C\bar{\sigma }^{2}\sqrt{ \varepsilon }\hat{\mathbb{E}} \int _{0}^{u} \bigl\vert g\bigl(s,x_{\varepsilon }(s) \bigr)- \bar{g}\bigl(y_{\varepsilon }(s)\bigr) \bigr\vert ^{2}\,ds. \end{aligned}$$

Similarly, by Assumptions 3.1 and 3.3, we have

$$\begin{aligned} Q_{2} \le &\frac{1}{2}\hat{\mathbb{E}}\sup _{0 \le s\le u} \bigl\vert x_{ \varepsilon }(s)-y_{\varepsilon }(s) \bigr\vert ^{2}+2C\bar{\sigma }^{2}\sqrt{ \varepsilon }\hat{ \mathbb{E}} \int _{0}^{u} \bigl\vert g\bigl(s,x_{\varepsilon }(s) \bigr)-g\bigl(s,y_{ \varepsilon }(s)\bigr) \bigr\vert ^{2}\,ds \\ & {} +2C\bar{\sigma }^{2}\sqrt{\varepsilon }\hat{\mathbb{E}} \int _{0}^{u} \bigl\vert g\bigl(s,y_{ \varepsilon }(s) \bigr)-\bar{g}\bigl(y_{\varepsilon }(s)\bigr) \bigr\vert ^{2}\,ds \\ \le & \frac{1}{2}\hat{\mathbb{E}}\sup_{0 \le s\le u} \bigl\vert x_{ \varepsilon }(s)-y_{\varepsilon }(s) \bigr\vert ^{2}+2C \bar{\sigma }^{2}\sqrt{ \varepsilon } u \psi (u) \Bigl(1+\hat{ \mathbb{E}}\sup_{0\le s\le u} \bigl\vert y_{ \varepsilon }(s) \bigr\vert ^{2} \Bigr) \\ & {} +2C\bar{\sigma }^{2}L\sqrt{\varepsilon } \hat{\mathbb{E}} \int _{0}^{u} \bigl\vert x_{ \varepsilon }(s)-y_{\varepsilon }(s) \bigr\vert ^{2}k_{2}\bigl( \bigl\vert x_{\varepsilon }(s)-y_{ \varepsilon }(s) \bigr\vert \bigr)\,ds. \end{aligned}$$
(3.21)

By Lemma 2.7, Assumptions 3.1–3.3, and the basic inequality \(2ab\le a^{2}+b^{2}\), we have

$$\begin{aligned} Q_{3} \le &\bar{\sigma }^{2}\hat{\mathbb{E}} \int _{0}^{u} \bigl(2\sqrt{ \varepsilon } \bigl\vert x_{\varepsilon }(s)-y_{\varepsilon }(s) \bigr\vert \bigl\vert h \bigl(s,x_{ \varepsilon }(s)\bigr)-\bar{h}\bigl(y_{\varepsilon }(s)\bigr) \bigr\vert \\ & {} +\varepsilon \bigl\vert g\bigl(s,x_{\varepsilon }(s)\bigr)-\bar{g} \bigl(y_{\varepsilon }(s)\bigr) \bigr\vert ^{2} \bigr)\,ds \\ \le &2L\bar{\sigma }^{2}\sqrt{\varepsilon } \hat{\mathbb{E}} \int _{0}^{u} \bigl\vert x_{ \varepsilon }(s)-y_{\varepsilon }(s) \bigr\vert ^{2}k_{1}\bigl( \bigl\vert x_{\varepsilon }(s)-y_{ \varepsilon }(s) \bigr\vert \bigr)\,ds \\ & {} +\bar{\sigma }^{2}\sqrt{\varepsilon } \hat{\mathbb{E}} \int _{0}^{u} \bigl\vert x_{ \varepsilon }(s)-y_{\varepsilon }(s) \bigr\vert ^{2}\,ds +\bar{\sigma }^{2}\sqrt{ \varepsilon } u \psi (u) \Bigl(1+\hat{\mathbb{E}}\sup_{0\le s\le u} \bigl\vert y_{ \varepsilon }(s) \bigr\vert ^{2} \Bigr) \\ & {} +2L\bar{\sigma }^{2}\varepsilon \hat{\mathbb{E}} \int _{0}^{u} \bigl\vert x_{ \varepsilon }(s)-y_{\varepsilon }(s) \bigr\vert ^{2}k_{2}\bigl( \bigl\vert x_{\varepsilon }(s)-y_{ \varepsilon }(s) \bigr\vert \bigr)\,ds \\ & {} +2\bar{\sigma }^{2}\varepsilon u\psi (u) \Bigl(1+\hat{ \mathbb{E}}\sup_{0 \le s\le u} \bigl\vert y_{\varepsilon }(s) \bigr\vert ^{2} \Bigr). \end{aligned}$$
(3.22)

Combining (3.19)–(3.22), we get

$$\begin{aligned}& \frac{1}{2}\hat{\mathbb{E}}\sup_{0\le t\le u} \bigl\vert x_{\varepsilon }(t)-y_{\varepsilon }(t) \bigr\vert ^{2} \\& \quad \le \bigl(2L\varepsilon +2L\bar{\sigma }^{2}\sqrt{\varepsilon } \bigr) \hat{\mathbb{E}} \int _{0}^{u} \bigl\vert x_{\varepsilon }(s)-y_{\varepsilon }(s) \bigr\vert ^{2}k_{1}\bigl( \bigl\vert x_{ \varepsilon }(s)-y_{\varepsilon }(s) \bigr\vert \bigr)\,ds \\& \quad \quad {} +\bigl(\varepsilon +\bar{\sigma }^{2}\sqrt{\varepsilon }\bigr) \hat{\mathbb{E}} \int _{0}^{u} \bigl\vert x_{\varepsilon }(s)-y_{\varepsilon }(s) \bigr\vert ^{2}\,ds \\& \quad \quad {} +\bigl(2C\bar{\sigma }^{2}L\sqrt{\varepsilon }+2L\bar{ \sigma }^{2} \varepsilon \bigr) \hat{\mathbb{E}} \int _{0}^{u} \bigl\vert x_{\varepsilon }(s)-y_{ \varepsilon }(s) \bigr\vert ^{2}k_{2}\bigl( \bigl\vert x_{\varepsilon }(s)-y_{\varepsilon }(s) \bigr\vert \bigr)\,ds \\& \quad \quad {} +\bigl[\bigl(1+2\bar{\sigma }^{2}\bigr)\varepsilon +(1+2C)\bar{\sigma }^{2}\sqrt{ \varepsilon }\bigr]u \psi (u) \Bigl(1+ \hat{\mathbb{E}}\sup_{0\le s\le u} \bigl\vert y_{ \varepsilon }(s) \bigr\vert ^{2} \Bigr). \end{aligned}$$
(3.23)

Similarly, by condition (3.4), we can find an \(\eta \in (0,e^{-1})\) such that

$$\begin{aligned} \bigl\vert x_{\varepsilon }(s)-y_{\varepsilon }(s) \bigr\vert ^{2}k_{i}\bigl( \bigl\vert x_{\varepsilon }(s)-y_{ \varepsilon }(s) \bigr\vert \bigr) \le \rho _{\eta }\bigl( \bigl\vert x_{\varepsilon }(s)-y_{\varepsilon }(s) \bigr\vert ^{2}\bigr), \quad i=1,2, \end{aligned}$$

where \(\rho _{\eta }(\cdot )\) is defined by (3.17). Consequently, by Lemma 3.8 and the boundedness of \(\psi (u)\), we have

$$\begin{aligned}& \hat{\mathbb{E}}\sup_{0\le t\le u} \bigl\vert x_{\varepsilon }(t)-y_{\varepsilon }(t) \bigr\vert ^{2} \\& \quad \le 4L(1+C) \bigl(1+\bar{\sigma }^{2}\bigr) (\varepsilon + \sqrt{\varepsilon }) \hat{\mathbb{E}} \int _{0}^{u}\rho _{\eta }\bigl( \bigl\vert x_{\varepsilon }(s)-y_{ \varepsilon }(s) \bigr\vert ^{2}\bigr) \,ds \\& \quad\quad {} +2\bigl(1+\bar{\sigma }^{2}\bigr) (\varepsilon +\sqrt{ \varepsilon }) \hat{\mathbb{E}} \int _{0}^{u} \bigl\vert x_{\varepsilon }(s)-y_{\varepsilon }(s) \bigr\vert ^{2}\,ds+ 2(1+2C) \bigl(1+2\bar{\sigma }^{2} \bigr)M(\varepsilon +\sqrt{\varepsilon })u \\& \quad \le C_{5}(\varepsilon + \sqrt{\varepsilon }) \int _{0}^{u} \Bigl( \hat{\mathbb{E}}\sup _{0\le t\le s} \bigl\vert x_{\varepsilon }(t)-y_{\varepsilon }(t) \bigr\vert ^{2}+ \rho _{\eta }\Bigl(\hat{\mathbb{E}}\sup _{0\le t\le s} \bigl\vert x_{\varepsilon }(t)-y_{ \varepsilon }(t) \bigr\vert ^{2}\Bigr) \Bigr)\,ds \\& \quad\quad{} +C_{6}(\varepsilon +\sqrt{ \varepsilon })u, \end{aligned}$$

where \(C_{5}=4L(1+C)(1+\bar{\sigma }^{2})\) and \(C_{6}=2(1+2C)(1+2\bar{\sigma }^{2})M\). Letting \(\tilde{\rho }_{\eta }(x)=x+\rho _{\eta }(x)\), we obtain

$$\begin{aligned} \hat{\mathbb{E}}\sup_{0\le t\le u} \bigl\vert x_{\varepsilon }(t)-y_{\varepsilon }(t) \bigr\vert ^{2} \le &C_{5}(\varepsilon + \sqrt{\varepsilon }) \int _{0}^{u} \tilde{\rho }_{\eta }\Bigl( \hat{\mathbb{E}}\sup_{0\le t\le s} \bigl\vert x_{ \varepsilon }(t)-y_{\varepsilon }(t) \bigr\vert ^{2}\Bigr)\,ds \\ & {} +C_{6}(\varepsilon +\sqrt{\varepsilon })u. \end{aligned}$$
(3.24)

Obviously, \(\tilde{\rho }_{\eta }(x)\) is a concave function on \(R_{+}\). Hence, by the properties of concave functions, we can find a pair of positive constants \(a\) and \(b\) such that

$$\begin{aligned} \tilde{\rho }_{\eta }(x)\le a+bx\quad \text{for any }x\ge 0. \end{aligned}$$
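For concreteness, one admissible choice of the constants can be read off directly (assuming, as in the usual constructions of \(\rho _{\eta }\), that \(\rho _{\eta }(0^{+})=0\)): since \(\tilde{\rho }_{\eta }\) is then concave, nondecreasing, and vanishes at 0, the ratio \(x\mapsto \tilde{\rho }_{\eta }(x)/x\) is nonincreasing, so that

$$\begin{aligned} \tilde{\rho }_{\eta }(x)\le \tilde{\rho }_{\eta }(1)\max \{1,x\}\le \tilde{\rho }_{\eta }(1) (1+x),\quad x>0, \end{aligned}$$

and one may take \(a=b=\tilde{\rho }_{\eta }(1)\).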

Therefore, (3.24) becomes

$$\begin{aligned} \hat{\mathbb{E}}\sup_{0\le t\le u} \bigl\vert x_{\varepsilon }(t)-y_{\varepsilon }(t) \bigr\vert ^{2} \le &C_{5}b(\varepsilon + \sqrt{ \varepsilon }) \int _{0}^{u} \hat{\mathbb{E}}\sup _{0\le t\le s} \bigl\vert x_{\varepsilon }(t)-y_{\varepsilon }(t) \bigr\vert ^{2}\,ds \\ & {} +(C_{5}a+C_{6}) (\varepsilon +\sqrt{\varepsilon })u. \end{aligned}$$

Hence, the Gronwall inequality implies that

$$\begin{aligned} \hat{\mathbb{E}}\sup_{0\le t\le u} \bigl\vert x_{\varepsilon }(t)-y_{\varepsilon }(t) \bigr\vert ^{2} \le &\bigl[(C_{5}a+C_{6}) ( \varepsilon +\sqrt{\varepsilon })u\bigr]e^{C_{5}b( \varepsilon + \sqrt{\varepsilon })u}. \end{aligned}$$

Choose \(\beta \in (\frac{1}{2},1)\) and \(\bar{L}>0\), and take \(u=\bar{L}\varepsilon ^{\frac{1}{2}-\beta }\subseteq R_{+}\) in the above estimate; then

$$\begin{aligned} \hat{\mathbb{E}}\sup_{0\le t\le \bar{L}\varepsilon ^{\frac{1}{2}- \beta }} \bigl\vert x_{\varepsilon }(t)-y_{\varepsilon }(t) \bigr\vert ^{2}\le N\bar{L} \varepsilon ^{1-\beta }, \end{aligned}$$

where \(N=(C_{5}a+C_{6})(1+\sqrt{\varepsilon _{0}})e^{C_{5}b\bar{L}( \varepsilon _{0}^{\frac{3}{2}-\beta }+\varepsilon _{0}^{1-\beta })}\). Consequently, given any number \(\delta _{1}>0\), we can choose \(\varepsilon _{1}\in (0,\varepsilon _{0}]\) such that, for each \(\varepsilon \in (0,\varepsilon _{1}]\),

$$\begin{aligned} \hat{\mathbb{E}} \Bigl(\sup_{t\in [0,\bar{L}\varepsilon ^{\frac{1}{2}- \beta }]} \bigl\vert x_{\varepsilon }(t)-y_{\varepsilon }(t) \bigr\vert ^{2} \Bigr)\le \delta _{1}. \end{aligned}$$
(3.25)

This completes the proof. □
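To see what the proof delivers quantitatively, note that for \(\beta \in (\frac{1}{2},1)\) the validity horizon \(\bar{L}\varepsilon ^{\frac{1}{2}-\beta }\) lengthens while the mean-square bound \(N\bar{L}\varepsilon ^{1-\beta }\) vanishes as \(\varepsilon \to 0\). The following minimal sketch tabulates both quantities; the constants \(N=\bar{L}=1\) and \(\beta =0.75\) are illustrative placeholders, not values from the theorem.

```python
# Trade-off in Theorem 3.5: with beta in (1/2, 1), the horizon
# L * eps**(1/2 - beta) grows while the mean-square error bound
# N * L * eps**(1 - beta) shrinks as eps -> 0.
beta, N, L = 0.75, 1.0, 1.0  # illustrative constants

rows = []
for eps in (1e-2, 1e-4, 1e-6):
    horizon = L * eps ** (0.5 - beta)    # length of [0, L eps^{1/2 - beta}]
    bound = N * L * eps ** (1.0 - beta)  # bound on E sup |x_eps - y_eps|^2
    rows.append((eps, horizon, bound))
    print(f"eps={eps:.0e}  horizon={horizon:10.2f}  bound={bound:.2e}")
```

Both effects are driven by the same exponent gap: the closer \(\beta \) is to \(\frac{1}{2}\), the tighter the error bound but the shorter the horizon.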

With Theorem 3.5 in hand, we can show that the processes \(x_{\varepsilon }(t)\) and \(y_{\varepsilon }(t)\) also converge in capacity.

Theorem 3.9

Let Assumptions 3.1–3.3 hold. Then, for any given arbitrarily small number \(\delta _{2}>0\) and arbitrary constants \(\bar{L}>0\), \(\beta \in (\frac{1}{2},1)\), there exists a number \(\varepsilon _{1}\in (0,\varepsilon _{0}]\) such that, for all \(\varepsilon \in (0,\varepsilon _{1}]\),

$$\begin{aligned} \lim_{\varepsilon \to 0}\hat{\mathbb{C}}\Bigl(\sup_{0< t\le \bar{L} \varepsilon ^{\frac{1}{2}-\beta }} \bigl\vert x_{\varepsilon }(t)-y_{\varepsilon }(t) \bigr\vert > \delta _{2}\Bigr)=0. \end{aligned}$$

Proof

By Lemma 2.10, for any given number \(\delta _{2}>0\), we obtain

$$ \hat{\mathbb{C}}\Bigl(\sup_{0< t\le \bar{L}\varepsilon ^{\frac{1}{2}-\beta }} \bigl\vert x_{ \varepsilon }(t)-y_{\varepsilon }(t) \bigr\vert >\delta _{2}\Bigr) \le \frac{1}{\delta ^{2}_{2}}\hat{\mathbb{E}}\Bigl(\sup_{0< t\le \bar{L} \varepsilon ^{\frac{1}{2}-\beta }} \bigl\vert x_{\varepsilon }(t)-y_{\varepsilon }(t) \bigr\vert ^{2}\Bigr) \le \frac{N\bar{L}\varepsilon ^{1-\beta }}{\delta _{2}^{2}}. $$

Letting \(\varepsilon \to 0\) yields the required result. The proof is therefore complete. □

Remark 3.10

Let \(\underline{\sigma }^{2}=-\hat{\mathbb{E}}(-X^{2})\). If \(\underline{\sigma }^{2}=\bar{\sigma }^{2}\), then Eq. (1.1) reduces to a classical SDE of the type studied in [8, 11, 14, 24]. Under our assumptions, we obtain the convergence between the averaged solution and the standard one for G-SDEs. Hence, the corresponding results in [8, 11, 14, 24] are generalized and improved.

Examples

In this section, we construct two examples to illustrate our theory.

Remark 4.1

If we let \(k_{1}(\cdot )=k_{2}(\cdot )=1\), then Assumption 3.1 reduces to the Lipschitz condition: there exists a positive constant \(L\) such that, for any \(x,y\in R^{n}\) and \(t\ge 0\),

$$\begin{aligned} \bigl\vert f(t,x)-f(t,y) \bigr\vert + \bigl\vert h(t,x)-h(t,y) \bigr\vert \le L \vert x-y \vert \end{aligned}$$

and

$$\begin{aligned} \bigl\vert g(t,x)-g(t,y) \bigr\vert ^{2} \le L \vert x-y \vert ^{2}. \end{aligned}$$

Clearly, under the Lipschitz condition and Assumptions 3.2–3.3, Theorems 3.5 and 3.9 continue to hold.

Example 4.2

Consider the linear SDE driven by G-Brownian motion

$$\begin{aligned} dx_{\varepsilon }(t)= \varepsilon \sin t x_{\varepsilon }(t)\,dt+\sqrt{ \varepsilon }\sin t x_{\varepsilon }(t)\,d\langle B\rangle _{t}+2\sqrt{ \varepsilon }\sin ^{2}(t)x_{\varepsilon }(t) \,dB_{t}, \end{aligned}$$
(4.1)

with the initial data \(x_{\varepsilon }(0)=x_{0}\). Here,

$$ f\bigl(t,x_{\varepsilon }(t)\bigr)=\varepsilon \sin t \,x_{\varepsilon }(t),\qquad h\bigl(t,x_{ \varepsilon }(t)\bigr)=\sqrt{\varepsilon }\sin t \,x_{\varepsilon }(t), \qquad g\bigl(t,x_{ \varepsilon }(t)\bigr)=2\sqrt{\varepsilon }\sin ^{2} t \,x_{\varepsilon }(t). $$

Let

$$ \bar{f}\bigl(y_{\varepsilon }(t)\bigr)=\frac{1}{\pi } \int _{0}^{\pi }f\bigl(t,y_{ \varepsilon }(t)\bigr) \,dt=\frac{2\varepsilon }{\pi } y_{\varepsilon }(t),\qquad \bar{h}\bigl(y_{ \varepsilon }(t) \bigr)=\frac{1}{\pi } \int _{0}^{\pi }h\bigl(t,y_{\varepsilon }(t)\bigr) \,dt= \frac{2\sqrt{\varepsilon }}{\pi } y_{\varepsilon }(t) $$

and

$$ \bar{g}\bigl(y_{\varepsilon }(t)\bigr)= \frac{1}{\pi } \int _{0}^{\pi }g\bigl(t,y_{ \varepsilon }(t)\bigr) \,dt=\sqrt{\varepsilon }y_{\varepsilon }(t), $$

where we used \(\frac{1}{\pi }\int _{0}^{\pi }\sin t \,dt=\frac{2}{\pi }\) and \(\frac{1}{\pi }\int _{0}^{\pi }2\sin ^{2} t \,dt=1\). We obtain the corresponding averaged equation as follows:

$$\begin{aligned} dy_{\varepsilon }(t)= \frac{2\varepsilon }{\pi } y_{\varepsilon }(t) \,dt+\frac{2\sqrt{\varepsilon }}{\pi } y_{\varepsilon }(t)\,d\langle B\rangle _{t}+ \sqrt{ \varepsilon }y_{\varepsilon }(t)\,dB_{t}. \end{aligned}$$
(4.2)

Obviously, Eq. (4.2) is also a linear G-SDE, and its solution is given by

$$\begin{aligned} y_{\varepsilon }(t) =&x_{0} \exp \biggl( \frac{2\varepsilon }{\pi } t+\biggl(\frac{2\sqrt{\varepsilon }}{\pi }- \frac{1}{2}\varepsilon \biggr) \langle B\rangle _{t}+\sqrt{\varepsilon }B_{t} \biggr). \end{aligned}$$
(4.3)

By Theorems 3.5 and 3.9, the solution (4.3) of the averaged G-SDE (4.2) converges to that of the original equation (4.1) in the mean-square sense and in capacity.
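As a numerical sanity check, the convergence in Example 4.2 can be observed by simulation. The sketch below is restricted to the classical special case \(\underline{\sigma }^{2}=\bar{\sigma }^{2}=1\) of Remark 3.10, where \(B\) is a standard Brownian motion and \(\langle B\rangle _{t}=t\). It uses a plain Euler–Maruyama scheme driven by the same Brownian increments for both equations, computes the period averages of the coefficients numerically, and all step sizes, path counts, and the time window are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def ms_gap(eps, T=1.0, n=4000, n_paths=200, x0=1.0):
    """Mean-square gap between Eq. (4.1) and its averaged equation,
    both driven by the same increments (classical case <B>_t = t)."""
    dt = T / n
    # period averages of the coefficients over [0, pi], computed numerically
    s = np.linspace(0.0, np.pi, 100_001)
    a_bar = np.sin(s).mean()               # (1/pi) int_0^pi sin t dt
    g_bar = (2.0 * np.sin(s) ** 2).mean()  # (1/pi) int_0^pi 2 sin^2 t dt
    x = np.full(n_paths, x0)
    y = np.full(n_paths, x0)
    for i in range(n):
        t = i * dt
        dB = rng.normal(0.0, np.sqrt(dt), n_paths)
        # Eq. (4.1); the d<B>_t term contributes an ordinary dt drift here
        x = x + eps * np.sin(t) * x * dt \
              + np.sqrt(eps) * np.sin(t) * x * dt \
              + 2.0 * np.sqrt(eps) * np.sin(t) ** 2 * x * dB
        # averaged equation: same structure with time-averaged coefficients
        y = y + eps * a_bar * y * dt \
              + np.sqrt(eps) * a_bar * y * dt \
              + np.sqrt(eps) * g_bar * y * dB
    return float(np.mean((x - y) ** 2))

gaps = {eps: ms_gap(eps) for eps in (0.1, 0.01, 0.001)}
for eps, g in gaps.items():
    print(f"eps={eps}: mean-square gap ~ {g:.3e}")
```

One should observe the mean-square gap shrinking as \(\varepsilon \) decreases.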

Example 4.3

Consider the SDE driven by G-Brownian motion in the standard form

$$\begin{aligned} dx_{\varepsilon }(t)=\varepsilon \cos ^{2}t \sum_{k\ge 1} \frac{\sin (kx_{\varepsilon }(t))}{k^{2}}\,dt+\sqrt{\varepsilon } \sum_{k \ge 1}\frac{\sin (kx_{\varepsilon }(t))}{\sqrt{k^{3}}}\,dB_{t}, \end{aligned}$$
(4.4)

with the initial data \(x_{\varepsilon }(0)=x_{0}\). Here,

$$ f(t,x)=\cos ^{2}t\sum_{k\ge 1} \frac{\sin (kx)}{k^{2}},\qquad h(t,x)=0, \quad \text{and}\quad g(t,x)=\sum _{k\ge 1}\frac{\sin (kx)}{\sqrt{k^{3}}}. $$

Let

$$ \bar{f}\bigl(y_{\varepsilon }(t)\bigr)=\frac{1}{\pi } \int _{0}^{\pi }f\bigl(t,y_{ \varepsilon }(t)\bigr) \,dt=\frac{1}{2}\sum_{k\ge 1} \frac{\sin (ky_{\varepsilon }(t))}{k^{2}} $$

and

$$ \bar{g}\bigl(y_{\varepsilon }(t)\bigr)= \frac{1}{\pi } \int _{0}^{\pi }g\bigl(t,y_{ \varepsilon }(t)\bigr) \,dt=\sum_{k\ge 1} \frac{\sin (ky_{\varepsilon }(t))}{\sqrt{k^{3}}}, $$

we have the corresponding averaged equation

$$\begin{aligned} dy_{\varepsilon }(t)= \frac{\varepsilon }{2} \sum _{k\ge 1} \frac{\sin (ky_{\varepsilon }(t))}{k^{2}}\,dt+\sqrt{\varepsilon }\sum _{k \ge 1}\frac{\sin (ky_{\varepsilon }(t))}{\sqrt{k^{3}}}\,dB_{t}. \end{aligned}$$
(4.5)

In fact, Eq. (4.5) is not a linear stochastic equation, and its analytic solution is not available. By [1], we have

$$\begin{aligned} \bigl\vert f(t,x)-f(t,y) \bigr\vert \le & \sum _{k\ge 1} \frac{ \vert \sin (kx)-\sin (ky) \vert }{k^{2}} \\ \le & 2\sum_{k\ge 1}\frac{ \vert \sin \frac{kx-ky}{2} \vert }{k^{2}}\le L \vert x-y \vert \bar{\rho }\bigl( \vert x-y \vert \bigr) \end{aligned}$$

and

$$\begin{aligned} \bigl\vert g(t,x)-g(t,y) \bigr\vert ^{2} \le & \sum _{k\ge 1} \frac{ \vert \sin (kx)-\sin (ky) \vert ^{2}}{k^{3}} \\ \le & 4\sum_{k\ge 1}\frac{ \vert \sin \frac{kx-ky}{2} \vert ^{2}}{k^{3}}\le L \vert x-y \vert ^{2} \bar{\rho }\bigl( \vert x-y \vert \bigr), \end{aligned}$$

where

$$\begin{aligned} \bar{\rho }(x):= \textstyle\begin{cases} \log x^{-1} , & x\le \eta , \\ \log \eta ^{-1}-1+\frac{\eta }{x}, & x>\eta . \end{cases}\displaystyle \end{aligned}$$

It is easy to verify that the conditions of Theorems 3.5 and 3.9 are satisfied; hence the solution of the averaged G-SDE (4.5) converges to that of the original equation (4.4) in the mean-square sense and in capacity.
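Although (4.5) has no closed-form solution, the convergence can still be illustrated numerically. As in the previous example, the sketch below assumes the classical special case \(\langle B\rangle _{t}=t\) and uses Euler–Maruyama with both equations driven by the same increments; the series truncation level `K`, the initial value, and all discretization parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
K = 50                      # truncation level of the infinite series (illustrative)
k = np.arange(1, K + 1)

def drift_series(x):
    # truncated sum_{k>=1} sin(k x) / k^2
    return np.sum(np.sin(np.outer(x, k)) / k**2, axis=1)

def diff_series(x):
    # truncated sum_{k>=1} sin(k x) / k^{3/2}
    return np.sum(np.sin(np.outer(x, k)) / k**1.5, axis=1)

def ms_gap(eps, T=1.0, n=2000, n_paths=100, x0=0.5):
    """Mean-square gap between Eq. (4.4) and the averaged Eq. (4.5),
    driven by the same Brownian increments (classical case <B>_t = t)."""
    dt = T / n
    x = np.full(n_paths, x0)
    y = np.full(n_paths, x0)
    for i in range(n):
        t = i * dt
        dB = rng.normal(0.0, np.sqrt(dt), n_paths)
        # Eq. (4.4): drift eps cos^2(t) * series, diffusion sqrt(eps) * series
        x = x + eps * np.cos(t) ** 2 * drift_series(x) * dt \
              + np.sqrt(eps) * diff_series(x) * dB
        # Eq. (4.5): cos^2 t averages to 1/2, the diffusion part is unchanged
        y = y + 0.5 * eps * drift_series(y) * dt \
              + np.sqrt(eps) * diff_series(y) * dB
    return float(np.mean((x - y) ** 2))

gaps = {eps: ms_gap(eps) for eps in (0.1, 0.01, 0.001)}
print(gaps)
```

Since the two diffusion coefficients coincide, the gap is driven mainly by the drift mismatch \(\varepsilon (\cos ^{2}t-\frac{1}{2})\), and it should decrease as \(\varepsilon \) decreases.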

Availability of data and materials

Not applicable.

References

1. Airault, H., Ren, J.: Modulus of continuity of the canonic Brownian motion on the group of diffeomorphisms of the circle. J. Funct. Anal. 196, 395–426 (2002)
2. Bao, J., Yin, G., Yuan, C.: Two-time-scale stochastic partial differential equations driven by alpha-stable noise: averaging principles. Bernoulli 23, 645–669 (2017)
3. Bogoliubov, N., Mitropolsky, Y.: Asymptotic Methods in the Theory of Non-linear Oscillations. Gordon & Breach, New York (1961)
4. Cao, G., He, K.: On a type of stochastic differential equations driven by countably many Brownian motions. J. Funct. Anal. 203, 262–285 (2003)
5. Chen, Y., Shi, Y., Sun, X.: Averaging principle for slow-fast stochastic Burgers equation driven by α-stable process. Appl. Math. Lett. 103, 106199 (2020)
6. Denis, L., Hu, M., Peng, S.: Function spaces and capacity related to a sublinear expectation: application to G-Brownian motion paths. Potential Anal. 34, 139–161 (2011)
7. Gao, F.: Pathwise properties and homomorphic flows for stochastic differential equations driven by G-Brownian motion. Stoch. Process. Appl. 119, 3356–3382 (2009)
8. Golec, J., Ladde, G.: Averaging principle and systems of singularly perturbed stochastic differential equations. J. Math. Phys. 31, 1116–1123 (1990)
9. Hale, J.: Averaging methods for differential equations with retarded arguments with a small parameter. J. Differ. Equ. 2, 57–73 (1966)
10. Hu, M., Ji, S., Peng, S., et al.: Backward stochastic differential equations driven by G-Brownian motion. Stoch. Process. Appl. 124, 759–784 (2014)
11. Khasminskii, R.Z.: On the principle of averaging the Itô stochastic differential equations. Kibernetika 4, 260–279 (1968)
12. Khasminskii, R.Z., Yin, G.: On averaging principles: an asymptotic expansion approach. SIAM J. Math. Anal. 35, 1534–1560 (2004)
13. Li, X., Lin, X., Lin, Y.: Lyapunov-type conditions and stochastic differential equations driven by G-Brownian motion. J. Math. Anal. Appl. 439, 235–255 (2016)
14. Liptser, R., Stoyanov, J.: Stochastic version of the averaging principle for diffusion type processes. Stoch. Stoch. Rep. 32, 145–163 (1990)
15. Liu, W., Rockner, M., Sun, X., Xie, Y.: Averaging principle for slow-fast stochastic differential equations with time dependent locally Lipschitz coefficients. J. Differ. Equ. 268, 2910–2948 (2020)
16. Luo, D., Zhu, Q., Luo, Z.: An averaging principle for stochastic fractional differential equations with time delays. Appl. Math. Lett. 105, 106290 (2020)
17. Luo, P., Wang, F.: Stochastic differential equations driven by G-Brownian motion and ordinary differential equations. Stoch. Process. Appl. 124, 3869–3885 (2014)
18. Mao, W., Hu, L., You, S., Mao, X.: The averaging method for multivalued SDEs with jumps and non-Lipschitz coefficients. Discrete Contin. Dyn. Syst., Ser. B 24, 4937–4954 (2019)
19. Mao, X.: Stochastic Differential Equations and Applications, 2nd edn. Horwood, Chichester (2007)
20. Peng, S.: Nonlinear expectations and stochastic calculus under uncertainty. arXiv:1002.4546 (2010)
21. Qiao, H.: Euler–Maruyama approximation for SDEs with jumps and non-Lipschitz coefficients. Osaka J. Math. 51, 47–67 (2014)
22. Ren, Y., Jia, X., Hu, L.: Exponential stability of solutions to impulsive stochastic differential equations driven by G-Brownian motion. Discrete Contin. Dyn. Syst., Ser. B 20, 2157–2169 (2017)
23. Stoyanov, I., Bainov, D.: The averaging method for a class of stochastic differential equations. Ukr. Math. J. 26, 186–194 (1974)
24. Tan, L., Lei, D.: The averaging method for stochastic differential delay equations under non-Lipschitz conditions. Adv. Differ. Equ. 2013, 38 (2013)
25. Wu, F., Yin, G.: An averaging principle for two-time-scale stochastic functional differential equations. J. Differ. Equ. 269, 1037–1077 (2020)
26. Xu, J., Miao, Y.: \(L^{p}\) (\(p>2\))-strong convergence of an averaging principle for two-timescales jump-diffusion stochastic differential equations. Nonlinear Anal. Hybrid Syst. 18, 33–47 (2015)
27. Xu, J., Miao, Y., Liu, J.: Strong averaging principle for slow-fast SPDEs with Poisson random measures. Discrete Contin. Dyn. Syst., Ser. B 20, 2233–2256 (2015)
28. Xu, Y., Duan, J.Q., Xu, W.: An averaging principle for stochastic dynamical systems with Lévy noise. Physica D 240, 1395–1401 (2011)
29. Xu, Y., Pei, B., Li, Y.: Approximation properties for solutions to non-Lipschitz stochastic differential equations with Lévy noise. Math. Methods Appl. Sci. 38, 2120–2131 (2015)
30. Yin, W., Cao, J., Ren, Y.: Quasi-sure exponential stabilization of stochastic systems induced by G-Brownian motion with discrete time feedback control. J. Math. Anal. Appl. 474, 276–289 (2019)
31. Zhang, D., Chen, Z.: Exponential stability for stochastic differential equations driven by G-Brownian motion. Appl. Math. Lett. 25, 1906–1910 (2012)

Acknowledgements

The authors would like to thank the editor and the referees for their valuable comments and suggestions.

Funding

This work is supported by the National Natural Science Foundation of China (Grant No. 11401261).

Author information

Contributions

WM and BC contributed equally to this work. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Bo Chen.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Mao, W., Chen, B. & You, S. On the averaging principle for SDEs driven by G-Brownian motion with non-Lipschitz coefficients. Adv Differ Equ 2021, 71 (2021). https://doi.org/10.1186/s13662-021-03233-y

Keywords

  • Averaging principle
  • Stochastic differential equations
  • Mean-square convergence
  • G-Brownian motion