
On drift parameter estimation for mean-reversion type stochastic differential equations with discrete observations

Abstract

We are concerned with parameter estimation for mean-reversion type stochastic differential equations (SDEs) driven by Brownian motion. The equations, involving a small dispersion parameter, are observed at discrete (regularly spaced) time instants. The least squares method is utilized to derive an asymptotically consistent estimator, and we then discuss the rate of convergence of this estimator. The new feature of our study is that, due to the mean-reversion type drift coefficient in the SDEs, we have to use the Girsanov transformation to simplify the equations; consequently, the convergence of the least squares estimator is established with respect to a family of probability measures indexed by the dispersion parameter, whereas the existing results in the literature deal with convergence with respect to a single given probability measure.

1 Introduction

Let \((\Omega,\mathcal{F},P)\) be a complete probability space endowed with a filtration \(\{\mathcal{F}_{t}\}_{0\leq t\leq1}\) satisfying the usual conditions, i.e., \(\mathcal{F}_{s}\subset\mathcal{F}_{t}\subset\mathcal{F}\) for \(0\leq s\leq t\leq1\) and \(\mathcal{F}_{0}\) contains all P-null sets. We are interested in the unique solution, denoted by \(X=(X_{t})_{0\leq t\leq1}\), of the following stochastic differential equation (SDE) of mean-reversion type:

$$ dX_{t}=\bigl[r+\alpha(X_{t},t,\varepsilon) \bigr]b(X_{t},t)\,dt+\varepsilon\sigma (X_{t},t) \,dB_{t},\quad 0\leq t\leq1 $$
(1.1)

with initial value \(X_{0}=x\in\mathbb{R}\), where r is a constant (assumed to be unknown), \(\varepsilon\in(0,1]\) is a parameter describing the smallness of the dispersion, \(\alpha:(x,t,\varepsilon)\in\mathbb{R}\times[0,1]\times (0,1]\mapsto\alpha(x,t,\varepsilon)\in\mathbb{R}\) is twice differentiable with respect to x and differentiable with respect to t, both \(b:(x,t)\in\mathbb{R}\times[0,1]\mapsto b(x,t)\in\mathbb{R}\) and \(\sigma:(x,t)\in\mathbb{R}\times[0,1]\mapsto\sigma(x,t)\in \mathbb{R}\setminus\{0\}\) are continuous with respect to t, and \((B_{t})_{0\leq t\leq1}\) is a one-dimensional \(\{\mathcal {F}_{t}\}\)-Brownian motion defined on the filtered probability space \((\Omega,\mathcal{F},P,\{\mathcal{F}_{t}\}_{0\leq t\leq1})\).

The above mean-reversion type SDE (with \(\varepsilon=1\)) has been widely used in modeling price dynamics in mathematical studies of finance and economics. One feature of this kind of model is that the function α in the drift coefficient satisfies a typical nonlinear parabolic partial differential equation, the Burgers equation; see, e.g., [1] and the references therein.

In the present paper, we are interested in the discrete-time version of (1.1) starting from the initial value \(X_{0}=x\). That is, for any fixed \(n\in\mathbb{N}\) and for any given partition \(0=t_{0}< t_{1}<\cdots<t_{i-1}<t_{i}<\cdots<t_{n}=1\), we define \(X_{t_{i}}\), \(i=1,2,\ldots,n\), via the Euler-Maruyama numerical scheme

$$ X_{t_{k}}=x+\sum^{k}_{i=1}\bigl[r+ \alpha (X_{t_{i-1}},t_{i-1},\varepsilon)\bigr]b(X_{t_{i-1}},t_{i-1}) \Delta t_{i}+\varepsilon\sum^{k}_{i=1} \sigma (X_{t_{i-1}},t_{i-1}) (B_{t_{i}}-B_{t_{i-1}}) $$

for \(1\le k\le n\), where \(\Delta t_{i}=t_{i}-t_{i-1}\), \(i=1,\ldots, n\).
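For readers who wish to experiment numerically, the following minimal Python sketch implements this Euler-Maruyama scheme. The concrete coefficients \(b(x,t)=x\), \(\sigma(x,t)\equiv1\), and \(\alpha(x,t,\varepsilon)=\varepsilon\tanh x\) are illustrative assumptions made only for the sketch (they are not taken from the paper), chosen so that the regularity conditions (C.1)-(C.4) of Section 2 hold.

```python
import numpy as np

def euler_maruyama(x0, r, b, sigma, alpha, eps, n, rng):
    """Simulate (1.1) on [0,1] by the Euler-Maruyama scheme with step 1/n."""
    dt = 1.0 / n
    X = np.empty(n + 1)
    X[0] = x0
    t = 0.0
    for i in range(n):
        dB = rng.normal(0.0, np.sqrt(dt))            # increment B_{t_i} - B_{t_{i-1}}
        drift = (r + alpha(X[i], t, eps)) * b(X[i], t)
        X[i + 1] = X[i] + drift * dt + eps * sigma(X[i], t) * dB
        t += dt
    return X

# Illustrative coefficients (assumptions for this sketch only):
b = lambda x, t: x                           # Lipschitz in x, so (C.1) holds
sigma = lambda x, t: 1.0                     # constant, so (C.3) and (C.4) hold
alpha = lambda x, t, eps: eps * np.tanh(x)   # alpha*b Lipschitz in x, cf. (C.2)

rng = np.random.default_rng(0)
path = euler_maruyama(x0=1.0, r=0.5, b=b, sigma=sigma, alpha=alpha,
                      eps=0.05, n=1000, rng=rng)
```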

Before we proceed further, let us give a brief overview of the topic considered in the present paper. Historically, the theory of SDEs has been very well developed since the seminal work of the great Japanese mathematician Kiyosi Itô in the mid 1940s. Since then, SDEs have had a profound impact on many areas of science and technology; cf. e.g. the excellent textbook [2] and the references therein. Nowadays, the theory of SDEs plays an important role in modeling uncertain and volatile systems arising in diverse subjects such as economics and finance, biology, chemistry, ecology, and physics. A fundamental issue is to estimate certain parameters (i.e., deterministic quantities) appearing in these random models from observations (or from experimental data). Estimating the drift parameter of SDEs is one such important topic. In the past decades, many papers have been devoted to drift parameter estimation for SDEs, and there are two main methods; see for instance Prakasa Rao [3], Liptser and Shiryaev [4], Kutoyants [5] for the maximum likelihood estimator (MLE) method, and Dorogovcev [6], Le Breton [7], Kasonga [8] for the least squares estimator (LSE) method. It turns out that the MLE and the LSE are asymptotically equivalent, and that the LSE enjoys the strong consistency property under some regularity conditions. Moreover, Prakasa Rao [9] gave a study of the asymptotic distribution. Further, Shimizu and Yoshida [10] considered a multidimensional diffusion process with jumps whose jump term is driven by a compound Poisson process; with drift coefficient \(\alpha(x,\theta)\) and diffusion coefficient \(b(x,\sigma)\), they studied estimation of the parameter \((\theta,\sigma)\) and, under certain assumptions, showed the consistency and asymptotic normality of an estimator. Shimizu [11] considered a similar setting and proposed an estimating function for a more complicated situation.

On the other hand, based on continuous-time or discrete-time observations, there are correspondingly two kinds of drift parameter estimates. Parameter estimation for diffusion processes based on continuous-time observations can be found in, e.g., Kutoyants [12], Kutoyants [13], Uchida and Yoshida [14], and Yoshida [15, 16]; in particular, Uchida and Yoshida [14] considered the evaluation problem of statistical models for diffusion processes driven by a small noise. As for parameter estimation based on discrete-time observations, Sørensen [17] gave an excellent survey of existing estimation techniques for stationary and ergodic diffusion processes observed at discrete points in time. It is more realistic and interesting to consider parameter estimation for diffusion processes based on discrete observations, since actual data can only be obtained discretely. We take up our study from this point of view.

Recently, parameter estimation for mean-reversion type SDEs has received a lot of attention. Long [18] investigated parameter estimation for discretely observed one-dimensional Ornstein-Uhlenbeck (O-U) processes driven by small Lévy noise. There it was assumed that the drift function \(b(x,\theta)=-\theta x\) is linear in both x and θ, while the driving Lévy process was \(L_{t}=aB_{t}+bZ_{t}\), where a and b are known constants, \(\{B_{t},t\geq0\}\) is a standard Brownian motion, and \(Z_{t}\) is an α-stable Lévy motion independent of \(\{B_{t},t\geq0\}\). In this framework, the author established the consistency and asymptotic normality of the proposed estimators. Long [19] investigated the parameter estimation problem for discrete observations driven by small Lévy noise and further discussed the case of the drift function \(b(x,\theta)=\theta b(x)\). Under some regularity conditions, the author obtained the consistency and rate of convergence of the least squares estimator as the small dispersion parameter \(\varepsilon\rightarrow0\) and \(n\rightarrow\infty\) simultaneously. In a similar framework, Ma [20] extended the results of Long [19] to the case where the driving noise is a general Lévy process. After that, Hu and Long [21] studied a least squares estimator for Ornstein-Uhlenbeck processes driven by α-stable motions; their main focus was the strong consistency and asymptotic distribution of the least squares estimator for generalized O-U processes. After obtaining the least squares estimator, the authors proved the strong consistency result and further obtained the rate of convergence of the estimator and an asymptotic distribution. Hu and Long [22] extended the results of [21] to the case of the drift function \(b(x,\theta)=\alpha_{0}-\theta_{0} x\); when \(\alpha_{0}=0\), the mean-reverting α-stable motion becomes an O-U process. Under certain conditions, using the least squares method, they showed the consistency property and obtained the asymptotic distribution.

There are many applications of small noise asymptotics to mathematical finance; see, e.g., Kunitomo and Takahashi [23], Long [19], Takahashi [24], Takahashi and Yoshida [25], Uchida and Yoshida [26], and Yoshida [27]. We mention particularly Kunitomo and Takahashi [23], who proposed a new methodology for the valuation problem of financial contingent claims when the underlying asset prices follow a general class of continuous Itô processes; furthermore, the authors gave two interesting examples of valuation problems of average options for interest rates.

In the present paper, we consider a fairly general class of stochastic processes solving SDEs of mean-reversion type (1.1). We aim to investigate the least squares estimator for the true value of r based on the (discrete) sampling data \((X_{t_{i}})^{n}_{i=1}\). Compared with the existing studies in the literature, a major difficulty in our case is the appearance of the term \(\alpha(X_{t},t,\varepsilon)\) in the drift coefficient of (1.1). The feature of our study is the use of the Girsanov transformation, which makes it possible to simplify the drift coefficient, and therefore the equation, under a family of probability measures indexed by the small dispersion parameter \(\varepsilon\in(0,1]\). As such, our consideration has a novel point: all the convergences involved are with respect to the ε-indexed family of probability measures, which, to our knowledge, has not been considered in the literature. For the Girsanov transformation for SDEs, the reader is referred to Øksendal [2]. In our case, we apply the Girsanov transformation to get rid of the term \(\alpha(X_{t},t,\varepsilon)\), which changes the original probability measure P into a family of (equivalent) probability measures \(Q_{\varepsilon}\). We then derive explicitly the least squares estimators, which also depend on ε and are therefore indexed by ε, for the simplified SDE. From this we prove, under certain conditions, the convergence of the least squares estimators under the family \(Q_{\varepsilon}\) to a limit which turns out to be the true value of the parameter r. We end by studying the convergence rate and deriving the asymptotic distribution under a new equivalent probability measure.

The paper is organized as follows. In the next section, we give some preliminaries, starting with the discretization of our equation (1.1), which will be used in later derivations and proofs. In Section 3, we focus on showing the convergence of the estimators with high frequency and small dispersion simultaneously. Section 4 is devoted to establishing the rate of the relevant convergence and the associated asymptotic distribution. The last section draws a conclusion.

2 Preliminaries and auxiliary results

In this section, we start with an introduction of the SDEs of mean-reversion type and then impose some assumptions on our SDE (1.1) to ensure the existence and uniqueness of the solution. Furthermore, we introduce the Girsanov transformation to simplify our equation (1.1). In order to obtain the consistency and asymptotics of our LSE \(\hat {r}_{n,\varepsilon}\), an explicit form of \(\hat{r}_{n,\varepsilon}\) is derived. In addition, we present some notation and preliminaries which will be needed in the later sections of the paper. Throughout the paper, we use the notation ‘\(\rightarrow_{Q}\)’ to denote ‘convergence in probability under Q’, the notation ‘\(\rightarrow_{P}\)’ to denote ‘convergence in probability under P’, and the notation ‘⇒’ to denote ‘convergence in distribution’.

We assume the coefficients of (1.1) fulfill the following conditions throughout the paper:

  1. (C.1) There exists a constant \(L>0\) such that \(|b(x,t)-b(y,t)|\leq L|x-y|\) for \(x,y\in\mathbb{R}\), \(t\in[0,1]\).

  2. (C.2) There exists a constant \(L''>0\) such that, for \(x,y\in\mathbb {R}\), \(t\in[0,1]\), and \(\varepsilon\in(0,1]\),

     $$\bigl\vert \alpha(x,t,\varepsilon)b(x,t)-\alpha (y,t,\varepsilon)b(y,t)\bigr\vert \leq L''|x-y|. $$

  3. (C.3) There exists a constant \(L'>0\) such that \(|\sigma(x,t)-\sigma (y,t)|\leq L'|x-y|\) for \(x,y\in\mathbb{R}\), \(t\in[0,1]\).

  4. (C.4) There exist constants \(K'>0\), \(m>0\) such that \(\sigma^{-2}(x,t)\leq K'(1+|x|^{m})\) for \(x\in\mathbb{R}\), \(t\in[0,1]\).

The above conditions guarantee (see, e.g., [1]) the existence of a unique solution of (1.1) for given initial data \(X_{0}=x\in\mathbb{R}\). The celebrated Girsanov transformation (also called the transformation of the drift) provides a very useful and efficient approach to simplifying (1.1). The transformation reads as follows.

Let \(u:\mathbb{R}\times[0,1]\to\mathbb{R}\) satisfy the following Novikov condition:

$$\mathbb{E} \biggl[\exp \biggl(\frac{1}{2} \int_{0}^{t}\bigl\vert u(X_{s},s)\bigr\vert ^{2}\,ds \biggr) \biggr] < \infty,\quad \forall t\in[0,1]. $$

Then, by the Girsanov theorem (cf. e.g. Theorem IV in Section 4.1 of [28]),

$$\exp \biggl( \int^{t}_{0}u(X_{s},s)\,dB_{s}- \frac{1}{2} \int^{t}_{0}\bigl[ u(X_{s},s) \bigr]^{2}\,ds \biggr), \quad t\in[0,1], $$

is an \(\{\mathcal{F}_{t}\}_{t\in[0,1]}\)-martingale. Furthermore, for \(t\in [0,1]\), we define

$$Q_{t}:=\exp \biggl( \int^{t}_{0}u(X_{s},s)\,dB_{s}- \frac{1}{2} \int^{t}_{0}\bigl[u(X_{s},s) \bigr]^{2}\,ds \biggr)\cdot P $$

or equivalently, in terms of the Radon-Nikodym derivative,

$$\frac{dQ_{t}}{dP}=\exp \biggl( \int^{t}_{0}u(X_{s},s)\,dB_{s} -\frac{1}{2} \int^{t}_{0}\bigl[u(X_{s},s) \bigr]^{2}\,ds \biggr). $$

Then taking \(Q_{1}\) in particular, we see that

$$\tilde{B}_{t}:=B_{t}- \int^{t}_{0}u(X_{s},s)\,ds, \quad 0\le t \le1, $$

is an \(\{\mathcal{F}_{t}\}_{t\in[0,1]}\)-Brownian motion under the probability \(Q_{1}\). Moreover, the solution process \(X_{t}\) of (1.1) solves the following SDE:

$$ dX_{t}=\bigl[\bigl(r+\alpha(X_{t},t, \varepsilon)\bigr)b(X_{t},t)+\varepsilon\sigma (X_{t},t)u(X_{t},t) \bigr]\,dt +\varepsilon\sigma(X_{t},t)\,d\tilde{B}_{t}, \quad t \in[0,1]. $$
(2.1)

In order to get rid of the term \(\alpha(X_{t},t,\varepsilon)\) for \(\varepsilon\in(0,1]\), we can specify u such that

$$\alpha(x,t,\varepsilon)b(x,t)+\varepsilon\sigma(x,t)u(x,t)=0, $$

that is, \(u(x,t)=-\alpha(x,t,\varepsilon)b(x,t)/(\varepsilon\sigma(x,t))\), which is well defined since σ never vanishes. With this choice of u, and writing \(\hat{B}^{\varepsilon}\) for the resulting Brownian motion B̃ (constructed explicitly in (2.7) below), equation (2.1) becomes

$$ dX_{t}=rb(X_{t},t)\,dt+\varepsilon \sigma(X_{t},t)\, d\hat{B}^{\varepsilon}_{t}. $$
(2.2)

Thus, we set

$$u_{\varepsilon}(x,t):=-u(x,t)=\frac{\alpha(x,t,\varepsilon )b(x,t)}{\varepsilon\sigma(x,t)} $$

so that

$$ u_{\varepsilon}(X_{t},t)=\frac{\alpha(X_{t},t,\varepsilon )b(X_{t},t)}{\varepsilon\sigma(X_{t},t)}. $$
(2.3)
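For orientation, in the special case \(\alpha(x,t,\varepsilon)=\varepsilon\alpha(x,t)\) (which is the setting adopted in Section 4 below), the function \(u_{\varepsilon}\) loses its dependence on ε:

$$u_{\varepsilon}(x,t)=\frac{\varepsilon\alpha(x,t)b(x,t)}{\varepsilon \sigma(x,t)}=\frac{\alpha(x,t)b(x,t)}{\sigma(x,t)}, $$

so the measures \(Q_{\varepsilon}\) constructed below then all coincide.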

The Novikov condition for this \(u_{\varepsilon}\) is then the following:

$$ E \biggl[\exp \biggl(\frac{1}{2} \int^{t}_{0}\biggl\vert \frac{\alpha (X_{s},s,\varepsilon)b(X_{s},s)}{\varepsilon\sigma(X_{s},s)}\biggr\vert ^{2}\,ds \biggr) \biggr] < \infty, \quad t\in[0,1], $$
(2.4)

which is a condition imposed on the coefficients of the original equation (1.1) and which we assume from now on. Then we define

$$ M_{t}^{\varepsilon}=\exp \biggl(- \int^{t}_{0}u_{\varepsilon}(X_{s},s) \,dB_{s}-\frac {1}{2} \int^{t}_{0}u_{\varepsilon}^{2}(X_{s},s) \,ds \biggr), \quad t\in[0,1], $$
(2.5)

By the Novikov condition (2.4), \(M^{\varepsilon}=(M^{\varepsilon}_{t})_{t\in[0,1]}\) is an \(\{\mathcal{F}_{t}\}_{t\in [0,1]}\)-martingale. For each \(\varepsilon\in(0,1]\), let \(Q_{\varepsilon}\) be the probability measure on \(\mathcal{F}_{1}\) defined by

$$ dQ_{\varepsilon}:=M^{\varepsilon}_{1}\, dP. $$
(2.6)
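To make the change of measure (2.6) concrete: along a path simulated under P, one can accumulate the discretized stochastic exponential (2.5) and use \(M^{\varepsilon}_{1}\) as an importance weight, so that P-averages of \(M^{\varepsilon}_{1}\) times a functional of the path approximate the corresponding \(Q_{\varepsilon}\)-expectations. Below is a minimal sketch, reusing the illustrative coefficients of the Section 1 snippet and discretizing the stochastic integral by the usual left-point rule.

```python
u_eps = lambda x, t, eps: alpha(x, t, eps) * b(x, t) / (eps * sigma(x, t))  # (2.3)

def path_and_weight(x0, r, eps, n, rng):
    """Simulate (1.1) under P and return the path together with the
    discretized Girsanov weight M_1^eps from (2.5)."""
    dt = 1.0 / n
    X = np.empty(n + 1)
    X[0] = x0
    log_M, t = 0.0, 0.0
    for i in range(n):
        dB = rng.normal(0.0, np.sqrt(dt))
        u = u_eps(X[i], t, eps)
        log_M += -u * dB - 0.5 * u ** 2 * dt       # log-increment of (2.5)
        drift = (r + alpha(X[i], t, eps)) * b(X[i], t)
        X[i + 1] = X[i] + drift * dt + eps * sigma(X[i], t) * dB
        t += dt
    return X, np.exp(log_M)

# E_{Q_eps}[f(X)] is then approximated by the weighted P-average
# np.mean([M * f(X) for X, M in samples]).
```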

Then we define

$$ \hat{B}^{\varepsilon}_{t}:= \int^{t}_{0}u_{\varepsilon}(X_{s},s) \,ds+B_{t}, $$
(2.7)

By the Girsanov theorem, \(\hat{B}^{\varepsilon}=(\hat{B}^{\varepsilon}_{t})_{t\in[0,1]}\) is an \(\{\mathcal{F}_{t}\}_{t\in[0,1]}\)-Brownian motion with respect to the probability measure \(Q_{\varepsilon}\). Then we arrive at equation (2.2) for \(X_{t}\), that is,

$$dX_{t}=rb(X_{t},t)\,dt+\varepsilon\sigma(X_{t},t) \,d\hat{B}^{\varepsilon}_{t},\quad t\in[0,1], $$

on which we will focus from now on. We denote the true value of the parameter r by \(r_{0}\) and the least squares estimator of r by r̂. As mentioned before, we focus on the investigation of the least squares estimator for the true value \(r_{0}\) based on the (discrete) sampling data \((X_{t_{i}})^{n}_{i=1}\) obtained from the Euler-Maruyama numerical scheme for the Cauchy problem of equation (2.2) with initial value \(X_{0}=x\). For simplicity, we assume that the process \(X_{t}\) is observed at the regularly spaced time points \(t_{i}=\frac{i}{n}\), \(i=0,1,2,\ldots,n\), where \(n\in\mathbb{N}\) is arbitrarily fixed. That is,

$$ X_{t_{k}}=x+\sum^{k}_{i=1}rb(X_{t_{i-1}},t_{i-1}) \Delta t_{i}+\varepsilon\sum^{k}_{i=1} \sigma(X_{t_{i-1}},t_{i-1}) \bigl(\hat {B}^{\varepsilon}_{t_{i}}- \hat{B}^{\varepsilon}_{t_{i-1}}\bigr), \quad 1\le k\le n. $$

Let us start with the use of the least square method to get a consistent estimator. We first discretize (2.2) as follows:

$$ X_{t_{i}}-X_{t_{i-1}}=rb(X_{t_{i-1}},t_{i-1})\Delta t_{i}+\varepsilon\sigma(X_{t_{i-1}},t_{i-1})\Delta\hat {B}^{\varepsilon}_{t_{i}}, $$

where \(\Delta t_{i}=t_{i}-t_{i-1}=\frac{1}{n}\) and \(\Delta\hat{B}^{\varepsilon}_{t_{i}}=\hat{B}^{\varepsilon}_{t_{i}}-\hat {B}^{\varepsilon}_{t_{i-1}}\) is the increment of the Brownian motion \(\hat{B}^{\varepsilon}\). Then

$$ \frac{X_{t_{i}}-X_{t_{i-1}}-rb(X_{t_{i-1}},t_{i-1})\Delta t_{i}}{\varepsilon\sigma(X_{t_{i-1}},t_{i-1})}=\Delta\hat {B}^{\varepsilon}_{t_{i}}. $$

Since, under the new probability measure \(Q_{\varepsilon}\) on \((\Omega,\mathcal {F})\), the increments \(\Delta\hat{B}^{\varepsilon}_{t_{i}}\) are independent and normally distributed with mean zero, we are led to minimize the sum of the squared standardized residuals, that is, the following contrast function:

$$ \rho_{n,\varepsilon}(r):=\sum^{n}_{i=1} \biggl\vert \frac {X_{t_{i}}-X_{t_{i-1}}-rb(X_{t_{i-1}},t_{i-1})\Delta t_{i}}{\varepsilon\sigma(X_{t_{i-1}},t_{i-1})}\biggr\vert ^{2}. $$

In order to get the least squares estimator \(\hat{r}_{n,\varepsilon}\), we solve

$$ \frac{\partial\rho_{n,\varepsilon}(r)}{\partial r}=0, $$
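Written out, since \(\Delta t_{i}=\frac{1}{n}\), this equation reads (a short intermediate step recorded for the reader's convenience)

$$ \frac{\partial\rho_{n,\varepsilon}(r)}{\partial r} =-\frac{2}{n\varepsilon^{2}}\sum^{n}_{i=1} b(X_{t_{i-1}},t_{i-1})\sigma^{-2}(X_{t_{i-1}},t_{i-1}) \biggl(X_{t_{i}}-X_{t_{i-1}}-\frac{r}{n}b(X_{t_{i-1}},t_{i-1}) \biggr)=0, $$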

from which we get the solution, denoted by \(\hat{r}_{n,\varepsilon}\), given by

$$ \hat{r}_{n,\varepsilon}:=\frac{\sum^{n}_{i=1} b(X_{t_{i-1}},t_{i-1})(X_{t_{i}}-X_{t_{i-1}})\sigma ^{-2}(X_{t_{i-1}},t_{i-1})}{n^{-1}\sum^{n}_{i=1}b^{2}(X_{t_{i-1}},t_{i-1})\sigma^{-2}(X_{t_{i-1}},t_{i-1})}. $$
(2.8)
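In code, (2.8) is a ratio of two weighted sums. A minimal Python sketch, continuing the illustrative setting of the earlier snippets (here X is an array of \(n+1\) observations on the grid \(t_{i}=i/n\)):

```python
def lse_r(X, b, sigma, n):
    """Least squares estimator (2.8) from regularly spaced observations."""
    t = np.arange(n) / n                       # t_0, ..., t_{n-1}
    bX = np.array([b(X[i], t[i]) for i in range(n)])
    w = np.array([sigma(X[i], t[i]) ** -2 for i in range(n)])
    num = np.sum(bX * np.diff(X) * w)          # sum b*(X_{t_i}-X_{t_{i-1}})*sigma^{-2}
    den = np.sum(bX ** 2 * w) / n              # n^{-1} * sum b^2 * sigma^{-2}
    return num / den
```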

With all these in hand, we now present several auxiliary results which are needed for our later considerations.

Definition 2.1

Let \(X^{0}_{t}\) be the solution of the following ordinary differential equation under the true value of the drift parameter:

$$ dX^{0}_{t}=r_{0}b \bigl(X^{0}_{t},t\bigr)\,dt, \qquad X^{0}_{0}=x_{0}, $$
(2.9)

where \(r_{0}\) denotes the true value of r; throughout we take \(x_{0}=x\).

The following lemma is a reformulation of Theorem 7.3 of [29] (see also [28] or [2]), which is nothing but the Burkholder-Davis-Gundy inequality in our setting.

Lemma 2.1

Let \(g\in\mathcal {L}^{2}(\mathbb{R}_{+};\mathbb{R})\). Define, for \(0\leq t\leq T\),

$$ x(t)= \int^{t}_{0}g(s)\,dB_{s} $$

and

$$ A(t)= \int^{t}_{0}\bigl\vert g(s)\bigr\vert ^{2}\,ds. $$

Then, for every \(p>0\), there exist universal positive constants \(c_{p}\), \(C_{p}\) (depending only on p), such that

$$ c_{p}E\bigl\vert A(t)\bigr\vert ^{\frac{p}{2}}\leq E \Bigl(\sup _{0\leq s\leq t}\bigl\vert x(s)\bigr\vert ^{p} \Bigr)\leq C_{p}E\bigl\vert A(t)\bigr\vert ^{\frac{p}{2}} $$

for all \(t\geq0\). In particular, one may take \(c_{p}=(p/2)^{p}\), \(C_{p}=(32/p)^{p/2}\), if \(0< p<2\); \(c_{p}=1\), \(C_{p}=4\), if \(p=2\); \(c_{p}=(2p)^{-p/2}\), \(C_{p}=[p^{p+1}/2(p-1)^{p-1}]^{p/2}\), if \(p>2\).

Let us next show the following two lemmas, which will be used in Sections 3 and 4.

Lemma 2.2

Under conditions (C.1), (C.2), (C.3), we have

$$ \bigl\vert X_{t}-X^{0}_{t}\bigr\vert \leq \varepsilon e^{L\vert r_{0}\vert t}\sup_{\delta\in[0,t]}\biggl\vert \int^{\delta}_{0}\sigma (X_{s},s)\,d \hat{B}^{\varepsilon}_{s}\biggr\vert . $$
(2.10)

Proof

We have

$$ X^{0}_{t}=x_{0}+r_{0} \int^{t}_{0}b\bigl(X^{0}_{s},s \bigr)\,ds. $$
(2.11)

From (2.2) we have

$$ X_{t_{i}}-X_{t_{i-1}}=r_{0} \int^{t_{i}}_{t_{i-1}}b(X_{s},s)\,ds+\varepsilon \int^{t_{i}} _{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}^{\varepsilon}_{s}. $$
(2.12)

Combining (2.11) and (2.12), we obtain

$$ X_{t}-X^{0}_{t}=r_{0} \int^{t}_{0}\bigl(b(X_{s},s)-b \bigl(X^{0}_{s},s\bigr)\bigr)\,ds+\varepsilon \int ^{t}_{0}\sigma(X_{s},s)\,d \hat{B}^{\varepsilon}_{s}. $$
(2.13)

By condition (C.1), we get

$$ \bigl\vert X_{t}-X^{0}_{t}\bigr\vert \leq L \vert r_{0}\vert \int^{t}_{0}\bigl\vert X_{s}-X^{0}_{s} \bigr\vert \,ds+\varepsilon\biggl\vert \int^{t}_{0}\sigma (X_{s},s)\,d \hat{B}^{\varepsilon}_{s}\biggr\vert . $$
(2.14)

By the Gronwall inequality, we get

$$ \bigl\vert X_{t}-X^{0}_{t}\bigr\vert \leq \varepsilon e^{L\vert r_{0}\vert t}\sup_{\delta\in[0,t]}\biggl\vert \int^{\delta}_{0}\sigma (X_{s},s)\,d \hat{B}^{\varepsilon}_{s}\biggr\vert . $$

 □

Lemma 2.3

Under conditions (C.1), (C.2), (C.3), we have

$$ \sup_{0\leq t\leq1}\bigl|X_{t}-X^{0}_{t}\bigr| \rightarrow_{Q_{\varepsilon}}0\quad \textit{as } \varepsilon\rightarrow0. $$
(2.15)

Proof

By Lemma 2.2,

$$ \sup_{t\in[0,1]}\bigl\vert X_{t}-X^{0}_{t} \bigr\vert \leq\varepsilon e^{L\vert r_{0}\vert }\sup_{\delta\in[0,1]}\biggl\vert \int^{\delta}_{0}\sigma (X_{s},s)\,d \hat{B}^{\varepsilon}_{s}\biggr\vert . $$
(2.16)

Let \(\eta>0\). By the Markov inequality and Lemma 2.1, we have

$$\begin{aligned}& Q_{\varepsilon}\biggl(\varepsilon e^{L\vert r_{0}\vert }\sup_{\delta\in[0,1]} \biggl\vert \int^{\delta}_{0}\sigma (X_{s},s)\,d \hat{B}^{\varepsilon}_{s}\biggr\vert >\eta\biggr) \\& \quad \leq \eta^{-1}e^{L\vert r_{0}\vert }\varepsilon E_{Q_{\varepsilon}} \biggl[\sup _{\delta\in[0,1]}\biggl\vert \int^{\delta}_{0}\sigma(X_{s},s)\,d\hat {B}^{\varepsilon}_{s}\biggr\vert \biggr] \\& \quad \leq 4\sqrt{2}\eta^{-1}e^{L\vert r_{0}\vert }\varepsilon E_{Q_{\varepsilon}} \biggl[ \biggl( \int^{1}_{0}\bigl\vert \sigma(X_{s},s) \bigr\vert ^{2}\,ds \biggr)^{\frac{1}{2}} \biggr]. \end{aligned}$$
(2.17)

Since \(\sigma(x,t)\) is Lipschitz continuous in x by (C.3) and continuous with respect to t, it satisfies the following linear growth condition:

$$ \bigl\vert \sigma(x,t)\bigr\vert ^{2}\leq K\bigl(1+\vert x \vert ^{2}\bigr), $$

where \(K>0\) is a constant. Then we have

$$\begin{aligned}& Q_{\varepsilon}\biggl(\varepsilon e^{L\vert r_{0}\vert }\sup_{\delta\in[0,1]} \biggl\vert \int^{\delta}_{0}\sigma (X_{s},s)\,d \hat{B}^{\varepsilon}_{s}\biggr\vert >\eta\biggr) \\& \quad \leq4\sqrt{2} \eta^{-1}e^{L\vert r_{0}\vert }\varepsilon \biggl( \int^{1}_{0}K\bigl(1+E_{Q_{\varepsilon}} \vert X_{s}\vert ^{2}\bigr)\,ds \biggr)^{\frac{1}{2}}. \end{aligned}$$
(2.18)

By the Hölder inequality, the Itô isometry, and the Gronwall inequality, we get \(E_{Q_{\varepsilon}}|X_{s}|^{2}\leq C\), where \(C>0\) is a constant. Then we have

$$ Q_{\varepsilon}\biggl(\varepsilon e^{L|r_{0}|}\sup_{\delta\in[0,1]}\biggl| \int^{\delta}_{0}\sigma (X_{s},s)\,d \hat{B}^{\varepsilon}_{s}\biggr|>\eta\biggr) \leq4\sqrt{2} \eta^{-1}e^{L|r_{0}|}\varepsilon \bigl[K(1+C) \bigr]^{\frac{1}{2}}. $$
(2.19)

The above estimate implies that

$$ \sup_{0\leq t\leq1}\bigl\vert X_{t}-X^{0}_{t} \bigr\vert \rightarrow_{Q_{\varepsilon}}0\quad \mbox{as } \varepsilon\rightarrow0. $$

 □

3 Consistency of the least squares estimator

This section is devoted to proving the consistency of the least squares estimator \(\hat{r}_{n,\varepsilon}\). In the following, we first derive an explicit decomposition for \(\hat{r}_{n,\varepsilon}\). Based on this, we will show the consistency of the LSE \(\hat{r}_{n,\varepsilon}\). Note from (2.2) that we have

$$ X_{t_{i}}-X_{t_{i-1}}=r_{0} \int^{t_{i}}_{t_{i-1}}b(X_{s},s)\,ds+\varepsilon \int^{t_{i}} _{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}^{\varepsilon}_{s}. $$
(3.1)

Combining the above with (2.8) then yields the following decomposition:

$$\begin{aligned} \hat{r}_{n,\varepsilon} =&\frac{\sum^{n}_{i=1} b(X_{t_{i-1}},t_{i-1})(X_{t_{i}}-X_{t_{i-1}})\sigma ^{-2}(X_{t_{i-1}},t_{i-1})}{n^{-1}\sum^{n}_{i=1}b^{2}(X_{t_{i-1}},t_{i-1})\sigma^{-2}(X_{t_{i-1}},t_{i-1})} \\ =&\frac{r_{0}\sum^{n}_{i=1} b(X_{t_{i-1}},t_{i-1})\sigma^{-2}(X_{t_{i-1}},t_{i-1})\int ^{t_{i}}_{t_{i-1}}b(X_{s},s)\,ds}{n^{-1}\sum^{n}_{i=1}b^{2}(X_{t_{i-1}},t_{i-1})\sigma^{-2}(X_{t_{i-1}},t_{i-1})} \\ &{}+ \frac{\varepsilon\sum^{n}_{i=1} b(X_{t_{i-1}},t_{i-1})\sigma^{-2}(X_{t_{i-1}},t_{i-1})\int^{t_{i}} _{t_{i-1}}\sigma(X_{s},s)\,d\hat{B}^{\varepsilon}_{s}}{n^{-1}\sum^{n}_{i=1}b^{2}(X_{t_{i-1}},t_{i-1})\sigma^{-2}(X_{t_{i-1}},t_{i-1})} \\ =&r_{0}+\frac{r_{0}\sum^{n}_{i=1} b(X_{t_{i-1}},t_{i-1})\sigma^{-2}(X_{t_{i-1}},t_{i-1})\int ^{t_{i}}_{t_{i-1}}(b(X_{s},s)-b(X_{t_{i-1}},t_{i-1}))\,ds}{n^{-1}\sum^{n}_{i=1}b^{2}(X_{t_{i-1}},t_{i-1})\sigma^{-2}(X_{t_{i-1}},t_{i-1})} \\ &{} +\frac{\varepsilon\sum^{n}_{i=1} b(X_{t_{i-1}},t_{i-1})\sigma^{-2}(X_{t_{i-1}},t_{i-1})\int^{t_{i}} _{t_{i-1}}\sigma(X_{s},s)\,d\hat{B}^{\varepsilon}_{s}}{n^{-1}\sum^{n}_{i=1}b^{2}(X_{t_{i-1}},t_{i-1})\sigma^{-2}(X_{t_{i-1}},t_{i-1})} \\ =:& r_{0}+\frac{\phi_{2}(n,\varepsilon)}{\phi_{1}(n,\varepsilon)}+\frac {\phi_{3}(n,\varepsilon)}{\phi_{1}(n,\varepsilon)}. \end{aligned}$$

Our first main result is the following theorem.

Theorem 3.1

We have \(\hat{r}_{n,\varepsilon}\rightarrow_{Q_{\varepsilon}} r_{0}\), as \(n\rightarrow\infty\) and \(\varepsilon\rightarrow0\) with \(\varepsilon n^{\frac{1}{2}}\rightarrow0\).

To prove Theorem 3.1, the following lemmas are needed. We shall study the asymptotic behavior of \(\phi_{1}(n,\varepsilon)\), \(\phi_{2}(n,\varepsilon)\), and \(\phi_{3}(n,\varepsilon)\), respectively.

Lemma 3.1

Under conditions (C.1)-(C.4), we have

$$\phi_{1}(n,\varepsilon)\rightarrow_{Q_{\varepsilon}} \int^{1}_{0}\sigma ^{-2} \bigl(X^{0}_{t},t\bigr)b^{2}\bigl(X^{0}_{t},t \bigr)\,dt $$

as both \(n\rightarrow\infty\) and \(\varepsilon\rightarrow0\).

Proof

We first have

$$\begin{aligned} \phi_{1}(n,\varepsilon) =&n^{-1}\sum ^{n}_{i=1}b^{2}(X_{t_{i-1}},t_{i-1}) \sigma^{-2}(X_{t_{i-1}},t_{i-1}) \\ =&n^{-1}\sum^{n}_{i=1}b^{2} \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\sigma ^{-2} \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr) \\ &{} +n^{-1}\sum^{n}_{i=1} \bigl(b^{2}(X_{t_{i-1}},t_{i-1})-b^{2} \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr)\sigma ^{-2}(X_{t_{i-1}},t_{i-1}) \\ &{} +n^{-1}\sum^{n}_{i=1}b^{2} \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr) \bigl(\sigma ^{-2}(X_{t_{i-1}},t_{i-1})-\sigma^{-2} \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr) \\ =:& \phi_{1,1}(n,\varepsilon)+\phi_{1,2}(n,\varepsilon)+\phi _{1,3}(n,\varepsilon). \end{aligned}$$

For \(\phi_{1,1}(n,\varepsilon)\), which is deterministic, the definition of the Riemann integral gives, as \(n\rightarrow\infty\),

$$\phi_{1,1}(n,\varepsilon)\rightarrow_{Q_{\varepsilon}} \int ^{1}_{0}b^{2}\bigl(X^{0}_{s},s \bigr)\sigma^{-2}\bigl(X^{0}_{s},s\bigr)\,ds. $$

For \(\phi_{1,2}(n,\varepsilon)\), we have the following derivation:

$$\begin{aligned} \bigl\vert \phi_{1,2}(n,\varepsilon)\bigr\vert \leq& n^{-1}\sum^{n}_{i=1} \bigl\vert b^{2}(X_{t_{i-1}},t_{i-1})-b^{2} \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert \sigma ^{-2}(X_{t_{i-1}},t_{i-1}) \\ \leq& n^{-1}\sum^{n}_{i=1}K' \bigl(1+\vert X_{t_{i-1}}\vert ^{m}\bigr)\bigl\vert b(X_{t_{i-1}},t_{i-1}) \\ & {}-b\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert \bigl\vert b(X_{t_{i-1}},t_{i-1})+b\bigl(X^{0}_{t_{i-1}},t_{i-1} \bigr)\bigr\vert \\ \leq& n^{-1}\sum^{n}_{i=1}K' \bigl(1+\vert X_{t_{i-1}}\vert ^{m}\bigr) \bigl(\bigl\vert b(X_{t_{i-1}},t_{i-1})-b\bigl(X^{0}_{t_{i-1}},t_{i-1} \bigr)\bigr\vert ^{2} \\ & {}+2\bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1} \bigr)\bigr\vert \bigl\vert b(X_{t_{i-1}},t_{i-1})-b \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert \bigr) \\ \leq& n^{-1}K'\sum^{n}_{i=1} \bigl\vert b(X_{t_{i-1}},t_{i-1})-b\bigl(X^{0}_{t_{i-1}},t_{i-1} \bigr)\bigr\vert ^{2} \\ & {}+2n^{-1}K'\sum^{n}_{i=1} \bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert \bigl\vert b(X_{t_{i-1}},t_{i-1})-b\bigl(X^{0}_{t_{i-1}},t_{i-1} \bigr)\bigr\vert \\ & {}+n^{-1}K'\sum^{n}_{i=1} \vert X_{t_{i-1}}\vert ^{m}\bigl\vert b(X_{t_{i-1}},t_{i-1})-b \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert ^{2} \\ & {}+2n^{-1}K'\sum^{n}_{i=1} \bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert \vert X_{t_{i-1}}\vert ^{m}\bigl\vert b(X_{t_{i-1}},t_{i-1})-b\bigl(X^{0}_{t_{i-1}},t_{i-1} \bigr)\bigr\vert \\ :=& \phi_{1,2}^{1}(n,\varepsilon)+\phi_{1,2}^{2}(n, \varepsilon)+\phi _{1,2}^{3}(n,\varepsilon)+ \phi_{1,2}^{4}(n,\varepsilon). \end{aligned}$$

By condition (C.1), we have

$$\phi_{1,2}^{1}(n,\varepsilon)\leq n^{-1}L^{2}K' \sum^{n}_{i=1}\bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}}\bigr\vert ^{2} \leq L^{2}K' \Bigl(\sup_{0\leq t\leq1}\bigl\vert X_{t}-X^{0}_{t}\bigr\vert \Bigr)^{2}. $$

By Lemma 2.3, we get \(\phi_{1,2}^{1}(n,\varepsilon)\rightarrow_{Q_{\varepsilon}}0\) as \(\varepsilon\rightarrow0\). Next, we have

$$\begin{aligned} \phi_{1,2}^{2}(n,\varepsilon) \leq&2n^{-1}LK' \sum^{n}_{i=1}\bigl\vert b \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert \bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}}\bigr\vert \\ \leq&2LK'\sup_{0\leq t\leq1}\bigl\vert X_{t}-X^{0}_{t}\bigr\vert \int^{1}_{0}\bigl\vert b\bigl(X_{t}^{0},t \bigr)\bigr\vert \,dt. \end{aligned}$$

By Lemma 2.3, we have \(\phi_{1,2}^{2}(n,\varepsilon)\rightarrow_{Q_{\varepsilon}}0\) as both \(n\rightarrow\infty\) and \(\varepsilon\rightarrow0\). For \(\phi _{1,2}^{3}(n,\varepsilon)\), since

$$ \vert X_{t_{i-1}}\vert ^{m}\leq\bigl(\bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}}\bigr\vert +\bigl\vert X^{0}_{t_{i-1}}\bigr\vert \bigr)^{m} \leq2^{m}\bigl(\bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}} \bigr\vert ^{m}+\bigl\vert X^{0}_{t_{i-1}}\bigr\vert ^{m}\bigr), $$
(3.2)

holds for any \(m>0\), by condition (C.1) we have

$$\phi_{1,2}^{3}(n,\varepsilon)\leq n^{-1}K'L^{2}2^{m} \sum^{n}_{i=1}\bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}}\bigr\vert ^{m+2} +n^{-1}K'L^{2}2^{m}\sum ^{n}_{i=1}\bigl\vert X^{0}_{t_{i-1}} \bigr\vert ^{m}\bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}} \bigr\vert ^{2}. $$

By Lemma 2.3, we obtain \(\phi_{1,2}^{3}(n,\varepsilon)\rightarrow_{Q_{\varepsilon}}0\) as \(n\rightarrow\infty\) and \(\varepsilon\rightarrow0\), and by the same argument we get \(\phi_{1,2}^{4}(n,\varepsilon)\rightarrow_{Q_{\varepsilon}}0\) as both \(n\rightarrow\infty\) and \(\varepsilon\rightarrow0\). Thus we get \(\phi_{1,2}(n,\varepsilon)\rightarrow_{Q_{\varepsilon}}0\) as both \(n\rightarrow\infty\) and \(\varepsilon\rightarrow0\). For \(\phi_{1,3}(n,\varepsilon)\), by conditions (C.3), (C.4), and (3.2), we have

$$\begin{aligned} \bigl\vert \phi_{1,3}(n,\varepsilon)\bigr\vert \leq& n^{-1}\sum^{n}_{i=1} \bigl\vert \sigma^{-2}(X_{t_{i-1}},t_{i-1})-\sigma ^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert b^{2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr) \\ \leq& n^{-1}\sum^{n}_{i=1} \sigma^{-2}(X_{t_{i-1}},t_{i-1})\sigma ^{-2} \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr) \\ &{}\times\bigl\vert \sigma^{2}(X_{t_{i-1}},t_{i-1})-\sigma ^{2} \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert b^{2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr) \\ \leq& n^{-1}2K'L'K\sum^{n}_{i=1} \bigl(1+\vert X_{t_{i-1}}\vert ^{m}\bigr)\bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}}\bigr\vert \sigma ^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1} \bigr)b^{2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr) \\ \leq& n^{-1}2K'L'K\sum^{n}_{i=1} \bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}}\bigr\vert \sigma ^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1} \bigr)b^{2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr) \\ & {} +n^{-1}2K'L'K2^{m}\sum ^{n}_{i=1}\bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}} \bigr\vert ^{m+1}\sigma ^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1} \bigr)b^{2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr) \\ & {} +n^{-1}2K'L'K2^{m}\sum ^{n}_{i=1}\bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}} \bigr\vert \bigl\vert X^{0}_{t_{i-1}}\bigr\vert ^{m}\sigma ^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1} \bigr)b^{2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr) \\ \leq&2K'L'K \Bigl(\sup_{0\leq t\leq1}\bigl\vert X_{t}-X^{0}_{t}\bigr\vert +2^{m}\sup _{0\leq t\leq1}\bigl\vert X_{t}-X^{0}_{t} \bigr\vert ^{m+1} \Bigr)n^{-1} \\ &{}\times\sum ^{n}_{i=1}\sigma ^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1} \bigr)b^{2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr) \\ & {} +2K'L'K2^{m}\sup_{0\leq t\leq1}\bigl\vert X_{t}-X^{0}_{t}\bigr\vert n^{-1} \sum^{n}_{i=1}\bigl\vert X^{0}_{t_{i-1}}\bigr\vert ^{m}\sigma ^{-2} \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)b^{2} \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr). \end{aligned}$$

We then get \(\phi_{1,3}(n,\varepsilon)\rightarrow_{Q_{\varepsilon}}0\) as both \(n\rightarrow\infty\) and \(\varepsilon\rightarrow0\). □

Next, we turn to the study of the asymptotic behavior of \(\phi _{2}(n,\varepsilon)\).

Lemma 3.2

As both \(n\rightarrow\infty\) and \(\varepsilon\rightarrow0\),

$$\phi_{2}(n,\varepsilon)\rightarrow_{Q_{\varepsilon}}0. $$

Proof

From (2.12), we first derive the following bound:

$$\begin{aligned}& \vert X_{t}-X_{t_{i-1}}\vert \\& \quad \leq \vert r_{0}\vert \int ^{t}_{t_{i-1}}\bigl(\bigl\vert b(X_{s},s)-b(X_{t_{i-1}},t_{i-1})\bigr\vert +\bigl\vert b(X_{t_{i-1}},t_{i-1})\bigr\vert \bigr)\,ds+\varepsilon \biggl\vert \int^{t} _{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}^{\varepsilon}_{s}\biggr\vert \\& \quad \leq \vert r_{0}\vert L \int ^{t}_{t_{i-1}}\vert X_{s}-X_{t_{i-1}} \vert \,ds+n^{-1}\vert r_{0}\vert \bigl\vert b(X_{t_{i-1}},t_{i-1})\bigr\vert +\varepsilon \sup _{t_{i-1}\leq t\leq t_{i}}\biggl\vert \int^{t}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}^{\varepsilon}_{s}\biggr\vert . \end{aligned}$$

By Gronwall’s inequality, we obtain

$$ \vert X_{t}-X_{t_{i-1}}\vert \leq e^{\vert r_{0}\vert L(t-t_{i-1})} \biggl[n^{-1}\vert r_{0}\vert \bigl\vert b(X_{t_{i-1}},t_{i-1})\bigr\vert +\varepsilon\sup _{t_{i-1}\leq t\leq t_{i}}\biggl\vert \int^{t}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}^{\varepsilon}_{s} \biggr\vert \biggr]. $$

This then yields

$$ \sup_{t_{i-1}\leq t\leq t_{i}}\vert X_{t}-X_{t_{i-1}}\vert \leq e^{\vert r_{0}\vert Ln^{-1}} \biggl[n^{-1}\vert r_{0}\vert \bigl\vert b(X_{t_{i-1}},t_{i-1})\bigr\vert +\varepsilon \sup _{t_{i-1}\leq t\leq t_{i}}\biggl\vert \int^{t}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}^{\varepsilon}_{s} \biggr\vert \biggr]. $$

By conditions (C.1) and (C.4), we have

$$\begin{aligned} \bigl\vert \phi_{2}(n,\varepsilon)\bigr\vert \leq&\vert r_{0}\vert \sum^{n}_{i=1}K' \bigl(1+\vert X_{t_{i-1}}\vert ^{m}\bigr)\bigl\vert b(X_{t_{i-1}},t_{i-1})\bigr\vert \\ &{}\times\biggl\vert \int ^{t_{i}}_{t_{i-1}}\bigl(b(X_{s},s)-b(X_{t_{i-1}},t_{i-1}) \bigr)\,ds\biggr\vert \\ \leq& K'L\vert r_{0}\vert \sum ^{n}_{i=1}\bigl(1+\vert X_{t_{i-1}}\vert ^{m}\bigr)\bigl\vert b(X_{t_{i-1}},t_{i-1})\bigr\vert \int ^{t_{i}}_{t_{i-1}}\vert X_{s}-X_{t_{i-1}} \vert \,ds \\ \leq& K'L\vert r_{0}\vert \sum ^{n}_{i=1}\bigl(1+\vert X_{t_{i-1}}\vert ^{m}\bigr)\bigl\vert b(X_{t_{i-1}},t_{i-1})\bigr\vert n^{-1}\sup_{t_{i-1}\leq t\leq t_{i}}\vert X_{t}-X_{t_{i-1}} \vert \\ \leq& K'L\vert r_{0}\vert ^{2}e^{\frac{\vert r_{0}\vert L}{n}}n^{-2} \sum^{n}_{i=1}\bigl(1+\vert X_{t_{i-1}}\vert ^{m}\bigr)\bigl\vert b(X_{t_{i-1}},t_{i-1}) \bigr\vert ^{2} \\ &{} +K'L\vert r_{0}\vert e^{\frac{\vert r_{0}\vert L}{n}}n^{-1} \varepsilon\sum^{n}_{i=1}\bigl(1+\vert X_{t_{i-1}}\vert ^{m}\bigr)\bigl\vert b(X_{t_{i-1}},t_{i-1}) \bigr\vert \\ &{}\times\sup_{t_{i-1}\leq t\leq t_{i}}\biggl\vert \int^{t}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}^{\varepsilon}_{s}\biggr\vert \\ =:& \phi_{2,1}(n,\varepsilon)+\phi_{2,2}(n,\varepsilon). \end{aligned}$$
(3.3)

For \(\phi_{2,1}(n,\varepsilon)\), by condition (C.1) and (3.2), we have

$$\begin{aligned} \phi_{2,1}(n,\varepsilon) \leq& K'L\vert r_{0}\vert ^{2}e^{\frac{\vert r_{0}\vert L}{n}}n^{-2}\sum ^{n}_{i=1}\bigl(1+2^{m}\bigl\vert X^{0}_{t_{i-1}}\bigr\vert ^{m}+2^{m}\bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}}\bigr\vert ^{m} \bigr) \\ &{}\times \bigl(\bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1} \bigr)\bigr\vert +L\bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}} \bigr\vert \bigr)^{2} \\ \leq& K'L\vert r_{0}\vert ^{2}e^{\frac{\vert r_{0}\vert L}{n}}n^{-2} \sum^{n}_{i=1}\bigl\vert b \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert ^{2}\bigl(1+2^{m}\bigl\vert X^{0}_{t_{i-1}} \bigr\vert ^{m}\bigr) \\ &{} +K'L\vert r_{0}\vert ^{2}e^{\frac{\vert r_{0}\vert L}{n}}n^{-2} \sum^{n}_{i=1}\bigl\vert b \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert ^{2}\bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}}\bigr\vert ^{m} \\ &{} +K'L\vert r_{0}\vert ^{2}e^{\frac{\vert r_{0}\vert L}{n}}n^{-2} \sum^{n}_{i=1}\bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}}\bigr\vert ^{2} \bigl(1+2^{m}\bigl\vert X^{0}_{t_{i-1}}\bigr\vert ^{m}\bigr) \\ &{} +K'L\vert r_{0}\vert ^{2}e^{\frac{\vert r_{0}\vert L}{n}}n^{-2} \sum^{n}_{i=1}\bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}}\bigr\vert ^{m+2} \\ \leq& K'L\vert r_{0}\vert ^{2}e^{\frac{\vert r_{0}\vert L}{n}}n^{-2} \sum^{n}_{i=1}\bigl\vert b \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert ^{2}\bigl(1+2^{m}\bigl\vert X^{0}_{t_{i-1}} \bigr\vert ^{m}\bigr) \\ &{} +K'L\vert r_{0}\vert ^{2}e^{\frac{\vert r_{0}\vert L}{n}} \sup_{0\leq t\leq1}\bigl\vert X_{t}-X^{0}_{t} \bigr\vert ^{m}n^{-2}\sum^{n}_{i=1} \bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert ^{2} \\ &{} +K'L\vert r_{0}\vert ^{2}e^{\frac{\vert r_{0}\vert L}{n}} \sup_{0\leq t\leq1}\bigl\vert X_{t}-X^{0}_{t} \bigr\vert ^{2}n^{-2}\sum^{n}_{i=1} \bigl(1+2^{m}\bigl\vert X^{0}_{t_{i-1}}\bigr\vert ^{m}\bigr) \\ &{}+ K'L\vert r_{0}\vert ^{2}e^{\frac{\vert r_{0}\vert L}{n}} \sup_{0\leq t\leq1}\bigl\vert X_{t}-X^{0}_{t} \bigr\vert ^{m+2}n^{-1}. \end{aligned}$$
(3.4)

By Lemma 2.3, it is clear to see that \(\phi_{2,1}(n,\varepsilon)\rightarrow_{Q_{\varepsilon}}0\) as both \(n\rightarrow\infty\) and \(\varepsilon\rightarrow0\).

As for \(\phi_{2,2}(n,\varepsilon)\), we have

$$\begin{aligned} \phi_{2,2} \leq& K'L\vert r_{0} \vert e^{\frac{\vert r_{0}\vert L}{n}}\varepsilon n^{-1}\sum ^{n}_{i=1}\bigl(1+2^{m}\bigl\vert X^{0}_{t_{i-1}}\bigr\vert ^{m}+2^{m}\bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}}\bigr\vert ^{m} \bigr) \\ &{} \times \bigl(\bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1} \bigr)\bigr\vert +L\bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}} \bigr\vert \bigr)\sup_{t_{i-1}\leq t\leq t_{i}}\biggl\vert \int^{t}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}^{\varepsilon}_{s}\biggr\vert \\ \leq& K'L\vert r_{0}\vert e^{\frac{\vert r_{0}\vert L}{n}} \varepsilon n^{-1}\sum^{n}_{i=1} \bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert \bigl(1+2^{m}\bigl\vert X^{0}_{t_{i-1}}\bigr\vert ^{m}\bigr)\sup_{t_{i-1}\leq t\leq t_{i}}\biggl\vert \int^{t}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}^{\varepsilon}_{s}\biggr\vert \\ &{} +K'L\vert r_{0}\vert e^{\frac{\vert r_{0}\vert L}{n}} \varepsilon n^{-1}\sum^{n}_{i=1} \bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert \bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}}\bigr\vert ^{m}\sup_{t_{i-1}\leq t\leq t_{i}}\biggl\vert \int^{t}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}^{\varepsilon}_{s}\biggr\vert \\ &{} +K'L\vert r_{0}\vert e^{\frac{\vert r_{0}\vert L}{n}} \varepsilon n^{-1}\sum^{n}_{i=1} \bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}}\bigr\vert \bigl(1+2^{m}\bigl\vert X^{0}_{t_{i-1}}\bigr\vert ^{m}\bigr)\sup_{t_{i-1}\leq t\leq t_{i}}\biggl\vert \int^{t}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}^{\varepsilon}_{s}\biggr\vert \\ &{} +K'L\vert r_{0}\vert e^{\frac{\vert r_{0}\vert L}{n}} \varepsilon n^{-1}\sum^{n}_{i=1} \bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}}\bigr\vert ^{m+1}\sup_{t_{i-1}\leq t\leq t_{i}}\biggl\vert \int^{t}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}^{\varepsilon}_{s}\biggr\vert \\ \leq& K'L\vert r_{0}\vert e^{\frac{\vert r_{0}\vert L}{n}} \varepsilon n^{-1}\sum^{n}_{i=1} \bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert \bigl(1+2^{m}\bigl\vert X^{0}_{t_{i-1}}\bigr\vert ^{m}\bigr)\sup_{t_{i-1}\leq t\leq t_{i}}\biggl\vert \int^{t}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}^{\varepsilon}_{s}\biggr\vert \\ &{} +K'L\vert r_{0}\vert e^{\frac{\vert r_{0}\vert L}{n}} \varepsilon n^{-1}\sup_{0\leq t\leq1}\bigl\vert X_{t}-X^{0}_{t}\bigr\vert ^{m}\sum ^{n}_{i=1}\bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1} \bigr)\bigr\vert \\ &{} \times\sup_{t_{i-1}\leq t\leq t_{i}}\biggl\vert \int^{t}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}^{\varepsilon}_{s}\biggr\vert \\ &{} +K'L\vert r_{0}\vert e^{\frac{\vert r_{0}\vert L}{n}} \varepsilon n^{-1}\sup_{0\leq t\leq1}\bigl\vert X_{t}-X^{0}_{t}\bigr\vert \sum ^{n}_{i=1}\bigl(1+2^{m} \bigl\vert X^{0}_{t_{i-1}}\bigr\vert ^{m}\bigr) \\ &{} \times\sup_{t_{i-1}\leq t\leq t_{i}}\biggl\vert \int^{t}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}^{\varepsilon}_{s}\biggr\vert \\ &{} +K'L\vert r_{0}\vert e^{\frac{\vert r_{0}\vert L}{n}} \varepsilon n^{-1}\sup_{0\leq t\leq1}\bigl\vert X_{t}-X^{0}_{t}\bigr\vert ^{m+1}\sum ^{n}_{i=1}\sup_{t_{i-1}\leq t\leq t_{i}}\biggl\vert \int^{t}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}^{\varepsilon}_{s}\biggr\vert \\ =:& \phi^{1}_{2,2}(n,\varepsilon)+\phi^{2}_{2,2}(n, \varepsilon)+\phi ^{3}_{2,2}(n,\varepsilon)+ \phi^{4}_{2,2}(n,\varepsilon). \end{aligned}$$
(3.5)

For \(\phi^{1}_{2,2}(n,\varepsilon)\), by the Markov inequality, the Hölder inequality, the Gronwall inequality, and Lemma 2.1, for any given \(\gamma>0\), we have

$$\begin{aligned} Q_{\varepsilon}\bigl(\bigl\vert \phi^{1}_{2,2}(n, \varepsilon)\bigr\vert >\gamma\bigr) \leq&\gamma^{-1} K'L\vert r_{0}\vert e^{\frac{\vert r_{0}\vert L}{n}}\varepsilon n^{-1}\sum^{n}_{i=1}\bigl\vert b \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert \bigl(1+2^{m}\bigl\vert X^{0}_{t_{i-1}}\bigr\vert ^{m}\bigr) \\ &{}\times E_{Q_{\varepsilon}} \biggl[\sup_{t_{i-1}\leq t\leq t_{i}}\biggl\vert \int^{t}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}^{\varepsilon}_{s}\biggr\vert \biggr] \\ \leq&\gamma^{-1} K'L\vert r_{0}\vert e^{\frac{\vert r_{0}\vert L}{n}}\varepsilon n^{-1}\sum^{n}_{i=1} \bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert \bigl(1+2^{m}\bigl\vert X^{0}_{t_{i-1}}\bigr\vert ^{m}\bigr) \\ &{}\times 4\sqrt{2}E_{Q_{\varepsilon}} \biggl[ \biggl( \int ^{t_{i}}_{t_{i-1}}\bigl\vert \sigma(X_{s},s) \bigr\vert ^{2}\,ds \biggr)^{\frac{1}{2}} \biggr] \\ \leq&\gamma^{-1} K'L\vert r_{0}\vert e^{\frac{\vert r_{0}\vert L}{n}}\varepsilon n^{-1}\sum^{n}_{i=1} \bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert \bigl(1+2^{m}\bigl\vert X^{0}_{t_{i-1}}\bigr\vert ^{m}\bigr) \\ &{}\times 4\sqrt{2} \biggl( \int^{t_{i}}_{t_{i-1}}E_{Q_{\varepsilon}}\bigl\vert \sigma (X_{s},s)\bigr\vert ^{2}\,ds \biggr)^{\frac{1}{2}} \\ \leq&\gamma^{-1} K'L\vert r_{0}\vert e^{\frac{\vert r_{0}\vert L}{n}}\varepsilon n^{-1}\sum^{n}_{i=1} \bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert \bigl(1+2^{m}\bigl\vert X^{0}_{t_{i-1}}\bigr\vert ^{m}\bigr) \\ &{}\times 4\sqrt{2} \biggl( \int^{t_{i}}_{t_{i-1}}K\bigl(1+E_{Q_{\varepsilon}} \vert X_{s}\vert ^{2}\bigr)\,ds \biggr)^{\frac{1}{2}} \\ \leq&\gamma^{-1} K'L\vert r_{0}\vert e^{\frac{\vert r_{0}\vert L}{n}}\varepsilon n^{-1}\sum^{n}_{i=1} \bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert \bigl(1+2^{m}\bigl\vert X^{0}_{t_{i-1}}\bigr\vert ^{m}\bigr) \\ &{}\times 4\sqrt{2}K^{\frac{1}{2}}(1+C)^{\frac{1}{2}}n^{-\frac{1}{2}}. \end{aligned}$$

This implies that \(\phi^{1}_{2,2}(n,\varepsilon)\rightarrow_{Q_{\varepsilon}}0\) as both \(n\rightarrow\infty\) and \(\varepsilon\rightarrow0\). By similar derivations, using Lemma 2.3 in addition, we further get \(\phi ^{2}_{2,2}(n,\varepsilon)\), \(\phi^{3}_{2,2}(n,\varepsilon)\), \(\phi^{4}_{2,2}(n,\varepsilon)\rightarrow_{Q_{\varepsilon}}0\) as both \(n\rightarrow\infty\) and \(\varepsilon\rightarrow0\). □

Finally, let us turn to the asymptotic behavior of \(\phi _{3}(n,\varepsilon)\). We have the following result.

Lemma 3.3

Under conditions (C.1)-(C.4), we have \(\phi_{3}(n,\varepsilon)\rightarrow_{Q_{\varepsilon}}0\) as both \(n\rightarrow\infty\) and \(\varepsilon\rightarrow0\) with \(\varepsilon n^{1/2}\rightarrow0\).

Proof

By conditions (C.1), (C.4), and (3.2), we have

$$\begin{aligned} \bigl\vert \phi_{3}(n,\varepsilon)\bigr\vert \leq&\varepsilon\sum ^{n}_{i=1}K'\bigl(1+\vert X_{t_{i-1}}\vert ^{m}\bigr)\bigl\vert b(X_{t_{i-1}},t_{i-1}) \bigr\vert \biggl\vert \int^{t_{i}}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}^{\varepsilon}_{s} \biggr\vert \\ \leq&\varepsilon\sum^{n}_{i=1}K' \bigl(1+2^{m}\bigl\vert X^{0}_{t_{i-1}}\bigr\vert ^{m}+2^{m}\bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}} \bigr\vert ^{m}\bigr) \\ &{}\times \bigl(\bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1} \bigr)\bigr\vert +L\bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}} \bigr\vert \bigr) \biggl\vert \int^{t_{i}}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}^{\varepsilon}_{s}\biggr\vert \\ \leq&\varepsilon K'\sum^{n}_{i=1} \bigl(1+2^{m}\bigl\vert X^{0}_{t_{i-1}}\bigr\vert ^{m}\bigr)\bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1} \bigr)\bigr\vert \biggl\vert \int^{t_{i}}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}^{\varepsilon}_{s}\biggr\vert \\ &{} +\varepsilon K'2^{m}\sum ^{n}_{i=1}\bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}} \bigr\vert ^{m}\bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1} \bigr)\bigr\vert \biggl\vert \int^{t_{i}}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}^{\varepsilon}_{s}\biggr\vert \\ &{} +\varepsilon K'L\sum^{n}_{i=1} \bigl(1+2^{m}\bigl\vert X^{0}_{t_{i-1}}\bigr\vert ^{m}\bigr)\bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}} \bigr\vert \biggl\vert \int^{t_{i}}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}^{\varepsilon}_{s}\biggr\vert \\ &{} +\varepsilon K'2^{m}L\sum ^{n}_{i=1}\bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}} \bigr\vert ^{m+1}\biggl\vert \int ^{t_{i}}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}^{\varepsilon}_{s}\biggr\vert \\ \leq&\varepsilon K'\sum^{n}_{i=1} \bigl(1+2^{m}\bigl\vert X^{0}_{t_{i-1}}\bigr\vert ^{m}\bigr)\bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1} \bigr)\bigr\vert \biggl\vert \int^{t_{i}}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}^{\varepsilon}_{s}\biggr\vert \\ &{} +\varepsilon K'2^{m}\sup_{0\leq t\leq1} \bigl\vert X_{t}-X^{0}_{t}\bigr\vert ^{m}\sum^{n}_{i=1}\bigl\vert b \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert \biggl\vert \int^{t_{i}}_{t_{i-1}}\sigma (X_{s},s)\,d \hat{B}^{\varepsilon}_{s}\biggr\vert \\ &{} +\varepsilon K'L\sup_{0\leq t\leq1}\bigl\vert X_{t}-X^{0}_{t}\bigr\vert \sum ^{n}_{i=1}\bigl(1+2^{m}\bigl\vert X^{0}_{t_{i-1}}\bigr\vert ^{m}\bigr)\biggl\vert \int^{t_{i}}_{t_{i-1}}\sigma (X_{s},s)\,d \hat{B}^{\varepsilon}_{s}\biggr\vert \\ &{} +\varepsilon K'2^{m}L\sup_{0\leq t\leq1} \bigl\vert X_{t}-X^{0}_{t}\bigr\vert ^{m+1}\sum^{n}_{i=1}\biggl\vert \int ^{t_{i}}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}^{\varepsilon}_{s}\biggr\vert \\ =:& \phi_{3,1}(n,\varepsilon)+\phi_{3,2}(n,\varepsilon)+\phi _{3,3}(n,\varepsilon)+\phi_{3,4}(n,\varepsilon). \end{aligned}$$

For \(\phi_{3,1}(n,\varepsilon)\), by the Markov inequality and Lemma 2.1, we have, for any given \(\gamma>0\),

$$\begin{aligned} Q_{\varepsilon}\bigl(\bigl\vert \phi_{3,1}(n,\varepsilon)\bigr\vert >\gamma\bigr) \leq&\gamma^{-1}K'\varepsilon\sum ^{n}_{i=1}\bigl(1+2^{m}\bigl\vert X^{0}_{t_{i-1}}\bigr\vert ^{m}\bigr)\bigl\vert b \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert \\ &{}\times E_{Q_{\varepsilon}} \biggl[\sup_{t_{i-1}\leq t\leq t_{i}}\biggl\vert \int^{t}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}^{\varepsilon}_{s}\biggr\vert \biggr] \\ \leq&\gamma^{-1}K'\varepsilon\sum ^{n}_{i=1}\bigl(1+2^{m}\bigl\vert X^{0}_{t_{i-1}}\bigr\vert ^{m}\bigr)\bigl\vert b \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert \\ &{}\times4\sqrt {2} E_{Q_{\varepsilon}} \biggl[ \biggl( \int^{t_{i}}_{t_{i-1}}\bigl\vert \sigma (X_{s},s)\bigr\vert ^{2}\,ds \biggr)^{\frac{1}{2}} \biggr] \\ \leq&\gamma^{-1}K'\varepsilon\sum ^{n}_{i=1}\bigl(1+2^{m}\bigl\vert X^{0}_{t_{i-1}}\bigr\vert ^{m}\bigr)\bigl\vert b \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert \\ &{}\times 4\sqrt {2}K^{\frac{1}{2}}(1+C)^{\frac{1}{2}}n^{-\frac{1}{2}}. \end{aligned}$$

The above further implies that \(\phi_{3,1}(n,\varepsilon)\rightarrow_{Q_{\varepsilon}}0\) as \(n\rightarrow\infty\) and \(\varepsilon n^{\frac{1}{2}}\rightarrow0\) simultaneously. Here, we remark that the new condition \(\varepsilon n^{\frac{1}{2}}\rightarrow0\) has been added to ensure convergence. In the same manner, we further obtain \(\phi_{3,j}(n,\varepsilon)\rightarrow_{Q_{\varepsilon}}0\), \(j=2,3,4\), as \(n\rightarrow\infty\) and \(\varepsilon n^{\frac{1}{2}}\rightarrow0\). □

Now we are in a position to prove Theorem 3.1.

Proof of Theorem 3.1

Let \(n\rightarrow\infty\) and \(\varepsilon\rightarrow0\) with \(\varepsilon n^{\frac{1}{2}}\rightarrow0\). Then, by Lemma 3.1, Lemma 3.2, and Lemma 3.3, we have

$$ \hat{r}_{n,\varepsilon}=r_{0}+\frac{\phi_{2}(n,\varepsilon)}{\phi_{1}(n,\varepsilon)}+\frac {\phi_{3}(n,\varepsilon)}{\phi_{1}(n,\varepsilon)} \rightarrow_{Q_{\varepsilon}}r_{0}. $$

This completes the proof. □
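Theorem 3.1 lends itself to a quick numerical check. The sketch below continues the illustrative setting of the earlier snippets; paths are generated directly from the simplified equation (2.2), i.e., under \(Q_{\varepsilon}\), and ε is coupled to n as \(\varepsilon=n^{-0.6}\) so that both \(\varepsilon n^{\frac{1}{2}}\rightarrow0\) and \(n\varepsilon\rightarrow\infty\):

```python
def simulate_Q(x0, r, eps, n, rng):
    """Euler-Maruyama scheme for the simplified equation (2.2) under Q_eps."""
    dt = 1.0 / n
    X = np.empty(n + 1)
    X[0] = x0
    t = 0.0
    for i in range(n):
        dB = rng.normal(0.0, np.sqrt(dt))
        X[i + 1] = X[i] + r * b(X[i], t) * dt + eps * sigma(X[i], t) * dB
        t += dt
    return X

r0 = 0.5
for n in (100, 400, 1600):
    eps = n ** -0.6                            # eps*sqrt(n) -> 0, n*eps -> infinity
    errs = [lse_r(simulate_Q(1.0, r0, eps, n, rng), b, sigma, n) - r0
            for _ in range(100)]
    print(n, np.mean(errs), np.std(errs))      # both should shrink with n
```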

4 Asymptotics of the least squares estimator

Our aim in this section is to study the asymptotic behavior of the least squares estimator. For the sake of simplicity, we assume that \(\alpha(x,t,\varepsilon)=\varepsilon\alpha(x,t)\), so that \(u_{\varepsilon}\), and hence \(Q_{\varepsilon}=Q\), is independent of ε. The main result of this section is formulated as follows.

Theorem 4.1

There exist two independent random variables \(U_{1}\) and \(U_{2}\), each having the standard normal distribution \(N(0,1)\) under Q, such that

$$\begin{aligned} \varepsilon^{-1}(\hat{r}_{n,\varepsilon}-r_{0}) \quad \Rightarrow_{Q}\quad &\frac { (\int^{1}_{0}|\sigma(X_{s}^{0},s)|^{-4}[(b(X^{0}_{s},s)\sigma (X_{s}^{0},s))_{+}]^{2}\,ds )^{\frac{1}{2}}U_{1}}{\int^{1}_{0}\sigma ^{-2}(X_{s}^{0},s)b^{2}(X^{0}_{s},s)\,ds} \\ &\quad {} -\frac{ (\int^{1}_{0}|\sigma (X_{s}^{0},s)|^{-4}[(b(X^{0}_{s},s)\sigma(X_{s}^{0},s))_{-}]^{2}\,ds )^{\frac {1}{2}}U_{2}}{\int^{1}_{0}\sigma^{-2}(X_{s}^{0},s)b^{2}(X^{0}_{s},s)\,ds} \end{aligned}$$

as \(n\rightarrow\infty\), \(\varepsilon\rightarrow0\), \(n\varepsilon\rightarrow\infty\), and \(\varepsilon n^{\frac{1}{2}}\rightarrow0\) simultaneously.
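Before turning to the proof, we note that the limit law is easy to check empirically in the illustrative setting of the earlier sketches: for \(b(x,t)=x\), \(\sigma\equiv1\), \(x=1\), and \(r_{0}>0\) we have \(X^{0}_{t}=e^{r_{0}t}>0\), so the negative part vanishes and the limit reduces to a centered normal with variance \(1/\int^{1}_{0}e^{2r_{0}s}\,ds\):

```python
# Empirical check of Theorem 4.1 (illustrative assumptions: b(x,t)=x, sigma=1,
# x=1, r0=0.5; reuses simulate_Q, lse_r, b, sigma, and rng from earlier sketches).
n, r0 = 1600, 0.5
eps = n ** -0.6
zs = np.array([(lse_r(simulate_Q(1.0, r0, eps, n, rng), b, sigma, n) - r0) / eps
               for _ in range(300)])
var_limit = 1.0 / ((np.exp(2 * r0) - 1) / (2 * r0))   # 1 / int_0^1 e^{2 r0 s} ds
print(zs.var(), var_limit)                             # the two should be close
```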

To show Theorem 4.1, using the decomposition of \(\hat{r}_{n,\varepsilon}\) obtained in Section 3, we can rewrite \(\varepsilon ^{-1}(\hat{r}_{n,\varepsilon}-r_{0})\) as follows:

$$ \varepsilon^{-1}(\hat{r}_{n,\varepsilon}-r_{0})= \frac{\varepsilon ^{-1}\phi_{2}(n,\varepsilon)}{\phi_{1}(n,\varepsilon)}+\frac {\varepsilon^{-1}\phi_{3}(n,\varepsilon)}{\phi_{1}(n,\varepsilon)} =: \frac{\Phi_{2}(n,\varepsilon)}{\phi_{1}(n,\varepsilon)}+\frac{\Phi _{3}(n,\varepsilon)}{\phi_{1}(n,\varepsilon)}. $$

Theorem 4.1 will be proved by verifying the following lemmas, in which we shall discuss the asymptotic behaviors of \(\Phi_{i}({n,\varepsilon })\), \(i=2,3\), respectively.

Lemma 4.1

Under conditions (C.1)-(C.4), we have \(\Phi_{2}({n,\varepsilon})\rightarrow_{Q}0\) as \(n\rightarrow\infty\), \(\varepsilon\rightarrow0\), and \(n\varepsilon\rightarrow\infty\) simultaneously.

Proof

From the proof of Lemma 3.2, by (3.3), we have

$$ \bigl\vert \Phi_{2}({n,\varepsilon})\bigr\vert = \varepsilon^{-1}\bigl\vert \phi_{2}({n,\varepsilon})\bigr\vert \leq\varepsilon^{-1}\phi_{2,1}({n,\varepsilon})+ \varepsilon ^{-1}\phi_{2,2}({n,\varepsilon}) =: \Phi_{2,1}({n,\varepsilon})+\Phi_{2,2}({n,\varepsilon}). $$

By (3.4), it is easy to see that \(\Phi_{2,1}({n,\varepsilon})\rightarrow_{Q}0\) as \(n\rightarrow\infty\), \(\varepsilon\rightarrow0\), and \(n\varepsilon\rightarrow\infty\); notice that the new condition \(n\varepsilon\rightarrow\infty\) is needed here for convergence. Similarly, \(\Phi_{2,2}({n,\varepsilon})\rightarrow_{Q}0\) as both \(n\rightarrow \infty\) and \(\varepsilon\rightarrow0\). This completes the proof of Lemma 4.1. □

Next, let us turn to the consideration of the asymptotic behavior of \(\Phi_{3}(n,\varepsilon)\).

Lemma 4.2

Under conditions (C.1)-(C.4), we have

$$\begin{aligned} \Phi_{3}(n,\varepsilon) \quad \Rightarrow_{Q}\quad & \biggl( \int^{1}_{0}\bigl|\sigma \bigl(X_{s}^{0},s \bigr)\bigr|^{-4}\bigl[\bigl(b\bigl(X^{0}_{s},s\bigr) \sigma\bigl(X_{s}^{0},s\bigr)\bigr)_{+}\bigr]^{2}\,ds \biggr)^{\frac {1}{2}}U_{1} \\ &\quad {} - \biggl( \int^{1}_{0}\bigl|\sigma\bigl(X_{s}^{0},s \bigr)\bigr|^{-4}\bigl[\bigl(b\bigl(X^{0}_{s},s\bigr) \sigma \bigl(X_{s}^{0},s\bigr)\bigr)_{-}\bigr]^{2} \,ds \biggr)^{\frac{1}{2}}U_{2} \end{aligned}$$

as both \(n\rightarrow\infty\) and \(\varepsilon\rightarrow0\) with \(\varepsilon n^{\frac{1}{2}}\rightarrow0\).

Proof

We have

$$\begin{aligned} \Phi_{3}(n,\varepsilon) =&\sum^{n}_{i=1} \sigma ^{-2}(X_{t_{i-1}},t_{i-1})b(X_{t_{i-1}},t_{i-1}) \int ^{t_{i}}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}_{s} \\ =&\sum^{n}_{i=1}\sigma ^{-2} \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)b\bigl(X^{0}_{t_{i-1}},t_{i-1} \bigr) \int ^{t_{i}}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}_{s} \\ &{} +\sum^{n}_{i=1}\sigma ^{-2} \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)b\bigl(X^{0}_{t_{i-1}},t_{i-1} \bigr) \int ^{t_{i}}_{t_{i-1}}\bigl(\sigma(X_{s},s)- \sigma\bigl(X^{0}_{s},s\bigr)\bigr)\,d\hat{B}_{s} \\ &{} +\sum^{n}_{i=1}\sigma ^{-2} \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigl[b(X_{t_{i-1}},t_{i-1})-b \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr] \int ^{t_{i}}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}_{s} \\ &{} +\sum^{n}_{i=1}\bigl[ \sigma^{-2}(X_{t_{i-1}},t_{i-1})-\sigma ^{-2} \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr]b \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr) \int ^{t_{i}}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}_{s} \\ &{} +\sum^{n}_{i=1}\bigl[ \sigma^{-2}(X_{t_{i-1}},t_{i-1})-\sigma ^{-2} \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr] \bigl[b(X_{t_{i-1}},t_{i-1}) \\ &{} -b\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr] \int^{t_{i}}_{t_{i-1}}\sigma(X_{s},s)\,d\hat {B}_{s} \\ =:& \Phi_{3,1}(n,\varepsilon)+\Phi_{3,2}(n,\varepsilon)+\Phi _{3,3}(n,\varepsilon)+\Phi_{3,4}(n,\varepsilon)+\Phi _{3,5}(n,\varepsilon). \end{aligned}$$

Let us set \(V(s)\) (which is deterministic) as follows:

$$V(s):=\sum^{n}_{i=1}\sigma ^{-2} \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)b\bigl(X^{0}_{t_{i-1}},t_{i-1} \bigr)\sigma \bigl(X_{s}^{0},s\bigr)\mathbb {1}_{(t_{i-1},t_{i}]}(s). $$

Let \(V_{+}(s)\) and \(V_{-}(s)\) be the positive and negative parts of \(V(s)\), respectively. Then, by Theorem 4.1 of Kallenberg [30], there exist two independent Q-Brownian motions \(\hat{B}'\) and \(\hat{B}''\), with the same distribution as \(\hat{B}\), such that

$$ \Phi_{3,1}(n,\varepsilon)= \int^{1}_{0}V(s)\,d\hat{B}_{s}= \hat{B}'\circ \int^{1}_{0}V^{2}_{+}(s)\,ds- \hat{B}''\circ \int^{1}_{0}V^{2}_{-}(s)\,ds. $$

Note that

$$ V^{2}_{+}=\sum^{n}_{i=1}\bigl\vert \sigma \bigl(X_{t_{i-1}}^{0},t_{i-1}\bigr)\bigr\vert ^{-4}\bigl(\bigl(b\bigl(X_{t_{i-1}}^{0},t_{i-1} \bigr)\bigr)\sigma \bigl(X_{s}^{0},s\bigr) \bigr)^{2}_{+}\mathbb {1}_{(t_{i-1},t_{i}]}(s) $$

and

$$ V^{2}_{-}=\sum^{n}_{i=1}\bigl\vert \sigma \bigl(X_{t_{i-1}}^{0},t_{i-1}\bigr)\bigr\vert ^{-4}\bigl(\bigl(b\bigl(X_{t_{i-1}}^{0},t_{i-1} \bigr)\bigr)\sigma \bigl(X_{s}^{0},s\bigr) \bigr)^{2}_{-}\mathbb {1}_{(t_{i-1},t_{i}]}(s). $$

Then we have

$$ \int^{1}_{0}V^{2}_{+}(s)\,ds\rightarrow \int^{1}_{0}\bigl\vert \sigma \bigl(X_{s}^{0},s \bigr)\bigr\vert ^{-4}\bigl(b\bigl(X_{s}^{0},s \bigr)\sigma\bigl(X_{s}^{0},s\bigr)\bigr)^{2}_{+}\,ds $$

and

$$ \int^{1}_{0}V^{2}_{-}(s)\,ds\rightarrow \int^{1}_{0}\bigl\vert \sigma \bigl(X_{s}^{0},s \bigr)\bigr\vert ^{-4}\bigl(b\bigl(X_{s}^{0},s \bigr)\sigma\bigl(X_{s}^{0},s\bigr)\bigr)^{2}_{-}\,ds $$

as \(n\rightarrow\infty\). Then

$$ \hat{B}'\circ \int^{1}_{0}V^{2}_{+}(s)\,ds\rightarrow \hat{B}'\circ \int^{1}_{0}\bigl\vert \sigma\bigl(X_{s}^{0},s \bigr)\bigr\vert ^{-4}\bigl[\bigl(b\bigl(X_{s}^{0},s \bigr)\sigma \bigl(X_{s}^{0},s\bigr)\bigr)_{+} \bigr]^{2}\,ds $$

and

$$ \hat{B}''\circ \int^{1}_{0}V^{2}_{-}(s)\,ds\rightarrow \hat{B}''\circ \int^{1}_{0}\bigl\vert \sigma\bigl(X_{s}^{0},s \bigr)\bigr\vert ^{-4}\bigl[\bigl(b\bigl(X_{s}^{0},s \bigr)\sigma \bigl(X_{s}^{0},s\bigr)\bigr)_{-} \bigr]^{2}\,ds. $$

So we get, as \(n\rightarrow\infty\),

$$\begin{aligned} \Phi_{3,1}(n,\varepsilon) \quad \Rightarrow_{Q}\quad & \biggl( \int^{1}_{0}\bigl\vert \sigma \bigl(X_{s}^{0},s \bigr)\bigr\vert ^{-4}\bigl[\bigl(b\bigl(X^{0}_{s},s \bigr)\sigma\bigl(X_{s}^{0},s\bigr)\bigr)_{+} \bigr]^{2}\,ds \biggr)^{\frac {1}{2}}U_{1} \\ &\quad {} - \biggl( \int^{1}_{0}\bigl\vert \sigma\bigl(X_{s}^{0},s \bigr)\bigr\vert ^{-4}\bigl[\bigl(b\bigl(X^{0}_{s},s \bigr)\sigma \bigl(X_{s}^{0},s\bigr)\bigr)_{-} \bigr]^{2}\,ds \biggr)^{\frac{1}{2}}U_{2}. \end{aligned}$$

For \(\Phi_{3,2}(n,\varepsilon)\), by condition (C.3), the Markov inequality, and Lemmas 2.1 and 2.2, for any given \(\gamma>0\), we have

$$\begin{aligned}& Q\bigl(\bigl\vert \Phi_{3,2}(n,\varepsilon)\bigr\vert >\gamma\bigr) \\& \quad \leq\gamma^{-1}E_{Q} \Biggl[\sum ^{n}_{i=1}\sigma ^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1} \bigr)\bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr) \bigr\vert \biggl\vert \int ^{t_{i}}_{t_{i-1}}\bigl(\sigma(X_{s},s)- \sigma\bigl(X^{0}_{s},s\bigr)\bigr)\,d\hat{B}_{s} \biggr\vert \Biggr] \\& \quad \leq\gamma^{-1}\sum^{n}_{i=1} \sigma ^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr) \bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert E_{Q} \biggl[ \biggl\vert \int^{t_{i}}_{t_{i-1}}\bigl(\sigma(X_{s},s)-\sigma \bigl(X^{0}_{s},s\bigr)\bigr)\,d\hat{B}_{s}\biggr\vert \biggr] \\& \quad \leq\gamma^{-1}\sum^{n}_{i=1} \sigma ^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr) \bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert 4\sqrt{2}E_{Q} \biggl[ \biggl( \int^{t_{i}}_{t_{i-1}}\bigl\vert \sigma(X_{s},s)- \sigma\bigl(X^{0}_{s},s\bigr)\bigr\vert ^{2}\,ds \biggr)^{\frac{1}{2}} \biggr] \\& \quad \leq\gamma^{-1}L'4\sqrt{2}\sum^{n}_{i=1} \sigma ^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr) \bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert E_{Q} \biggl[ \biggl( \int^{t_{i}}_{t_{i-1}}\bigl\vert X_{s}-X^{0}_{s} \bigr\vert ^{2}\,ds \biggr)^{\frac{1}{2}} \biggr] \\& \quad \leq\gamma^{-1}L'4\sqrt{2}n^{-\frac{1}{2}}\sum ^{n}_{i=1}\sigma ^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1} \bigr)\bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr) \bigr\vert E_{Q} \Bigl[\sup_{t_{i-1}\leq s\leq t_{i}}\bigl\vert X_{s}-X^{0}_{s}\bigr\vert \Bigr] \\& \quad \leq\gamma^{-1}L'4\sqrt{2}n^{-\frac{1}{2}}\sum ^{n}_{i=1}\sigma ^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1} \bigr)\bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr) \bigr\vert \\& \qquad {} \times E_{Q} \biggl[\varepsilon e^{L\vert r_{0}\vert t_{i}}\sup _{0\leq t\leq t_{i}}\biggl\vert \int^{t}_{0}\sigma(X_{s},s)\,d \hat{B}_{s}\biggr\vert \biggr] \\& \quad \leq\gamma^{-1}L'32e^{L\vert r_{0}\vert }n^{-\frac{1}{2}}\varepsilon \sum^{n}_{i=1}\sigma ^{-2} \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigl\vert b \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert \\& \qquad {}\times E_{Q} \biggl[ \biggl( \int^{t_{i}}_{0}\bigl\vert \sigma(X_{s},s) \bigr\vert ^{2}\,ds \biggr)^{\frac{1}{2}} \biggr], \end{aligned}$$
(4.1)

which is of order \(\varepsilon n^{\frac{1}{2}}\) and hence tends to zero as both \(n\rightarrow\infty\) and \(\varepsilon\rightarrow0\) with \(\varepsilon n^{\frac{1}{2}}\rightarrow0\).

For \(\Phi_{3,3}(n,\varepsilon)\), we have

$$\begin{aligned} \Phi_{3,3}(n,\varepsilon) =&\sum^{n}_{i=1} \sigma ^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr) \bigl[b(X_{t_{i-1}},t_{i-1})-b\bigl(X^{0}_{t_{i-1}},t_{i-1} \bigr)\bigr] \int ^{t_{i}}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}_{s} \\ =&\sum^{n}_{i=1}\sigma ^{-2} \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigl[b(X_{t_{i-1}},t_{i-1})-b \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr] \int ^{t_{i}}_{t_{i-1}}\sigma\bigl(X^{0}_{s},s \bigr)\,d\hat{B}_{s} \\ &{} +\sum^{n}_{i=1}\sigma ^{-2} \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigl[b(X_{t_{i-1}},t_{i-1})-b \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr] \\ &{} \times \int^{t_{i}}_{t_{i-1}}\bigl(\sigma (X_{s},s)- \sigma\bigl(X^{0}_{s},s\bigr)\bigr)\,d\hat{B}_{s} \\ =:& \Phi^{1}_{3,3}(n,\varepsilon)+\Phi^{2}_{3,3}(n, \varepsilon). \end{aligned}$$

For \(\Phi^{1}_{3,3}(n,\varepsilon)\), by condition (C.1), we have

$$\begin{aligned} \bigl\vert \Phi^{1}_{3,3}(n,\varepsilon)\bigr\vert =&\Biggl\vert \sum^{n}_{i=1}\sigma^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigl[b(X_{t_{i-1}},t_{i-1})-b\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr] \int^{t_{i}}_{t_{i-1}}\sigma\bigl(X^{0}_{s},s\bigr)\,d\hat{B}_{s}\Biggr\vert \\ \leq& L\sum^{n}_{i=1}\sigma^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}}\bigr\vert \biggl\vert \int^{t_{i}}_{t_{i-1}}\sigma\bigl(X^{0}_{s},s\bigr)\,d\hat{B}_{s}\biggr\vert \\ \leq& L\sum^{n}_{i=1}\sigma^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\varepsilon e^{L\vert r_{0}\vert t_{i-1}}\biggl\vert \int^{t_{i-1}}_{0}\sigma(X_{s},s)\,d\hat{B}_{s}\biggr\vert \biggl\vert \int^{t_{i}}_{t_{i-1}}\sigma\bigl(X^{0}_{s},s\bigr)\,d\hat{B}_{s}\biggr\vert . \end{aligned}$$

By the Markov inequality and Lemma 2.1, for any given \(\gamma>0\), we get

$$\begin{aligned}& Q\bigl(\bigl\vert \Phi^{1}_{3,3}(n,\varepsilon)\bigr\vert >\gamma\bigr) \\& \quad \leq\gamma^{-1}L\varepsilon\sum^{n}_{i=1}\sigma^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)e^{L\vert r_{0}\vert t_{i-1}}E_{Q}\biggl\vert \int^{t_{i-1}}_{0}\sigma(X_{s},s)\,d\hat{B}_{s}\biggr\vert E_{Q}\biggl\vert \int^{t_{i}}_{t_{i-1}}\sigma\bigl(X^{0}_{s},s\bigr)\,d\hat{B}_{s}\biggr\vert \\& \quad \leq32\gamma^{-1}L\varepsilon\sum^{n}_{i=1}\sigma^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)e^{L\vert r_{0}\vert t_{i-1}}E_{Q} \biggl[ \biggl( \int^{t_{i-1}}_{0}\bigl\vert \sigma(X_{s},s)\bigr\vert ^{2}\,ds \biggr)^{\frac{1}{2}} \biggr] \\& \qquad {}\times \biggl[ \biggl( \int^{t_{i}}_{t_{i-1}}\bigl\vert \sigma\bigl(X^{0}_{s},s\bigr)\bigr\vert ^{2}\,ds \biggr)^{\frac{1}{2}} \biggr] \\& \quad \leq32e^{L\vert r_{0}\vert }n^{-\frac{1}{2}}K(1+C)\gamma^{-1}L\varepsilon\sum^{n}_{i=1}\sigma^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)t^{\frac{1}{2}}_{i-1} \\& \quad =32e^{L\vert r_{0}\vert }\varepsilon n^{\frac{1}{2}}K(1+C)\gamma^{-1}L n^{-1}\sum^{n}_{i=1}\sigma^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)t^{\frac{1}{2}}_{i-1}, \end{aligned}$$

which tends to zero as \(n\rightarrow\infty\) and \(\varepsilon n^{\frac{1}{2}}\rightarrow0\), since \(n^{-1}\sum^{n}_{i=1}\sigma^{-2}(X^{0}_{t_{i-1}},t_{i-1})t^{\frac{1}{2}}_{i-1}\) is a convergent Riemann sum.

For \(\Phi^{2}_{3,3}(n,\varepsilon)\), by condition (C.1), Lemma 2.3, and the same arguments as used in (4.1), we find

$$\begin{aligned} \bigl\vert \Phi^{2}_{3,3}(n,\varepsilon)\bigr\vert =&\Biggl\vert \sum^{n}_{i=1}\sigma^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigl[b(X_{t_{i-1}},t_{i-1})-b\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr] \\ &{}\times \int^{t_{i}}_{t_{i-1}}\bigl(\sigma(X_{s},s)-\sigma\bigl(X^{0}_{s},s\bigr)\bigr)\,d\hat{B}_{s}\Biggr\vert \\ \leq& L\sum^{n}_{i=1}\sigma^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}}\bigr\vert \biggl\vert \int^{t_{i}}_{t_{i-1}}\bigl(\sigma(X_{s},s)-\sigma\bigl(X^{0}_{s},s\bigr)\bigr)\,d\hat{B}_{s}\biggr\vert \\ \leq& L\sup_{0\leq t\leq1}\bigl\vert X_{t}-X^{0}_{t}\bigr\vert \sum^{n}_{i=1}\sigma^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\biggl\vert \int^{t_{i}}_{t_{i-1}}\bigl(\sigma(X_{s},s)-\sigma\bigl(X^{0}_{s},s\bigr)\bigr)\,d\hat{B}_{s}\biggr\vert , \end{aligned}$$

which converges to zero as both \(n\rightarrow\infty\) and \(\varepsilon \rightarrow0\) with \(\varepsilon n^{\frac{1}{2}}\rightarrow0\). Hence,

$$\Phi_{3,3}(n,\varepsilon)\rightarrow_{Q}0 $$

as \(n\rightarrow\infty\), \(\varepsilon\rightarrow0\), and \(\varepsilon n^{\frac{1}{2}}\rightarrow0\).

For \(\Phi_{3,4}(n,\varepsilon)\), by conditions (C.1)-(C.4) and (3.2), we have

$$\begin{aligned} \bigl\vert \Phi_{3,4}(n,\varepsilon)\bigr\vert \leq&\sum^{n}_{i=1}\bigl\vert \sigma^{-2}(X_{t_{i-1}},t_{i-1})-\sigma ^{-2} \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert \bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert \biggl\vert \int ^{t_{i}}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}_{s}\biggr\vert \\ \leq&\sum^{n}_{i=1} \sigma^{-2}(X_{t_{i-1}},t_{i-1})\sigma ^{-2} \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigl\vert \sigma^{2}(X_{t_{i-1}},t_{i-1})-\sigma ^{2} \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert \\ &{}\times\bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1} \bigr)\bigr\vert \biggl\vert \int ^{t_{i}}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}_{s}\biggr\vert \\ \leq&\sum^{n}_{i=1}K' \bigl(1+\vert X_{t_{i-1}}\vert ^{m}\bigr)\sigma ^{-2} \bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)2K\bigl\vert \sigma(X_{t_{i-1}},t_{i-1})-\sigma \bigl(X^{0}_{t_{i-1}},t_{i-1} \bigr)\bigr\vert \\ &{} \times\bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1} \bigr)\bigr\vert \biggl\vert \int ^{t_{i}}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}_{s}\biggr\vert \\ \leq&2KK'L\sum^{n}_{i=1} \sigma ^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr) \bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert \\ &{}\times\bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}}\bigr\vert \biggl\vert \int^{t_{i}}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}_{s}\biggr\vert \\ &{} +2KK'L2^{m}\sum^{n}_{i=1} \sigma ^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr) \bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert \\ &{}\times\bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}}\bigr\vert ^{m+1} \biggl\vert \int^{t_{i}}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}_{s}\biggr\vert \\ &{} +2KK'L2^{m}\sum^{n}_{i=1} \sigma ^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr) \bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert \\ &{}\times\bigl\vert X^{0}_{t_{i-1}}\bigr\vert ^{m} \bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}}\bigr\vert \biggl\vert \int^{t_{i}}_{t_{i-1}}\sigma(X_{s},s)\,d \hat{B}_{s}\biggr\vert \\ =:& \Phi^{1}_{3,4}(n,\varepsilon)+\Phi^{2}_{3,4}(n, \varepsilon)+\Phi ^{3}_{3,4}(n,\varepsilon). \end{aligned}$$

By the same arguments as used for the convergence of \(\Phi_{3,3}(n,\varepsilon)\), we get \(\Phi^{1}_{3,4}(n,\varepsilon)\rightarrow_{Q}0\) and \(\Phi^{3}_{3,4}(n,\varepsilon)\rightarrow_{Q}0\) as \(n\rightarrow\infty\), \(\varepsilon\rightarrow0\), and \(\varepsilon n^{\frac{1}{2}}\rightarrow0\).

For \(\Phi^{2}_{3,4}(n,\varepsilon)\), we have

$$\begin{aligned} \Phi^{2}_{3,4}(n,\varepsilon) \leq&\sup_{0\leq t\leq1}\bigl\vert X_{t}-X^{0}_{t}\bigr\vert ^{m}2KK'L2^{m}\sum^{n}_{i=1}\sigma^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr) \\ &{} \times \bigl\vert b\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert \bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}}\bigr\vert \biggl\vert \int^{t_{i}}_{t_{i-1}}\sigma(X_{s},s)\,d\hat{B}_{s}\biggr\vert , \end{aligned}$$

which converges to zero in probability, since \(m\geq1\) and \(\sup_{0\leq t\leq1}|X_{t}-X^{0}_{t}|^{m}\) converges to zero in probability as \(\varepsilon\rightarrow0\), while the remaining sum is controlled as in the convergence of \(\Phi_{3,3}(n,\varepsilon)\). Hence, \(\Phi_{3,4}(n,\varepsilon)\rightarrow_{Q}0\) as \(n\rightarrow\infty\), \(\varepsilon\rightarrow0\), and \(\varepsilon n^{\frac{1}{2}}\rightarrow0\).

For \(\Phi_{3,5}(n,\varepsilon)\), there exists \(G>0\) such that

$$\begin{aligned} \bigl\vert \Phi_{3,5}(n,\varepsilon)\bigr\vert \leq&\sum^{n}_{i=1}\bigl\vert \sigma^{-2}(X_{t_{i-1}},t_{i-1})-\sigma^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert \\ &{}\times\bigl\vert b(X_{t_{i-1}},t_{i-1})-b\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert \biggl\vert \int^{t_{i}}_{t_{i-1}}\sigma(X_{s},s)\,d\hat{B}_{s}\biggr\vert \\ \leq&\sum^{n}_{i=1}\sigma^{-2}(X_{t_{i-1}},t_{i-1})\sigma^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigl\vert \sigma^{2}(X_{t_{i-1}},t_{i-1})-\sigma^{2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert \\ &{} \times \bigl\vert b(X_{t_{i-1}},t_{i-1})-b\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr)\bigr\vert \biggl\vert \int^{t_{i}}_{t_{i-1}}\sigma(X_{s},s)\,d\hat{B}_{s}\biggr\vert \\ \leq& G\sum^{n}_{i=1}\sigma^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr) \bigl(1+\vert X_{t_{i-1}}\vert ^{m}\bigr)\bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}}\bigr\vert ^{2} \biggl\vert \int^{t_{i}}_{t_{i-1}}\sigma(X_{s},s)\,d\hat{B}_{s}\biggr\vert \\ \leq&\sup_{0\leq t\leq1}\bigl\vert X_{t}-X^{0}_{t}\bigr\vert G\sum^{n}_{i=1}\sigma^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr) \bigl(1+\vert X_{t_{i-1}}\vert ^{m}\bigr) \\ &{}\times\bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}}\bigr\vert \biggl\vert \int^{t_{i}}_{t_{i-1}}\sigma(X_{s},s)\,d\hat{B}_{s}\biggr\vert , \end{aligned}$$

which converges to zero in probability Q, since \(\sup_{0\leq t\leq1}|X_{t}-X^{0}_{t}|\rightarrow_{Q}0\) as \(\varepsilon\rightarrow0\) by Lemma 2.3, and

$$ G\sum^{n}_{i=1}\sigma^{-2}\bigl(X^{0}_{t_{i-1}},t_{i-1}\bigr) \bigl(1+\vert X_{t_{i-1}}\vert ^{m}\bigr)\bigl\vert X_{t_{i-1}}-X^{0}_{t_{i-1}}\bigr\vert \biggl\vert \int^{t_{i}}_{t_{i-1}}\sigma(X_{s},s)\,d\hat{B}_{s}\biggr\vert \rightarrow_{Q}0 $$

as \(n\rightarrow\infty\), \(\varepsilon\rightarrow0\), and \(\varepsilon n^{\frac{1}{2}}\rightarrow0\), by the same arguments as for the convergence of \(\Phi_{3,4}(n,\varepsilon)\). □

Finally, we are ready to prove Theorem 4.1.

Proof of Theorem 4.1

By Lemmas 3.1, 4.1, and 4.2, we have

$$\begin{aligned}& \varepsilon^{-1}(\hat{r}_{n,\varepsilon}-r_{0}) = \frac{\Phi _{2}(n,\varepsilon)}{\phi_{1}(n,\varepsilon)}+\frac{\Phi _{3}(n,\varepsilon)}{\phi_{3}(n,\varepsilon)} \\& \quad \Rightarrow_{Q}\quad \frac{ (\int^{1}_{0}|\sigma (X_{s}^{0},s)|^{-4}[(b(X^{0}_{s},s)\sigma(X_{s}^{0},s))_{+}]^{2}\,ds )^{\frac {1}{2}}U_{1}}{\int^{1}_{0}\sigma^{-2}(X_{s}^{0},s)b^{2}(X^{0}_{s},s)\,ds} \\& \hphantom{\quad \Rightarrow_{Q}\quad}\quad {}-\frac{ (\int^{1}_{0}|\sigma (X_{s}^{0},s)|^{-4}[(b(X^{0}_{s},s)\sigma(X_{s}^{0},s))_{-}]^{2}\,ds )^{\frac {1}{2}}U_{2}}{\int^{1}_{0}\sigma^{-2}(X_{s}^{0},s)b^{2}(X^{0}_{s},s)\,ds} \end{aligned}$$

as \(n\rightarrow\infty\), \(n\varepsilon\rightarrow\infty\), \(\varepsilon n^{\frac{1}{2}}\rightarrow0\), and \(\varepsilon\rightarrow0\). This completes the proof. □
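Since \([(b\sigma)_{+}]^{2}+[(b\sigma)_{-}]^{2}=(b\sigma)^{2}\), and since \(U_{1}\) and \(U_{2}\) are independent standard normal variables (coming from the independent Brownian motions \(\hat{B}'\) and \(\hat{B}''\)), the limit above is a centered Gaussian whose variance simplifies to

$$ \frac{\int^{1}_{0}\vert \sigma(X_{s}^{0},s)\vert ^{-4}\bigl(b\bigl(X^{0}_{s},s\bigr)\sigma\bigl(X_{s}^{0},s\bigr)\bigr)^{2}\,ds}{ (\int^{1}_{0}\sigma^{-2}(X_{s}^{0},s)b^{2}(X^{0}_{s},s)\,ds )^{2}}= \biggl( \int^{1}_{0}\sigma^{-2}\bigl(X_{s}^{0},s\bigr)b^{2}\bigl(X^{0}_{s},s\bigr)\,ds \biggr)^{-1}. $$

To illustrate the statement numerically, the following is a minimal Monte Carlo sketch (not part of the proof) for the reduced case \(\alpha\equiv0\), \(b(x,t)=x\), \(\sigma\equiv1\) with initial value \(x>0\), in which (1.1) becomes \(dX_{t}=rX_{t}\,dt+\varepsilon\,dB_{t}\), \(X^{0}_{t}=xe^{r_{0}t}\), and the limit law is \(N(0,(\int^{1}_{0}x^{2}e^{2r_{0}s}\,ds)^{-1})\). The estimator computed below is the least square estimator obtained by minimizing the squared Euler residuals of this reduced model, to which \(\hat{r}_{n,\varepsilon}\) should specialize; all names and parameter values are illustrative assumptions.

```python
import numpy as np

# Monte Carlo sketch of Theorem 4.1 for the reduced model
# dX_t = r0 X_t dt + eps dB_t  (alpha = 0, b(x,t) = x, sigma = 1).
# Parameter values below are illustrative assumptions only.
rng = np.random.default_rng(0)
r0, x0 = 0.5, 1.0                  # true drift parameter, initial value
eps, n, n_mc = 1e-3, 5000, 1000    # eps * n**0.5 ~ 0.07, n * eps = 5
dt = 1.0 / n

# Euler-Maruyama paths on the regular grid t_i = i/n (all paths at once)
dB = rng.normal(0.0, np.sqrt(dt), size=(n_mc, n))
X = np.empty((n_mc, n + 1))
X[:, 0] = x0
for i in range(n):
    X[:, i + 1] = X[:, i] + r0 * X[:, i] * dt + eps * dB[:, i]

# least square estimator: argmin over r of
#   sum_i (X_{t_i} - X_{t_{i-1}} - r X_{t_{i-1}} dt)^2
num = (X[:, :-1] * np.diff(X, axis=1)).sum(axis=1)
den = (X[:, :-1] ** 2).sum(axis=1) * dt
stats = (num / den - r0) / eps     # samples of eps^{-1}(r_hat - r0)

# limiting standard deviation predicted by Theorem 4.1:
# (int_0^1 x0^2 e^{2 r0 s} ds)^{-1/2}
limit_sd = (x0 ** 2 * (np.exp(2 * r0) - 1) / (2 * r0)) ** -0.5
print(f"empirical sd {stats.std():.4f} vs theoretical {limit_sd:.4f}")
```

The empirical standard deviation should approach the theoretical one precisely in the joint regime \(n\rightarrow\infty\), \(\varepsilon\rightarrow0\), \(\varepsilon n^{\frac{1}{2}}\rightarrow0\), and \(n\varepsilon\rightarrow\infty\) appearing in the theorem.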

5 Conclusion

In this paper, we have discussed the parameter estimation problem for mean-reversion type stochastic differential equations driven by Brownian motion. As mentioned in the introduction, SDEs of mean-reversion type involve a complicated term \(\alpha(X_{t},t,\varepsilon)\) in the drift coefficient, which makes it difficult to apply the existing methods directly. Our approach is to utilize the celebrated Girsanov transformation (i.e., the drift transformation) to simplify the drift coefficient, which changes the originally given probability measure P into a family of equivalent probability measures \(\{Q_{\varepsilon}\}_{\varepsilon>0}\). With this in hand, we derive an explicit least square estimator, prove its convergence to the true value with respect to the family \(\{Q_{\varepsilon}\}_{\varepsilon>0}\), and obtain its asymptotic distribution. The convergence discussed in this paper is with respect to a family of (equivalent) probability measures, which appears to be new, as the existing results in the literature concern convergence with respect to a single (i.e., the originally given) probability measure.
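Schematically, the drift transformation takes the following form (a sketch for orientation only; the precise assumptions and the rigorous definition of \(Q_{\varepsilon}\) are those given in the body of the paper): setting

$$ \hat{B}_{t}=B_{t}+ \int^{t}_{0}\frac{\alpha(X_{s},s,\varepsilon)b(X_{s},s)}{\varepsilon\sigma(X_{s},s)}\,ds,\qquad \frac{dQ_{\varepsilon}}{dP}=\exp \biggl(- \int^{1}_{0}\frac{\alpha(X_{s},s,\varepsilon)b(X_{s},s)}{\varepsilon\sigma(X_{s},s)}\,dB_{s}-\frac{1}{2} \int^{1}_{0}\biggl(\frac{\alpha(X_{s},s,\varepsilon)b(X_{s},s)}{\varepsilon\sigma(X_{s},s)}\biggr)^{2}\,ds \biggr), $$

the process \(\hat{B}\) is a Brownian motion under \(Q_{\varepsilon}\), and (1.1) takes the simplified form \(dX_{t}=rb(X_{t},t)\,dt+\varepsilon\sigma(X_{t},t)\,d\hat{B}_{t}\), on which the least square analysis is carried out.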

References

  1. Wu, J-L, Yang, W: Pricing CDO tranches in an intensity based model with the mean reversion approach. Math. Comput. Model. 52(5-6), 814-825 (2010)

  2. Øksendal, B: Stochastic Differential Equations: An Introduction with Applications. Springer, Berlin (2007)

  3. Prakasa Rao, BLS: Statistical Inference for Diffusion Type Processes. Arnold, London; Oxford University Press, New York (1999)

  4. Liptser, RS, Shiryaev, AN: Statistics of Random Processes. II: Applications, 2nd edn. Applications of Mathematics. Springer, Berlin (2001)

  5. Kutoyants, YA: Statistical Inference for Ergodic Diffusion Processes. Springer, London (2004)

  6. Dorogovcev, AJ: The consistency of an estimate of a parameter of a stochastic differential equation. Theory Probab. Math. Stat. 10, 73-82 (1976)

  7. Le Breton, A: On continuous and discrete sampling for parameter estimation in diffusion type processes. Math. Program. Stud. 5, 124-144 (1976)

  8. Kasonga, RA: The consistency of a nonlinear least squares estimator for diffusion processes. Stoch. Process. Appl. 30, 263-275 (1988)

  9. Prakasa Rao, BLS: Asymptotic theory for nonlinear least squares estimator for diffusion processes. Math. Operationsforsch. Stat., Ser. Stat. 14, 195-209 (1983)

  10. Shimizu, Y, Yoshida, N: Estimation of parameters for diffusion processes with jumps from discrete observations. Stat. Inference Stoch. Process. 9, 227-277 (2006)

  11. Shimizu, Y: M-Estimation for discretely observed ergodic diffusion processes with infinite jumps. Stat. Inference Stoch. Process. 9, 179-225 (2006)

  12. Kutoyants, YA: Parameter Estimation for Stochastic Process. Heldermann, Berlin (1984)

  13. Kutoyants, YA: Identification of Dynamical Systems with Small Noise. Kluwer Academic, Dordrecht (1994)

  14. Uchida, M, Yoshida, N: Information criteria for small diffusions via the theory of Malliavin-Watanabe. Stat. Inference Stoch. Process. 7, 35-67 (2004)

  15. Yoshida, N: Conditional expansions and their applications. Stoch. Process. Appl. 107, 53-81 (2003)

  16. Yoshida, N: Asymptotic expansion of maximum likelihood estimators for small diffusions via the theory of Malliavin-Watanabe. Probab. Theory Relat. Fields 92, 275-311 (1992)

  17. Sørensen, H: Parameter inference for diffusion processes observed at discrete points in time: a survey. Int. Stat. Rev. 72, 337-354 (2004)

  18. Long, H: Least squares estimator for discretely observed Ornstein-Uhlenbeck processes with small Lévy noises. Stat. Probab. Lett. 79, 2076-2085 (2009)

  19. Long, H: Parameter estimation for a class of stochastic differential equations driven by small stable noises from discrete observations. Acta Math. Sci. Ser. B 30(3), 645-663 (2010)

  20. Ma, C: A note on ‘Least squares estimator for discretely observed Ornstein-Uhlenbeck processes with small Lévy noise’. Stat. Probab. Lett. 80, 1528-1531 (2010)

  21. Hu, Y, Long, H: Least squares estimator for Ornstein-Uhlenbeck processes driven by α-stable Lévy motions. Stoch. Process. Appl. 119, 2465-2480 (2009)

  22. Hu, Y, Long, H: On the singularity of least squares estimator for mean-reverting α-stable motions. Acta Math. Sci. Ser. B 28(3), 599-608 (2009)

  23. Kunitomo, N, Takahashi, A: The asymptotic expansion approach to the valuation of interest rate contingent claims. Math. Finance 11, 117-151 (2001)

  24. Takahashi, A: An asymptotic expansion approach to pricing contingent claims. Asia-Pac. Financ. Mark. 6, 115-151 (1999)

  25. Takahashi, A, Yoshida, N: An asymptotic expansion scheme for optimal investment problems. Stat. Inference Stoch. Process. 7, 153-188 (2004)

  26. Uchida, M, Yoshida, N: Asymptotic expansion for small diffusions applied to option pricing. Stat. Inference Stoch. Process. 7, 189-223 (2004)

  27. Yoshida, N: Asymptotic expansion for statistics related to small diffusions. J. Japan Statist. Soc. 22, 139-159 (1992)

  28. Ikeda, N, Watanabe, S: Stochastic Differential Equations and Diffusion Processes. North-Holland, Amsterdam (1981)

  29. Mao, X: Stochastic Differential Equations and Their Applications. Horwood, Chichester (2008)

  30. Kallenberg, O: Some time change representations of stable integrals, via predictable transformations of local martingales. Stoch. Process. Appl. 40, 199-223 (1992)

Acknowledgements

This research was supported by the National Natural Science Foundation of China (Grant No. 71401124) and the Youth Foundation of Tianjin University of Commerce (Grant No. 150108). The authors would like to thank the referees and the associate editor for their useful comments and suggestions, which greatly improved the manuscript.

Author information

Corresponding author

Correspondence to Jingjie Li.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors have made equal contributions. All authors have read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Li, J., Wu, JL. On drift parameter estimation for mean-reversion type stochastic differential equations with discrete observations. Adv Differ Equ 2016, 90 (2016). https://doi.org/10.1186/s13662-016-0819-1

  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1186/s13662-016-0819-1

MSC

Keywords