On drift parameter estimation for mean-reversion type stochastic differential equations with discrete observations
Advances in Difference Equations volume 2016, Article number: 90 (2016)
Abstract
We are concerned with parameter estimation for mean-reversion type stochastic differential equations (SDEs) driven by Brownian motion. The equations, which involve a small dispersion parameter, are observed at discrete (regularly spaced) time instants. The least square method is used to derive an asymptotically consistent estimator, and we discuss its rate of convergence. The new feature of our study is that, owing to the mean-reversion type drift coefficient in the SDEs, we must use the Girsanov transformation to simplify the equations; consequently, the convergence of the least square estimator is with respect to a family of probability measures indexed by the dispersion parameter, whereas existing results in the literature deal with convergence with respect to a single given probability measure.
1 Introduction
Let \((\Omega,\mathcal{F},P)\) be a complete probability space endowed with a usual filtration \(\{\mathcal{F}_{t}\}_{t\geq0}\), i.e., \(\mathcal{F}_{s}\subset\mathcal{F}_{t}\subset\mathcal{F}\) for \(0\leq s\leq t\leq1\) and \(\mathcal{F}_{0}\) contains all null sets of P. We are interested in the unique solution, denoted by \(X=(X_{t})_{0\leq t\leq1}\), of the following stochastic differential equations (SDEs) of mean-reversion type:
with initial value \(X_{0}=x\in\mathbb{R}\), where r is a constant (which is supposed to be unknown), \(\varepsilon\in(0,1]\) is a parameter describing the smallness of dispersion, \(\alpha:(x,t,\varepsilon)\in\mathbb{R}\times[0,1]\times (0,1]\mapsto\alpha(x,t,\varepsilon)\in\mathbb{R}\) is twice differentiable with respect to x and differentiable with respect to t, both \(b:(x,t)\in\mathbb{R}\times[0,1]\mapsto b(x,t)\in\mathbb{R}\) and \(\sigma:(x,t)\in\mathbb{R}\times[0,1]\mapsto\sigma(x,t)\in \mathbb{R}\setminus\{0\}\) are continuous with respect to t, and \((B_{t})_{0\leq t\leq1}\) is a one-dimensional \(\{\mathcal {F}_{t}\}\)-Brownian motion defined on the filtered probability space \((\Omega,\mathcal{F},P,\{\mathcal{F}_{t}\}_{0\leq t\leq1})\).
The above mean-reversion type SDE (with \(\varepsilon=1\)) has been widely used in modeling price dynamics in mathematical studies of finance and economics. One feature of such modeling is that the function α in the drift coefficient satisfies a typical nonlinear parabolic partial differential equation, the Burgers equation; see, e.g., [1] and the references therein.
In our present paper, we are interested in the discrete-time system of (1.1) starting from the initial value \(X_{0}=x\). That is, for any fixed \(n\in\mathbb{N}\) and for any given partition \(0=t_{0}< t_{1}<\cdots<t_{i-1}<t_{i}<\cdots<t_{n}=1\), we define \(X_{t_{i}}\), \(i=1,2,\ldots,n\), via the Euler-Maruyama numerical scheme
for \(1\le i\le n\), where \(\Delta t_{i}=t_{i}-t_{i-1}\), \(i=1,\ldots, n\).
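The Euler-Maruyama recursion above can be sketched in code. The concrete coefficients below (a linear mean-reverting drift \(r(\mu-x)\) and a constant diffusion) are purely illustrative choices, not the ones of equation (1.1), whose displayed form is not reproduced here.

```python
import math
import random

def euler_maruyama(x0, drift, diffusion, n, seed=0):
    """Simulate one path of dX_t = drift(X_t, t) dt + diffusion(X_t, t) dB_t
    on [0, 1], using the regular grid t_i = i/n as in the paper."""
    rng = random.Random(seed)
    dt = 1.0 / n
    x = x0
    path = [x0]
    for i in range(1, n + 1):
        t = (i - 1) * dt
        dB = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment ~ N(0, dt)
        x = x + drift(x, t) * dt + diffusion(x, t) * dB
        path.append(x)
    return path

# Hypothetical mean-reversion example: dX_t = r(mu - X_t) dt + eps*sigma dB_t.
r, mu, eps, sigma = 2.0, 1.0, 0.1, 0.5
path = euler_maruyama(
    x0=0.0,
    drift=lambda x, t: r * (mu - x),
    diffusion=lambda x, t: eps * sigma,
    n=1000,
)
```

With these illustrative coefficients the path is pulled from \(x_{0}=0\) toward the mean level \(\mu=1\) at rate r, perturbed by small noise.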
Before we proceed further, let us give a brief overview of the topic considered in the present paper. Historically, the theory of SDEs has been very well developed since the seminal work of the great Japanese mathematician Kiyosi Itô in the mid 1940s. Since then, SDEs have had a profound impact on many areas of science and technology; cf., e.g., the excellent textbook [2] and the references therein. Nowadays, the theory of SDEs plays an important role in modeling uncertain and volatile systems arising in many diverse subjects such as economics and finance, biology, chemistry, ecology, and physics. A fundamental issue is to estimate certain parameters (i.e., deterministic quantities) appearing in such random models from observations (or from experimental data). Estimating the drift parameter of SDEs is one such important topic. In the past decades, many papers were devoted to drift parameter estimation for SDEs, with two main methods; see, for instance, Prakasa Rao [3], Liptser and Shiryaev [4], and Kutoyants [5] for the maximum likelihood estimator (MLE) method, and Dorogovcev [6], Le Breton [7], and Kasonga [8] for the least square estimator (LSE) method. It turns out that the MLE and the LSE are asymptotically equivalent, and the LSE enjoys the strong consistency property under some regularity conditions. Moreover, Prakasa Rao [9] studied the asymptotic distribution. Further, Shimizu and Yoshida [10] considered a multidimensional diffusion process with jumps whose jump term is driven by a compound Poisson process; taking \(\alpha(x,\theta)\) as the drift coefficient and \(b(x,\sigma)\) as the diffusion coefficient, they studied estimation of the parameter \((\theta,\sigma)\). Under certain assumptions, the consistency and asymptotic normality of an estimator were shown. Shimizu [11] considered a similar case and proposed an estimating function for a more complicated situation.
On the other hand, based on continuous-time observations or discrete-time observations, there are correspondingly two kinds of drift parameter estimates. Parameter estimation for diffusion processes based on continuous-time observations can be found in, e.g., Kutoyants [12], Kutoyants [13], Uchida and Yoshida [14], and Yoshida [15, 16]; in particular, Uchida and Yoshida [14] considered the evaluation problem of statistical models for diffusion processes driven by small noise. As for parameter estimation based on discrete-time observations, Sørensen [17] gave an excellent survey of existing estimation techniques for stationary and ergodic diffusion processes observed at discrete points in time. It is more realistic and interesting to consider parameter estimation for diffusion processes based on discrete observations, since actual data can only be obtained discretely. We begin our study from this point of view.
Recently, parameter estimation for mean-reversion type SDEs has received a lot of attention. Long [18] investigated parameter estimation for discretely observed one-dimensional Ornstein-Uhlenbeck (O-U) processes driven by small Lévy noise. There, the drift function \(b(x,\theta)=-\theta x\) is linear in both x and θ, while the driving Lévy process is \(L_{t}=aB_{t}+bZ_{t}\), where a and b are known constants, \(\{B_{t},t\geq0\}\) is a standard Brownian motion, and \(Z_{t}\) is an α-stable Lévy motion independent of \(\{B_{t},t\geq0\}\). In this framework, the author established the consistency and asymptotic normality of the proposed estimators. Long [19] investigated the parameter estimation problem for discrete observations driven by small Lévy noise and further discussed the case of a drift function \(b(x,\theta)=\theta b(x)\). Under some regularity conditions, the author obtained the consistency and rate of convergence of the least squares estimator as the small dispersion parameter \(\varepsilon\rightarrow0\) and \(n\rightarrow\infty\) simultaneously. In a similar framework, Ma [20] extended the results of Long [19] to the case where the driving noise is a general Lévy process. After that, Hu and Long [21] studied a least square estimator for Ornstein-Uhlenbeck processes driven by α-stable motions; their main focus was the strong consistency and asymptotic distribution of the least square estimator for generalized O-U processes. After obtaining the least square estimator, the authors proved strong consistency and further obtained the rate of convergence of the estimator and an asymptotic distribution. Hu and Long [22] extended the results of [21] to the case of the drift function \(b(x,\theta)=\alpha_{0}-\theta_{0} x\); when \(\alpha_{0}=0\), the mean-reverting α-stable motion becomes an O-U process. Under certain conditions, using the least square method, they showed the consistency property and obtained the asymptotic distribution.
There are many applications of small noise asymptotics in mathematical finance; see, e.g., Kunitomo and Takahashi [23], Long [19], Takahashi [24], Takahashi and Yoshida [25], Uchida and Yoshida [26], and Yoshida [27]. We particularly mention Kunitomo and Takahashi [23], who proposed a new methodology for the valuation problem of financial contingent claims when the underlying asset prices follow a general class of continuous Itô processes. Furthermore, the authors gave two interesting examples of valuation problems of average options for interest rates.
In the present paper, we consider a fairly general class of stochastic processes solving SDEs of mean-reversion type (1.1). We aim to investigate the least square estimator for the true value of r based on the (discrete) sampling data \((X_{t_{i}})^{n}_{i=1}\). Compared with the existing studies in the literature, a major difficulty in our case is the appearance of the term \(\alpha(X_{t},t,\varepsilon)\) in the drift coefficient of (1.1). The feature of our study is the use of the Girsanov transformation, which makes it possible to simplify the drift coefficient, and therefore the equation, under a family of probability measures indexed by the small dispersion parameter \(\varepsilon\in(0,1]\). As such, our consideration has a novel point: all the convergences involved are with respect to the ε-indexed family of probability measures, which, to our knowledge, has not been considered in the literature. For the Girsanov transformation for SDEs, the reader is referred to Øksendal [2]. In our case, we apply the Girsanov transformation to remove the term \(\alpha(X_{t},t,\varepsilon)\), which changes the original probability measure P into a family of (equivalent) probability measures \(Q_{\varepsilon}\). We then derive explicitly the least square estimators, which also depend on ε and are therefore indexed by ε, for the simplified SDE. From this we prove, under certain conditions, the convergence of the least square estimators under the family \(Q_{\varepsilon}\) to a limit which turns out to be the true value of the parameter r. We conclude by studying the convergence rate and derive the asymptotic distribution under a new equivalent probability measure.
The paper is organized as follows. In the next section, we present some preliminaries, starting with the discretization of our equation (1.1), which will be used in later derivations and proofs. In Section 3, we focus on showing the convergence of the estimators under high frequency and small dispersion simultaneously. Section 4 is devoted to establishing the relevant rate of convergence and the associated asymptotic distribution. The last section draws a conclusion.
2 Preliminaries and auxiliary results
In this section, we start with an introduction of the SDEs of mean-reversion type, and then we impose some assumptions on our SDE (1.1) to ensure the existence and uniqueness of the solution. Furthermore, we introduce the Girsanov transformation to simplify our equation (1.1). In order to obtain the consistency and asymptotics of our LSE \(\hat{r}_{n,\varepsilon}\), an explicit form of \(\hat{r}_{n,\varepsilon}\) is derived. In addition, we present some notation and preliminaries which will be needed in later sections of the paper. Throughout the paper, we use the notation ‘\(\rightarrow_{Q}\)’ to denote ‘convergence in probability Q’, ‘\(\rightarrow_{P}\)’ to denote ‘convergence in probability P’, and ‘⇒’ to denote ‘convergence in distribution’.
We assume the coefficients of (1.1) fulfill the following conditions throughout the paper:
- (C.1) ∃ a constant \(L>0\) such that \(|b(x,t)-b(y,t)|\leq L|x-y|\), for \(x,y\in\mathbb{R}\), \(t\in[0,1]\);
- (C.2) ∃ a constant \(L''>0\) such that, for \(x,y\in\mathbb{R}\), \(t\in[0,1]\), and \(\varepsilon\in(0,1]\), $$\bigl\vert \alpha(x,t,\varepsilon)b(x,t)-\alpha(y,t,\varepsilon)b(y,t)\bigr\vert \leq L''|x-y|; $$
- (C.3) ∃ a constant \(L'>0\) such that \(|\sigma(x,t)-\sigma(y,t)|\leq L'|x-y|\), for \(x,y\in\mathbb{R}\), \(t\in[0,1]\);
- (C.4) ∃ constants \(K'>0\), \(m>0\) such that \(\sigma^{-2}(x,t)\leq K'(1+|x|^{m})\), for \(x\in\mathbb{R}\), \(t\in[0,1]\).
The above conditions guarantee (see, e.g., [1]) the existence of a unique solution of (1.1) for a given initial data \(X_{0}=x\in\mathbb{R}\). The celebrated Girsanov transformation (also called the transformation of the drift) provides a very useful and efficient approach to solve (1.1). The transformation says the following.
Let \(u:\mathbb{R}\times[0,1]\to\mathbb{R}\) satisfy the following Novikov condition:
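The displayed condition is not reproduced above; in this setting it would presumably be the standard Novikov condition,

```latex
E\!\left[\exp\!\left(\frac{1}{2}\int_{0}^{1} u^{2}(X_{t},t)\,\mathrm{d}t\right)\right] < \infty .
```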
Then, by the Girsanov theorem (cf. e.g. Theorem IV in Section 4.1 of [28]),
is an \(\{\mathcal{F}_{t}\}_{t\in[0,1]}\)-martingale. Furthermore, for \(t\in[0,1]\), we define
or equivalently, in terms of the Radon-Nikodym derivative,
Then taking \(Q_{1}\) in particular, we see that
is an \(\{\mathcal{F}_{t}\}_{t\in[0,1]}\)-Brownian motion under the probability \(Q_{1}\). Moreover, the solution process \(X_{t}\) of (1.1) solves the following SDE:
In order to get rid of the term \(\alpha(X_{t},t,\varepsilon)\) for \(\varepsilon\in(0,1]\), we can specify u such that
and hence
and our equation (2.1) becomes
Thus, we set
so that
The Novikov condition for this \(u_{\varepsilon}\) is then the following:
which needs to be imposed on the coefficients of the original equation (1.1). Then we define
where \(M^{\varepsilon}_{t}\) is an \(\{\mathcal{F}_{t}\}_{t\in [0,1]}\)-martingale. For each \(\varepsilon\in(0,1]\), let \(Q_{\varepsilon}\) be a probability measure on \(\mathcal{F}_{1}\), satisfying
Then we define
where \(\hat{B}^{\varepsilon}_{t}\) is an \(\{\mathcal{F}_{t}\}_{t\in[0,1]}\)-Brownian motion with respect to the probability measure \(Q_{\varepsilon}\). Then we arrive at equation (2.2) for \(X_{t}\), that is,
which, from now on, we will focus on. We denote the true value of the parameter r by \(r_{0}\) and the least square estimator of r by r̂. As mentioned before, we focus on the investigation of the least square estimator for the true value \(r_{0}\) based on the (discrete) sampling data \((X_{t_{i}})^{n}_{i=1}\) obtained by the Euler-Maruyama numerical scheme for the Cauchy problem of equation (2.2) with initial value \(X_{0}=x\). For simplicity, we assume that the process \(X_{t}\) is observed at the regularly spaced time points \(\{t_{i}=\frac{i}{n}, i=0,1,2,\ldots,n\}\), where \(n\in\mathbb{N}\) is arbitrarily fixed. That is,
Let us start with the use of the least square method to get a consistent estimator. We first discretize (2.2) as follows:
where \(\Delta t_{i}=t_{i}-t_{i-1}=\frac{1}{n}\); \(\Delta\hat{B}^{\varepsilon}_{t_{i}}=\hat{B}^{\varepsilon}_{t_{i}}-\hat {B}^{\varepsilon}_{t_{i-1}}\) is the increment of Brownian motion. Then
Since \(\Delta\hat{B}^{\varepsilon}_{t_{i}}\) has a normal distribution with zero mean on the new probability space \((\Omega,\mathcal {F},Q_{\varepsilon})\), we consider the following contrast function:
In order to get the least square estimator \(\hat{r}_{n,\varepsilon}\), we let
from which we get the solution, denoted by \(\hat{r}_{n,\varepsilon}\), which is given as
With all these in hand, we now present several auxiliary results which are needed for our later considerations.
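The closed-form estimator above can be sketched in code. The model form is an assumption here (the displayed equations are not reproduced): supposing, hypothetically, that the simplified equation (2.2) has drift \(rb(x,t)\) and diffusion \(\varepsilon\sigma(x,t)\), minimizing the weighted quadratic contrast over r yields the weighted least squares formula implemented below.

```python
def lse_drift(X, b, sigma, n):
    """Least squares estimate of r, under the hypothetical model
    dX_t = r b(X_t, t) dt + eps * sigma(X_t, t) dB_t, obtained by
    minimizing the sigma^{-2}-weighted quadratic contrast over r."""
    dt = 1.0 / n
    num = 0.0
    den = 0.0
    for i in range(1, n + 1):
        x_prev, t_prev = X[i - 1], (i - 1) * dt
        w = b(x_prev, t_prev) / sigma(x_prev, t_prev) ** 2
        num += w * (X[i] - X[i - 1])       # weighted observed increments
        den += w * b(x_prev, t_prev) * dt  # weighted squared drift terms
    return num / den

# Noiseless sanity check (eps = 0): with b(x,t) = -x, sigma = 1, r0 = 2,
# the estimator recovers r0 from the Euler path (up to rounding).
n = 2000
X = [1.0]
for _ in range(n):
    X.append(X[-1] - 2.0 * X[-1] / n)
r_hat = lse_drift(X, b=lambda x, t: -x, sigma=lambda x, t: 1.0, n=n)
```

In the noiseless case the increments are exactly \(r_{0}b(X_{t_{i-1}},t_{i-1})\Delta t_{i}\), so the ratio returns \(r_{0}\) up to floating-point error.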
Definition 2.1
Let \(X^{0}_{t}\) be the solution of the following ordinary differential equation under the true value of the drift parameter:
where \(r_{0}\) is the true value of r.
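If the drift of the simplified equation (2.2) is \(r\,b(x,t)\) (an assumption, since the displayed equations are not reproduced here), the ordinary differential equation of Definition 2.1 would read:

```latex
\frac{\mathrm{d}X^{0}_{t}}{\mathrm{d}t} = r_{0}\, b\bigl(X^{0}_{t}, t\bigr),
\qquad X^{0}_{0} = x .
```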
The following lemma is a reformulation of Theorem 7.3 of [29] (see also [28] or [2]), which is nothing but the Burkholder-Davis-Gundy inequality in our setting.
Lemma 2.1
Let \(g\in\mathcal {L}^{2}(\mathbb{R}_{+};\mathbb{R})\). Define, for \(0\leq t\leq T\),
and
Then, for every \(p>0\), there exist universal positive constants \(c_{p}\), \(C_{p}\) (depending only on p), such that
for all \(t\geq0\). In particular, one may take \(c_{p}=(p/2)^{p}\), \(C_{p}=(32/p)^{p/2}\), if \(0< p<2\); \(c_{p}=1\), \(C_{p}=4\), if \(p=2\); \(c_{p}=(2p)^{-p/2}\), \(C_{p}=[p^{p+1}/2(p-1)^{p-1}]^{p/2}\), if \(p>2\).
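For \(p=2\) the lemma gives \(c_{2}=1\) and \(C_{2}=4\) (Doob's constant). As an illustrative sanity check, a Monte Carlo estimate of \(E[\sup_{s\leq t}|M_{s}|^{2}]\) for \(M=B\) (i.e., \(g\equiv1\), so \(\langle M\rangle_{t}=t\)) should fall between \(t\) and \(4t\); the sketch below does this for \(t=1\) with simulated Brownian paths.

```python
import math
import random

rng = random.Random(1)
n_steps, n_paths = 200, 5000
dt = 1.0 / n_steps
acc = 0.0
for _ in range(n_paths):
    b_val = 0.0   # Brownian path value
    m = 0.0       # running sup of |B_s| over the grid
    for _ in range(n_steps):
        b_val += rng.gauss(0.0, math.sqrt(dt))
        m = max(m, abs(b_val))
    acc += m * m
est = acc / n_paths  # Monte Carlo estimate of E[sup_{s<=1} |B_s|^2]
```

The lower bound holds even for the discretized supremum (the grid contains \(t=1\), so \(\sup_{i}|B_{t_{i}}|\geq|B_{1}|\)), and the discretized supremum is dominated by the continuous one, which Doob's inequality bounds by 4.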
Let us next show the following two lemmas which will be used in Section 3 and Section 4.
Lemma 2.2
Under conditions (C.1), (C.2), (C.3), we have
Proof
We have
From (2.2) we have
Together with (2.11) and (2.12), we obtain
By condition (C.1), we get
By the Gronwall inequality, we get
 □
Lemma 2.3
Under conditions (C.1), (C.2), (C.3), we have
Proof
By Lemma 2.2,
Let \(\eta>0\), by Lemma 2.1, the Markov inequality, and condition (C.4), we have
Since \(\sigma(x,t)\) satisfies the Lipschitz condition and is continuous with respect to t, \(\sigma(x,t)\) meets the following linear growth condition:
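The displayed bound is not reproduced above; given the constant K introduced just below, it presumably takes the standard form

```latex
\bigl|\sigma(x,t)\bigr| \leq K\bigl(1+|x|\bigr),
\qquad x\in\mathbb{R},\ t\in[0,1],
```

which indeed follows from the Lipschitz condition (C.3) together with continuity in t on the compact interval \([0,1]\).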
where \(K>0\) is a constant. Then we have
By the Hölder inequality, the Itô isometry, and the Gronwall inequality, we get \(E_{Q_{\varepsilon}}|X_{s}|^{2}\leq C\), where \(C>0\) is a constant. Then we have
The above equation implies that
 □
3 Consistency of the least square estimator
This section is devoted to proving the consistency of the least square estimator \(\hat{r}_{n,\varepsilon}\). In the following, we first derive an explicit decomposition of \(\hat{r}_{n,\varepsilon}\). Based on this, we will show the consistency of the LSE \(\hat{r}_{n,\varepsilon}\). Note from (2.2) that we have
Combining the above with (2.8) then yields the following decomposition:
Our first main result is the following theorem.
Theorem 3.1
We have \(\hat{r}_{n,\varepsilon}\rightarrow_{Q_{\varepsilon}} r_{0}\), as \(n\rightarrow\infty\) and \(\varepsilon\rightarrow0\) with \(\varepsilon n^{\frac{1}{2}}\rightarrow0\).
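The convergence in Theorem 3.1 can be illustrated numerically. The sketch below uses a hypothetical special case (drift \(r_{0}b(x,t)\) with \(b(x,t)=-x\) and \(\sigma\equiv1\), which is only one instance of the general setting) and checks that the least square estimate approaches \(r_{0}\) for large n and small ε.

```python
import math
import random

def estimate_r(r0, eps, n, seed):
    """Simulate dX_t = r0 * (-X_t) dt + eps dB_t on [0,1] by Euler-Maruyama
    (hypothetical special case) and return the least squares estimate of r0."""
    rng = random.Random(seed)
    dt = 1.0 / n
    x = 1.0
    num = den = 0.0
    for _ in range(n):
        dB = rng.gauss(0.0, math.sqrt(dt))
        x_new = x - r0 * x * dt + eps * dB
        num += (-x) * (x_new - x)  # b(x) times the observed increment
        den += x * x * dt          # b(x)^2 dt
        x = x_new
    return num / den

r_hat = estimate_r(r0=2.0, eps=0.01, n=10000, seed=42)
```

Here the estimation error is of order ε (up to a discretization bias of order 1/n), consistent with the normalization used in Section 4.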
To prove Theorem 3.1, the following lemmas are needed. We shall study the asymptotic behavior of \(\phi_{1}(n,\varepsilon)\), \(\phi_{2}(n,\varepsilon)\), and \(\phi_{3}(n,\varepsilon)\), respectively.
Lemma 3.1
Under conditions (C.1)-(C.4), we have
as both \(n\rightarrow\infty\) and \(\varepsilon\rightarrow0\).
Proof
We first have
For \(\phi_{1,1}(n,\varepsilon)\), according to the definition of the Riemann integral, we obtain, as \(n\rightarrow\infty\),
For \(\phi_{1,2}(n,\varepsilon)\), we have the following derivation:
By condition (C.1), we have
By Lemma 2.3, we get \(\phi_{1,2}^{1}(n,\varepsilon)\rightarrow_{Q_{\varepsilon}}0\) as \(\varepsilon\rightarrow0\). Next, we have
By Lemma 2.3, we have \(\phi_{1,2}^{2}(n,\varepsilon)\rightarrow_{Q_{\varepsilon}}0\) as both \(n\rightarrow\infty\) and \(\varepsilon\rightarrow0\). For \(\phi _{1,2}^{3}(n,\varepsilon)\), since
where \(m\geq1\). By condition (C.1), we have
By Lemma 2.3, we obtain \(\phi_{1,2}^{3}(n,\varepsilon)\rightarrow_{Q_{\varepsilon}}0\) as \(n\rightarrow\infty\) and \(\varepsilon\rightarrow0\). By the same argument, we get \(\phi_{1,2}^{4}(n,\varepsilon)\rightarrow_{Q_{\varepsilon}}0\) as both \(n\rightarrow\infty\) and \(\varepsilon\rightarrow0\). Thus we get \(\phi_{1,2}(n,\varepsilon)\rightarrow_{Q_{\varepsilon}}0\) as both \(n\rightarrow\infty\) and \(\varepsilon\rightarrow0\). For \(\phi _{1,3}(n,\varepsilon)\), by conditions (C.1), (C.4), and (3.2), we have
We then get \(\phi_{1,3}(n,\varepsilon)\rightarrow_{Q_{\varepsilon}}0\) as both \(n\rightarrow\infty\) and \(\varepsilon\rightarrow0\). □
Next, we turn to the study of the asymptotic behavior of \(\phi _{2}(n,\varepsilon)\).
Lemma 3.2
For both \(n\rightarrow\infty\) and \(\varepsilon\rightarrow0\),
Proof
From (2.12), we first show
By Gronwall’s inequality, we obtain
This then yields
By conditions (C.1) and (C.4), we have
For \(\phi_{2,1}(n,\varepsilon)\), by condition (C.1) and (3.2), we have
By Lemma 2.3, it is clear to see that \(\phi_{2,1}(n,\varepsilon)\rightarrow_{Q_{\varepsilon}}0\) as both \(n\rightarrow\infty\) and \(\varepsilon\rightarrow0\).
As for \(\phi_{2,2}(n,\varepsilon)\), we have
For \(\phi^{1}_{2,2}(n,\varepsilon)\), by the Markov inequality, the Hölder inequality, the Gronwall inequality, and Lemma 2.1, for any given \(\gamma>0\), we have
It then implies that \(\phi^{1}_{2,2}(n,\varepsilon)\rightarrow_{Q_{\varepsilon}}0\) as both \(n\rightarrow\infty\) and \(\varepsilon\rightarrow0\). With similar derivations, through using Lemma 2.3, we get furthermore \(\phi ^{2}_{2,2}(n,\varepsilon)\), \(\phi^{3}_{2,2}(n,\varepsilon)\), \(\phi^{4}_{2,2}(n,\varepsilon)\rightarrow_{Q_{\varepsilon}}0\) as both \(n\rightarrow\infty\) and \(\varepsilon\rightarrow0\). □
Finally, let us turn to the asymptotic behavior of \(\phi _{3}(n,\varepsilon)\). We have the following result.
Lemma 3.3
Under conditions (C.1)-(C.4), we have \(\phi_{3}(n,\varepsilon)\rightarrow_{Q_{\varepsilon}}0\) as both \(n\rightarrow\infty\) and \(\varepsilon\rightarrow0\) with \(\varepsilon n^{1/2}\rightarrow0\).
Proof
By conditions (C.1) and (C.4), and (3.2), we have
For \(\phi_{3,1}(n,\varepsilon)\), by the Markov inequality and Lemma 2.1, we have, for any given \(\gamma>0\),
The above further implies that \(\phi_{3,1}(n,\varepsilon)\rightarrow_{Q_{\varepsilon}}0\) as \(n\rightarrow\infty\) with \(\varepsilon n^{\frac{1}{2}}\rightarrow0\). Here, we remark that the new condition \(\varepsilon n^{\frac{1}{2}}\rightarrow0\) has been added to ensure convergence. In the same manner, we further obtain \(\phi_{3,2}(n,\varepsilon)\), \(\phi_{3,3}(n,\varepsilon)\), \(\phi_{3,4}(n,\varepsilon)\rightarrow_{Q_{\varepsilon}}0\) as \(n\rightarrow\infty\) with \(\varepsilon n^{\frac{1}{2}}\rightarrow0\). □
Now we are in a position to prove Theorem 3.1.
Proof of Theorem 3.1
Let \(n\rightarrow\infty\) and \(\varepsilon\rightarrow0\) with \(\varepsilon n^{\frac{1}{2}}\rightarrow0\). By using Lemma 3.1, Lemma 3.2, and Lemma 3.3, we have therefore
This completes the proof. □
4 Asymptotics of the least square estimator
Our aim in this section is to study the asymptotic behavior of the least square estimator. For the sake of simplicity, we assume that \(\alpha(x,t,\varepsilon)=\varepsilon\alpha(x,t)\), so that \(Q_{\varepsilon}=Q\) is independent of ε. The main result of this section is formulated as follows.
Theorem 4.1
There exist two independent Q-random variables \(U_{1}\) and \(U_{2}\) with the standard normal distribution \(N(0,1)\) such that
as \(n\rightarrow\infty\), \(\varepsilon\rightarrow0\), \(n\varepsilon\rightarrow\infty\), and \(\varepsilon n^{\frac{1}{2}}\rightarrow0\) simultaneously.
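The exact limit expression of Theorem 4.1 is not reproduced here, but one can still check numerically, in the same hypothetical special case as before (\(b(x,t)=-x\), \(\sigma\equiv1\)), that the normalized error \(\varepsilon^{-1}(\hat{r}_{n,\varepsilon}-r_{0})\) stabilizes at a mean-zero, O(1) scale across replications:

```python
import math
import random

def normalized_error(r0, eps, n, seed):
    """One replication: simulate the hypothetical model
    dX_t = r0 * (-X_t) dt + eps dB_t, estimate r0 by least squares,
    and return eps^{-1} * (r_hat - r0)."""
    rng = random.Random(seed)
    dt = 1.0 / n
    x = 1.0
    num = den = 0.0
    for _ in range(n):
        dB = rng.gauss(0.0, math.sqrt(dt))
        x_new = x - r0 * x * dt + eps * dB
        num += (-x) * (x_new - x)
        den += x * x * dt
        x = x_new
    return (num / den - r0) / eps

errors = [normalized_error(2.0, 0.01, 2000, seed) for seed in range(200)]
mean = sum(errors) / len(errors)
sd = math.sqrt(sum((e - mean) ** 2 for e in errors) / (len(errors) - 1))
```

The sample mean of the normalized errors is close to zero and their spread is of order one, in line with a Gaussian limit built from the variables \(U_{1}\), \(U_{2}\).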
To show Theorem 4.1, by Theorem 3.1, we can rewrite \(\varepsilon ^{-1}(\hat{r}_{n,\varepsilon}-r_{0})\) as follows:
Theorem 4.1 will be proved by verifying the following lemmas, in which we shall discuss the asymptotic behaviors of \(\Phi_{i}(n,\varepsilon)\), \(i=2,3\), respectively.
Lemma 4.1
Under conditions (C.1)-(C.4), we have \(\Phi_{2}({n,\varepsilon})\rightarrow_{Q}0\) as \(n\rightarrow\infty\), \(\varepsilon\rightarrow0\), and \(n\varepsilon\rightarrow\infty\) simultaneously.
Proof
From Lemma 3.3, by (3.3), we have
By (3.4), it is easy to see that \(\Phi_{2,1}(n,\varepsilon)\rightarrow_{Q}0\) as \(n\rightarrow\infty\), \(\varepsilon\rightarrow0\), and \(n\varepsilon\rightarrow\infty\). Note that the new condition \(n\varepsilon\rightarrow\infty\) is needed for convergence. Similarly, \(\Phi_{2,2}(n,\varepsilon)\rightarrow_{Q}0\) as both \(n\rightarrow \infty\) and \(\varepsilon\rightarrow0\). This then completes the proof of Lemma 4.1. □
Next, let us turn to the consideration of the asymptotic behavior of \(\Phi_{3}(n,\varepsilon)\).
Lemma 4.2
Under conditions (C.1)-(C.4), we have
as both \(n\rightarrow\infty\) and \(\varepsilon\rightarrow0\) with \(\varepsilon n^{\frac{1}{2}}\rightarrow0\).
Proof
We have
Let us set \(V(s)\) (which is deterministic) as follows:
Let \(V_{+}(s)\) and \(V_{-}(s)\) be the positive and negative parts of \(V(s)\), respectively. Then, by Theorem 4.1 of Kallenberg [30], there exist two independent Q-Brownian motions \(\hat{B}'\) and \(\hat{B}''\), with the same distribution as \(\hat{B}\), such that
Note that
and
Then we have
and
as \(n\rightarrow\infty\). Then
and
So we get, as \(n\rightarrow\infty\),
For \(\Phi_{3,2}(n,\varepsilon)\), by condition (C.3), the Markov inequality, and Lemma 2.1, Lemma 2.2, for any given \(\gamma>0\), we have
which tends to zero as both \(n\rightarrow\infty\) and \(\varepsilon\rightarrow0\) with \(\varepsilon n^{\frac{1}{2}}\rightarrow0\).
For \(\Phi_{3,3}(n,\varepsilon)\), we have
For \(\Phi^{1}_{3,3}(n,\varepsilon)\), by condition (C.1), we have
By the Markov inequality and Lemma 2.1, for any given \(\gamma>0\), we get
which tends to zero as \(n\rightarrow\infty\) and \(\varepsilon n^{\frac{1}{2}}\rightarrow0\).
For \(\Phi^{2}_{3,3}(n,\varepsilon)\), by condition (C.1), Lemma 2.3, and the same arguments as used in (4.1), we find
which converges to zero as both \(n\rightarrow\infty\) and \(\varepsilon \rightarrow0\) with \(\varepsilon n^{\frac{1}{2}}\rightarrow0\). Hence,
as \(n\rightarrow\infty\), \(\varepsilon\rightarrow0\), and \(\varepsilon n^{\frac{1}{2}}\rightarrow0\).
For \(\Phi_{3,4}(n,\varepsilon)\), by conditions (C.1)-(C.4), and (3.2), we have
From the method of the convergence of \(\Phi_{3,3}(n,\varepsilon)\), we get \(\Phi^{1}_{3,4}(n,\varepsilon)\rightarrow_{Q}0\) and \(\Phi^{3}_{3,4}(n, \varepsilon)\rightarrow_{Q}0\) as \(n\rightarrow\infty\), \(\varepsilon\rightarrow0\), and \(\varepsilon n^{\frac{1}{2}}\rightarrow0\).
For \(\Phi^{2}_{3,4}(n,\varepsilon)\), we have
which converges to zero in probability, since \(m\geq1\) and \(\sup_{0\leq t\leq1}|X_{t}-X^{0}_{t}|^{m}\) converges to zero in probability as \(\varepsilon\rightarrow0\). Hence, \(\Phi_{3,4}(n,\varepsilon)\rightarrow_{Q}0\) as \(n\rightarrow\infty\), \(\varepsilon\rightarrow0\), and \(\varepsilon n^{\frac{1}{2}}\rightarrow0\).
For \(\Phi_{3,5}(n,\varepsilon)\), for some \(G>0\) we have
which converges to zero in probability Q, since \(\sup_{0\leq t\leq1}|X_{t}-X^{0}_{t}|\rightarrow_{Q}0\) as \(\varepsilon\rightarrow0\) by Lemma 2.3, and
as \(n\rightarrow\infty\), \(\varepsilon\rightarrow0\), and \(\varepsilon n^{\frac{1}{2}}\rightarrow0\) by the same arguments for the convergence of \(\Phi_{3,4}(n,\varepsilon)\). □
Finally, we are ready to prove Theorem 4.1.
Proof of Theorem 4.1
By using Lemma 3.1, Lemma 4.1, and Lemma 4.2, we have
as \(n\rightarrow\infty\), \(n\varepsilon\rightarrow\infty\), \(\varepsilon n^{\frac{1}{2}}\rightarrow0\), and \(\varepsilon\rightarrow0\). This completes the proof. □
5 Conclusion
In this paper, we discussed the parameter estimation problem for mean-reversion type stochastic differential equations driven by Brownian motion. As mentioned in the introduction, the SDEs of mean-reversion type involve a complex term \(\alpha(X_{t},t,\varepsilon)\) in the drift coefficient, which makes it difficult to apply the existing methods directly. What we have done here is to utilize the celebrated Girsanov transformation (i.e., the drift transformation) to simplify the drift coefficient, which changes the originally given probability measure P into a family of equivalent probability measures \(\{Q_{\varepsilon}\}_{\varepsilon>0}\). With this in hand, we derived an explicit least square estimator, proved its convergence to the true value with respect to the family \(\{Q_{\varepsilon}\}_{\varepsilon>0}\), and obtained the asymptotic distribution of the least square estimator. The convergence discussed in this paper is with respect to a family of (equivalent) probability measures, which seems new, as all the convergences appearing in the literature are with respect to a single (i.e., the originally given) probability measure.
References
Wu, J-L, Yang, W: Pricing CDO tranches in an intensity based model with the mean reversion approach. Math. Comput. Model. 52(5-6), 814-825 (2010)
Øksendal, B: Stochastic Differential Equations: An Introduction with Applications. Springer, Berlin (2007)
Prakasa Rao, BLS: Statistical Inference for Diffusion Type Processes. Arnold, London; Oxford University Press, New York (1999)
Liptser, RS, Shiryaev, AN: Statistics of Random Processes. II: Applications, 2nd edn. Applications of Mathematics. Springer, Berlin (2001)
Kutoyants, YA: Statistical Inference for Ergodic Diffusion Processes. Springer, London (2004)
Dorogovcev, AJ: The consistency of an estimate of a parameter of a stochastic differential equation. Theory Probab. Math. Stat. 10, 73-82 (1976)
Le Breton, A: On continuous and discrete sampling for parameter estimation in diffusion type processes. Math. Program. Stud. 5, 124-144 (1976)
Kasonga, RA: The consistency of a nonlinear least squares estimator for diffusion processes. Stoch. Process. Appl. 30, 263-275 (1988)
Prakasa Rao, BLS: Asymptotic theory for nonlinear least squares estimator for diffusion processes. Math. Operationsforsch. Stat., Ser. Stat. 14, 195-209 (1983)
Shimizu, Y, Yoshida, N: Estimation of parameters for diffusion processes with jumps from discrete observations. Stat. Inference Stoch. Process. 9, 227-277 (2006)
Shimizu, Y: M-Estimation for discretely observed ergodic diffusion processes with infinite jumps. Stat. Inference Stoch. Process. 9, 179-225 (2006)
Kutoyants, YA: Parameter Estimation for Stochastic Process. Heldermann, Berlin (1984)
Kutoyants, YA: Identification of Dynamical Systems with Small Noise. Kluwer Academic, Dordrecht (1994)
Uchida, M, Yoshida, N: Information criteria for small diffusions via the theory of Malliavin-Watanabe. Stat. Inference Stoch. Process. 7, 35-67 (2004)
Yoshida, N: Conditional expansions and their applications. Stoch. Process. Appl. 107, 53-81 (2003)
Yoshida, N: Asymptotic expansion of maximum likelihood estimators for small diffusions via the theory of Malliavin-Watanabe. Probab. Theory Relat. Fields 92, 275-311 (1992)
Sørensen, H: Parameter inference for diffusion processes observed at discrete points in time: a survey. Int. Stat. Rev. 72, 337-354 (2004)
Long, H: Least squares estimator for discretely observed Ornstein-Uhlenbeck processes with small Lévy noises. Stat. Probab. Lett. 79, 2076-2085 (2009)
Long, H: Parameter estimation for a class of stochastic differential equations driven by small stable noises from discrete observations. Acta Math. Sci. Ser. B 30(3), 645-663 (2010)
Ma, C: A note on ‘Least squares estimator for discretely observed Ornstein-Uhlenbeck processes with small Lévy noise’. Stat. Probab. Lett. 80, 1528-1531 (2010)
Hu, Y, Long, H: Least squares estimator for Ornstein-Uhlenbeck processes driven by α-stable Lévy motions. Stoch. Process. Appl. 119, 2465-2480 (2009)
Hu, Y, Long, H: On the singularity of least squares estimator for mean-reverting α-stable motions. Acta Math. Sci. Ser. B 28(3), 599-608 (2009)
Kunitomo, N, Takahashi, A: The asymptotic expansion approach to the valuation of interest rate contingent claims. Math. Finance 11, 117-151 (2001)
Takahashi, A: An asymptotic expansion approach to pricing contingent claims. Asia-Pac. Financ. Mark. 6, 115-151 (1999)
Takahashi, A, Yoshida, N: An asymptotic expansion scheme for optimal investment problems. Stat. Inference Stoch. Process. 7, 153-188 (2004)
Uchida, M, Yoshida, N: Asymptotic expansion for small diffusions applied to option pricing. Stat. Inference Stoch. Process. 7, 189-223 (2004)
Yoshida, N: Asymptotic expansion for statistics related to small diffusions. J. Japan Statist. Soc. 22, 139-159 (1992)
Ikeda, N, Watanabe, S: Stochastic Differential Equations and Diffusion Processes. North-Holland, Amsterdam (1981)
Mao, X: Stochastic Differential Equations and Their Applications. Horwood, Chichester (2008)
Kallenberg, O: Some time change representations of stable integrals, via predictable transformations of local martingales. Stoch. Process. Appl. 40, 199-223 (1992)
Acknowledgements
This research was supported by the National Natural Science Foundation of China (Grant No. 71401124) and the Youth Foundation of Tianjin University of Commerce (Grant No. 150108). The authors would like to thank the referees and the associate editor for their useful comments and suggestions, which greatly improved the manuscript.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors have made equal contributions. All authors have read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Li, J., Wu, JL. On drift parameter estimation for mean-reversion type stochastic differential equations with discrete observations. Adv Differ Equ 2016, 90 (2016). https://doi.org/10.1186/s13662-016-0819-1