Maximum likelihood estimation for stochastic Lotka–Volterra model with jumps

Abstract

In this paper, we consider the stochastic Lotka–Volterra model with additive jump noise. We show several desirable properties of the solution, namely the existence and uniqueness of a positive strong solution, a unique stationary distribution, and exponential ergodicity. After that, we investigate the maximum likelihood estimation of the drift coefficients based on continuous-time observations. The likelihood function and an explicit estimator are derived using semimartingale theory. In addition, consistency and asymptotic normality of the estimator are proved. Finally, computer simulations are presented to illustrate our results.

Introduction

The following famous model of population dynamics

$$ dX_{t} = X_{t}(a-bX_{t}) \,dt $$

is often used to model the population growth of a single species, where \(X_{t}\) represents its population size at time t, \(a>0\) is the growth rate, and \(b>0\) represents the effect of intraspecific interaction. This equation is also known as the Lotka–Volterra model or the logistic equation. In this paper, we consider the one-dimensional stochastic Lotka–Volterra equation with both multiplicative Brownian noise and additive jump noise, that is,

$$\begin{aligned} \textstyle\begin{cases} dX_{t} = X_{t} (a-bX_{t}) \,dt + \sigma X_{t} \,dW_{t} + r \,dJ_{t}, & \text{for all $t \geq0$}, \\ X_{0} =x_{0}, & \text{a.s.,} \end{cases}\displaystyle \end{aligned}$$
(1.1)

where \(x_{0}\) is a positive initial value, a, b, σ, \(r \in (0, \infty)\), \((W_{t})_{t \geq0}\) is a one-dimensional Brownian motion (also known as a Wiener process), and \((J_{t})_{t \geq0}\) is a one-dimensional subordinator independent of \((W_{t})_{t \geq0}\) (the precise characterization is given in Sect. 2 below). Here “a.s.” abbreviates “almost surely”. Suppose that σ and r are known parameters, while a and b are unknown. We focus on the maximum likelihood estimation (MLE) of the parameter \(\theta=(a,b)' \in{\mathbb {R}}_{++}^{2}\) based on continuous-time observations of the path \(X^{T}:=(X_{t})_{0 \leq t \leq T}\). Hereafter, ′ denotes the transpose of a vector or matrix.
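For intuition, a path of equation (1.1) can be simulated by an Euler-type scheme. The sketch below is ours, not from the paper, and it makes extra assumptions that the model leaves open: the subordinator is taken to be compound Poisson with exponential jump sizes, and the numerical parameters (step size, rates) are purely illustrative.

```python
import numpy as np

def simulate_lv_jump(a, b, sigma, r, x0, T, n, jump_rate, jump_mean, rng):
    """Euler-type scheme for dX = X(a - bX) dt + sigma X dW + r dJ,
    where J is a compound Poisson subordinator with Exp(jump_mean) jump sizes.
    Returns the time grid, the simulated path X, and the subordinator path J."""
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    X = np.empty(n + 1)
    J = np.zeros(n + 1)
    X[0] = x0
    for i in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))
        # number of jumps in (t_i, t_{i+1}] and their (positive) sizes
        k = rng.poisson(jump_rate * dt)
        dJ = rng.exponential(jump_mean, size=k).sum() if k > 0 else 0.0
        J[i + 1] = J[i] + dJ
        X[i + 1] = X[i] + X[i] * (a - b * X[i]) * dt + sigma * X[i] * dW + r * dJ
        X[i + 1] = max(X[i + 1], 1e-12)  # crude positivity safeguard for the scheme
    return t, X, J
```

The clipping step is a numerical safeguard only: the exact solution is positive (Proposition 2.1 below), but a discretized scheme can undershoot zero.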

The stochastic Lotka–Volterra equation, being a reasonable and popular approach to modeling population dynamics perturbed by a random environment, has recently been studied by many authors, both from a mathematical perspective and in the context of real biological dynamics. For mathematical studies, see, for example, [1–8]. In particular, Mao et al. [3] investigated a multi-dimensional stochastic Lotka–Volterra system driven by a one-dimensional standard Brownian motion. They revealed that environmental noise can suppress population explosion. Later, Mao [4] proved that the stationary distribution has a finite second moment under Brownian noise, which is very important in applications.

The other line of research concerns stochastic dynamics with Lévy noise, which can describe sudden environmental shocks, e.g., earthquakes, hurricanes, and epidemics. Bao et al. [5] considered a competitive Lotka–Volterra population model with Lévy jumps; see also Bao et al. [6]. Recently, Zhang et al. [8] considered a stochastic Lotka–Volterra model driven by α-stable noise and obtained a unique positive strong solution of their model. Moreover, they proved stationarity and exponential ergodicity under relatively small noise, and extinction under sufficiently large noise.

We note that our equation (1.1) is not covered by [5, 6, 8]. The proofs of positivity in [5, 8] depend heavily on the explicit solution of the corresponding equation, and this method does not work for our equation (1.1). Instead, we prove that the hitting time of the point 0 by the solution is almost surely infinite. We also prove that stationarity and exponential ergodicity do not depend on the weight of the noise, in contrast to the conditions needed in [5, 6, 8]. From this point of view, equation (1.1) is of independent interest.

On the other hand, the study of the influence of noise is an active topic in the context of real ecosystems. The influence of noise is of paramount importance in open systems, and many noise-induced phenomena have been found, such as stochastic resonance, noise-enhanced stability, and noise-delayed extinction. For more details, see, for example, [9–11]. However, in this paper, we shall mainly study equation (1.1) from a mathematical point of view.

We notice that there is a large body of work in economics and finance on the MLE of jump-diffusion models, where the data is usually observed discretely. In this case, transition densities play an important role, but their closed-form expressions cannot be obtained in general, so conducting MLE is computationally expensive. To overcome this difficulty, a popular method is to approximate transition densities by closed-form expansions. For more on this topic, we refer the reader to [12–14] and the references therein. The situation in this paper is different: we focus on the MLE of equation (1.1) based on continuous-time observations. The main difficulty is to verify the existence of the likelihood function; once that is done, the MLE can be obtained explicitly.

Our motivation also comes from the problem of parameter estimation for jump-type CIR (Cox–Ingersoll–Ross) process as in Barczy et al. [15] (for related topics, see, e.g., Li et al. [16]). The authors considered the following jump-type CIR process:

$$ \textstyle\begin{cases} dX_{t} = (a-bX_{t}) \,dt + \sqrt{X_{t}} \,dW_{t} + dJ_{t}, & \text{for all $t \geq0$}, \\ X_{0} =x_{0}, & \text{a.s.,} \end{cases} $$

where \((W_{t})_{t \geq0}\) and \((J_{t})_{t \geq0}\) are the same as in equation (1.1). By using the Laplace transform of the process \(\int_{0}^{t} X_{s} \,ds\), \(t \geq0\), they proved the asymptotic properties of the MLE of b in several cases. As the authors pointed out, the asymptotic property of the MLE of a, or of the joint MLE of \((a,b)\), remains open because of the lack of an explicit Laplace transform of \(\int_{0}^{t} 1/X_{s} \,ds\), \(t \geq0\). By studying equation (1.1), we hope to shed some light on this question. For other topics in statistical inference for stochastic processes, the reader may refer to the excellent monograph [17].

The rest of this paper is organized as follows. In Sect. 2, we first prove the existence of a unique positive strong solution of equation (1.1). After that, we derive the unique stationary distribution and the exponential ergodicity of the solution. In Sect. 3, the joint MLE of the parameter \(\theta=(a,b)'\) is derived from semimartingale theory. We prove strong consistency and asymptotic normality in Sect. 4. In Sect. 5, we illustrate our results by computer simulations.

Preliminaries

Let \((\Omega,{\mathcal {F}}, ({\mathcal {F}}_{t})_{t\geq0}, {\mathbb {P}})\) be a filtered probability space with the filtration \(({\mathcal {F}}_{t})_{t\geq0}\) satisfying the usual conditions. Equation (1.1) will be considered in this probability space. Let \((W_{t})_{t\geq0}\) in equation (1.1) be a Wiener process. We assume that the jump process \((J_{t})_{t \geq0}\) in equation (1.1) is a subordinator with zero drift. That is, its characteristic function takes the form

$$ {\mathbb {E}}\bigl(e^{i u J_{t}}\bigr)= \exp\biggl( t \int_{0}^{\infty}\bigl(e^{iuz}-1\bigr) \nu(dz) \biggr), $$
(2.1)

where ν is the Lévy measure concentrated on \((0,\infty)\) satisfying

$$ \int_{0}^{\infty}(z\wedge1) \nu(dz) < \infty. $$
(2.2)

We recall that a subordinator is an increasing Lévy process. For example, the Poisson process, α-stable subordinators, and gamma subordinators are all of this type; for more details, see, e.g., Applebaum [18] p. 52–54. Moreover, we suppose that \((W_{t})_{t\geq0}\) and \((J_{t})_{t \geq0}\) in (1.1) are independent. Let \(N(dt,dz)\) be the random measure associated with the subordinator \((J_{t})_{t \in{\mathbb {R}}_{+}}\), that is,

$$ N(dt,dz):=\sum_{u \geq0} 1_{\{\Delta J_{u}(\omega) \neq0\}} \delta _{(u,\Delta J_{u}(\omega))}(dt,dz), $$

where \(\delta_{p}\) is the Dirac measure at point p. Let \(\tilde {N}(dt,dz):= N(dt,dz)-\nu(dz)\,dt\). Then, for \(t \in{\mathbb {R}}_{+} \), we can write equation (1.1) as

$$\begin{aligned} X_{t} =& x_{0} + \int_{0}^{t} (a-bX_{s}) X_{s} \,ds + \int_{0}^{t} \int_{0}^{\infty}rz \nu(dz)\,ds \\ &{}+ \int_{0}^{t} \sigma X_{s} \,dW_{s} + \int_{0}^{t} \int_{0}^{\infty}r z \tilde{N}(ds,dz). \end{aligned}$$
(2.3)

The following assumptions are needed.

(A1):

a, b, σ, \(r \in(0, \infty)\) and \(\int _{0}^{\infty}z \nu(dz) < \infty\).

(A2):

\(\int_{0}^{\infty}z^{2} \nu(dz) < \infty\).

Throughout this paper, we write \({\mathbb {R}}\), \({\mathbb {R}}_{+}\), and \({\mathbb {R}}_{++}\) for the real numbers, the nonnegative real numbers, and the positive real numbers, respectively. The value of the constant C, with or without subscript, may vary from line to line. First, we prove that there is a unique positive strong solution of equation (2.3).
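To make assumptions (A1) and (A2) concrete, one may check them against the standard subordinators mentioned above. The following side computation is ours (Lévy measures taken in their standard parametrizations) and is not part of the original argument:

```latex
% Lévy measures of the standard subordinators:
%   Poisson(\lambda):            \nu = \lambda\,\delta_{1}
%   gamma(\gamma,\lambda):       \nu(dz) = \gamma z^{-1} e^{-\lambda z}\,dz
%   \alpha-stable, 0<\alpha<1:   \nu(dz) = c\, z^{-1-\alpha}\,dz
\int_{0}^{\infty} z\,\nu(dz) =
\begin{cases}
\lambda < \infty, & \text{Poisson: (A1) and (A2) hold},\\[2pt]
\gamma/\lambda < \infty, & \text{gamma: (A1) and (A2) hold},\\[2pt]
c\displaystyle\int_{0}^{\infty} z^{-\alpha}\,dz = \infty, & \text{$\alpha$-stable: (A1) fails}.
\end{cases}
```

In particular, α-stable subordinators are increasing Lévy processes but do not satisfy (A1), since their Lévy measure has a heavy tail.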

Proposition 2.1

Assume that (A1) holds. Then, for any \(x_{0} \in{\mathbb {R}}_{++}\), there is a unique strong solution \((X_{t})_{t\in{\mathbb {R}}_{+}}\) of equation (2.3) such that \({\mathbb {P}}(X_{0}=x_{0})=1\) and \({\mathbb {P}}(X_{t} \in{\mathbb {R}}_{++} \textit{ for all } t\in{\mathbb {R}}_{+})=1\).

Proof

Since the coefficients of equation (2.3) are locally Lipschitz continuous, for the given initial value \(x_{0} \in{\mathbb {R}}_{++}\), there is a unique solution \(X_{t}\) on \([0, \tau_{e})\), where \(\tau_{e}\) is the explosion time. In the following, we shall prove that the solution is nonexplosive and positive. The proof below is divided into two steps.

Step 1: We show that the solution of (2.3) is nonexplosive. That is, \(\tau_{e} = \infty\) a.s. To this end, let \(k_{0}\) be a sufficiently large real number such that \(x_{0} < k_{0}\). For each integer \(k > k_{0}\), define the stopping time

$$ \tau_{k} := \inf\bigl\{ t \in[0, \tau_{e}): X_{t} > k\bigr\} , $$

and we set \(\inf\{\emptyset\}=\infty\) by convention. It is easy to see that \(\tau_{k}\) is nondecreasing in k. Let \(\tau_{\infty}= \lim_{k \to\infty} \tau_{k}\); then \(\tau_{\infty}\leq\tau_{e}\) a.s. If we can prove that \(\tau_{\infty}= \infty\) a.s., then \(\tau_{e} = \infty\) a.s. Let \(T>0\) be arbitrary. For any \(0 \leq t \leq T\), we have

$$\begin{aligned} X_{t\wedge\tau_{k}} =& x_{0} + \int_{0}^{t\wedge\tau_{k}} (a-bX_{s}) X_{s} \,ds + \int_{0}^{t\wedge\tau_{k}} \int_{0}^{\infty}r z \nu(dz)\,ds \\ &{}+ \int_{0}^{t\wedge\tau_{k}} \sigma X_{s} \,dW_{s} + \int_{0}^{t\wedge \tau_{k}} \int_{0}^{\infty}r z \tilde{N}(ds,dz). \end{aligned}$$

Taking the expectation, we get

$$\begin{aligned} {\mathbb {E}}(X_{t\wedge\tau_{k}}) =& x_{0} + \int_{0}^{t\wedge\tau_{k}} \int_{0}^{\infty}r z \nu(dz)\,ds + {\mathbb {E}}\biggl( \int_{0}^{t\wedge\tau_{k}} (a-bX_{s}) X_{s} \,ds \biggr) \\ \leq& x_{0} + r T \int_{0}^{\infty}z \nu(dz) + a \int_{0}^{t} {\mathbb {E}}(X_{s\wedge\tau_{k}} ) \,ds. \end{aligned}$$

By Gronwall’s inequality,

$$ {\mathbb {E}}(X_{T\wedge\tau_{k}}) \leq\biggl( x_{0} + r T \int_{0}^{\infty}z \nu (dz) \biggr)e^{aT} \leq C. $$
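Here Gronwall’s inequality is applied in its integral form (a standard statement, recalled for completeness):

```latex
% Gronwall's inequality (integral form): if f \ge 0 and
% f(t) \le c + a \int_0^t f(s)\,ds for all t \in [0,T], then
f(t) \le c\, e^{a t}, \qquad 0 \le t \le T.
% Above it is applied with f(t) = {\mathbb E}(X_{t \wedge \tau_k}) and
% c = x_0 + r T \int_0^\infty z\,\nu(dz).
```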

On the other hand, since \(X_{T\wedge\tau_{k}} \geq k\) on the event \(\{\tau_{k} \leq T\}\),

$$ k\,{\mathbb {P}}(\tau_{k} \leq T) \leq{\mathbb {E}}(X_{T\wedge\tau_{k}} 1_{\{ \tau_{k} \leq T\}}) \leq{\mathbb {E}}(X_{T\wedge\tau_{k}}), $$

therefore

$$ {\mathbb {P}}(\tau_{k} \leq T) k \leq C. $$

Letting \(k \to\infty\) yields

$$ {\mathbb {P}}(\tau_{\infty}\leq T ) =0. $$

Since T is arbitrary, we get

$$ {\mathbb {P}}(\tau_{\infty}= \infty)=1. $$

Step 2: We show that the solution is positive. Let \(\tilde{\tau}_{0} := \inf\{t \in[0, \infty): X_{t}=0\}\). Let \(\tilde{k}_{0}\) be a large enough number such that \(x_{0}> 1/\tilde{k}_{0}\). For each integer \(k > \tilde{k}_{0}\), define the stopping time

$$ \tilde{\tau}_{k} := \inf\bigl\{ t \in[0, \infty): X_{t} < 1/k \bigr\} . $$

Similarly, if we can prove that \(\tilde{\tau}_{\infty}:=\lim_{k \to\infty } \tilde{\tau}_{k} = \infty\) a.s., then \(\tilde{\tau}_{0} = \infty\) a.s., which implies that the solution is positive. Let \(g(x) = x - \log x\). For any \(0 \leq t \leq T\), by Itô’s formula,

$$ g(X_{t\wedge\tilde{\tau}_{k}}) = g(x_{0}) + \int_{0}^{t\wedge\tilde {\tau}_{k}} \mathcal{A} g(X_{s}) \,ds + M_{t\wedge\tilde{\tau}_{k}}, $$

where

$$\begin{aligned} \mathcal{A}g(x) =& -b x^{2} + (a+b)x - a + \sigma^{2}/2 \\ &{}+ r \int_{0}^{\infty}z \nu(dz) + \int_{0}^{\infty}\bigl[\log x - \log(x+ rz) \bigr] \nu(dz) \end{aligned}$$

and \(M_{t\wedge\tilde{\tau}_{k}}\) is a local martingale defined by

$$\begin{aligned} M_{t\wedge\tilde{\tau}_{k}} =& \int_{0}^{t\wedge\tilde{\tau}_{k}} \int_{0}^{\infty}\bigl[ r z + \log X_{s-} - \log(X_{s-}+ rz) {\bigr]}\tilde{N}(ds,dz) \\ &{}+ \int_{0}^{t\wedge\tilde{\tau}_{k}} \sigma(X_{s} -1) \,dW_{s}. \end{aligned}$$

Note that \((M_{t\wedge\tilde{\tau}_{k}})_{t\in{\mathbb {R}}_{+}}\) is a true martingale and \(\int_{0}^{\infty}[\log x - \log(x+ rz) ] \nu (dz) \leq0\). Therefore, there exists a positive number C such that \(\mathcal{A}g(x) \leq C\) for all \(x\in{\mathbb {R}}_{++}\); it follows that

$$ {\mathbb {E}}\bigl(g(X_{T\wedge\tilde{\tau}_{k}})\bigr) \leq g(x_{0}) + C. $$

On the other hand,

$$ {\mathbb {E}}\bigl(g(X_{T\wedge\tilde{\tau}_{k}})\bigr) \geq{\mathbb {P}}(\tilde { \tau}_{k} \leq T) (1/k+\log k). $$

By taking \(k \to\infty\) and \(T \to\infty\), we get

$$ {\mathbb {P}}(\tilde{\tau}_{\infty}= \infty) =1. $$

The proof is complete. □

Remark 2.2

From the study of real ecosystems (see, e.g., [19]), it is known that the effects of random fluctuations are proportional to the population size in the presence of multiplicative noise, while they are no longer proportional to the population size in the presence of additive noise. In the latter case, strongly negative values of the noise can drive the population size negative. Our equation in fact involves two types of noise: multiplicative Brownian noise and additive positive jump noise. Owing to the positivity of the additive noise, our equation has a unique positive solution. Therefore, the phenomena described above do not contradict our result.

In the following, our aim is to show that under assumption (A1) equation (2.3) has a unique stationary distribution. We need the following lemmas.

Lemma 2.3

Let assumption (A1) hold. Then there exists a constant \(C>0\) such that

$$ \sup_{t\in{\mathbb {R}}_{+}} {\mathbb {E}}(X_{t}) \leq C. $$

Proof

Applying Itô’s formula, we have

$$ {\mathbb {E}}\bigl(e^{t} X_{t}\bigr) = x_{0} + { \mathbb {E}} \int_{0}^{t} e^{s}\biggl(X_{s} + a X_{s} - b X_{s}^{2} + r \int_{0}^{\infty}z \nu(dz)\biggr) \,ds. $$

It is easy to see that \((a+1)x-bx^{2}+ r \int_{0}^{\infty}z \nu(dz)\) has an upper bound for all \(x \in{\mathbb {R}}_{+}\). Hence

$$ e^{t} {\mathbb {E}}( X_{t}) \leq x_{0} + C \bigl(e^{t}-1\bigr), $$

which implies the desired result. □

Lemma 2.4

Under assumption (A1), equation (2.3) has the Feller property.

Proof

The proof is essentially the same as the proof of Lemma 3.2 of [7], so we omit the proof. □

By a standard argument, we can obtain the following result from Lemma 2.3 and Lemma 2.4 (see, e.g., [7, 20]).

Proposition 2.5

Under assumption (A1), equation (2.3) has a unique stationary distribution.

Proposition 2.6

Under assumption (A1), equation (2.3) is exponentially ergodic.

Proof

We define the Lyapunov function \(V(x)=x\). Then

$$ LV(x)= (a-bx)x + r \int_{0}^{\infty}z \nu(dz), $$

where L is the infinitesimal generator of the solution \((X_{t})_{t \in {\mathbb {R}}_{+}}\). It is easy to see that, for all \(x \in{\mathbb {R}}_{++}\), there exist two positive constants γ and K such that

$$ LV(x) + \gamma V(x) = (a + \gamma)x -bx^{2} + r \int_{0}^{\infty}z \nu (dz) \leq K, $$

which satisfies the condition for exponential ergodicity in [21]. Then our desired result follows from Theorem 6.1 of [21]. □

Remark 2.7

The results above show that stationary property and exponential ergodicity do not depend on the weight of the noise. These are different from the conditions needed in [5, 6, 8], in which the results only hold under relatively small noise.

Here is a result we will use later to prove the existence of the likelihood function.

Proposition 2.8

Suppose that assumption (A1) holds, then

$$ \int_{0}^{t} X_{s}^{2} \,ds < \infty\quad\textit{a.s.} $$

for \(t \in{\mathbb {R}}_{+}\).

Proof

From equation (2.3), for \(t\in{\mathbb {R}}_{+}\), we have

$$\begin{aligned} X_{t} + \frac{b}{2} \int_{0}^{t} X_{s}^{2} \,ds =& x_{0} + \int_{0}^{t} \biggl(aX_{s}- \frac{b}{2}X_{s}^{2} + r \int_{0}^{\infty}z \nu(dz)\biggr) \,ds \\ &{}+ \int_{0}^{t} \sigma X_{s} \,dW_{s} + \int_{0}^{t} \int_{0}^{\infty}r z \tilde{N}(ds,dz). \end{aligned}$$

By taking the expectation and noting that the function \(ax - \frac{b}{2} x^{2} + r \int_{0}^{\infty}z \nu(dz) \) is bounded above, we obtain

$$\begin{aligned} \frac{b}{2} {\mathbb {E}} \int_{0}^{t} X_{s}^{2} \,ds \leq & x_{0} + C t, \end{aligned}$$

which implies our result. □

Existence and uniqueness of MLE

In this section, we derive our maximum likelihood estimator by using semimartingale theory.

Let \({\mathbb {D}}:=D({\mathbb {R}}_{+},{\mathbb {R}})\) be the space of càdlàg functions (right-continuous with left limits) from \({\mathbb {R}}_{+}\) to \({\mathbb {R}}\). We denote by \((\mathcal{B}_{t}({\mathbb {D}}))_{t\geq0}\) the canonical filtration on \({\mathbb {D}}\). That is, for the canonical process \(\eta=(\eta _{t})_{t\geq0}\) defined by

$$ \eta_{t}: {\mathbb {D}} \ni\omega \to\omega(t) \in{\mathbb {R}}. $$

we set

$$ \mathcal{B}_{t}({\mathbb {D}}):= \bigcap_{\varepsilon>0} \sigma(\eta _{s}; s \leq t+\varepsilon). $$

Let \(\mathcal{B}({\mathbb {D}})\) be the smallest σ-algebra containing \((\mathcal{B}_{t}({\mathbb {D}}))_{t\geq0}\). We shall call \(({\mathbb {D}},\mathcal{B}({\mathbb {D}}), (\mathcal{B}_{t}({\mathbb {D}}))_{t\geq0})\) the canonical space.

In this section, we denote by \(X^{\theta}=(X^{\theta}_{t})_{t\in {\mathbb {R}}_{+}}\) the unique strong solution of equation (2.3) with parameter \(\theta=(a,b)'\). Let \({\mathbb {P}}^{\theta}\) be the probability measure induced by \(X^{\theta}\) on the canonical space, and let \({\mathbb {P}}^{\theta}_{t}\) be the restriction of \({\mathbb {P}}^{\theta}\) to the σ-algebra \(\mathcal {B}_{t}({\mathbb {D}})\). We can write equation (2.3) in the form

$$\begin{aligned} X_{t} &= x_{0} + \int_{0}^{t} (a-bX_{s}) X_{s} \,ds + \int_{0}^{t} \int_{0}^{\infty}r z 1_{\{r z \leq1\}} \nu(dz)\,ds \\ &\quad {}+ \int_{0}^{t} \sigma X_{s} \,dW_{s} + \int_{0}^{t} \int_{0}^{\infty}r z 1_{\{ r z \leq1\}} \tilde{N}(ds,dz) \\ &\quad {}+ \int_{0}^{t} \int_{0}^{\infty}r z 1_{\{r z > 1\}} N(ds,dz). \end{aligned} $$

This form is the so-called Grigelionis decomposition for a semimartingale (see, e.g., [22] Theorem 2.1.2 and [23]). It follows that, under probability measure \({\mathbb {P}}^{\theta}\), \((\eta_{t})_{t\in{\mathbb {R}}_{+}}\) is a semimartingale with semimartingale characteristics \((B^{\theta},C^{\theta},\mu ^{\theta})\), where

$$\begin{aligned}& B^{\theta}_{t}= \int_{0}^{t} \biggl[(a-b\eta_{s}) \eta_{s} + \int_{0}^{\infty}r z 1_{\{rz \leq1\}} \nu(dz) {\biggr]} \,ds, \end{aligned}$$
(3.1)
$$\begin{aligned}& C^{\theta}_{t}= \sigma^{2} \int_{0}^{t} \eta^{2}_{s} \,ds \end{aligned}$$
(3.2)

and

$$ \mu^{\theta}(dt,dz)= K(\eta_{t},dz) \,dt, $$

where K is a Borel kernel from \({\mathbb {R}}_{++}\) to \({\mathbb {R}}_{++}\) given by

$$ K(x,A)= \int_{0}^{\infty}1_{A} (rz) \nu(dz) $$

for \(t\in{\mathbb {R}}_{+}\) and \(A \in\mathcal{B}({\mathbb {R}}_{++})\).

In order to get the likelihood ratio process, we present the following result from [23], see also [15, 24].

Lemma 3.1

Let Ψ be a parametric space. For ψ, \(\tilde{\psi} \in \Psi\), let \({\mathbb {P}}^{\psi}\) and \({\mathbb {P}}^{\tilde{\psi}}\) be two probability measures on the canonical space \(({\mathbb {D}},\mathcal {B}({\mathbb {D}}),(\mathcal{B}_{t}({\mathbb {D}}))_{t\geq0})\). We assume that, under these two probability measures, the canonical process \((\eta_{t})_{t\in{\mathbb {R}}_{+}}\) is a semimartingale with characteristics \((B^{\psi},C^{\psi},\mu^{\psi})\) and \((B^{\tilde{\psi}},C^{\tilde{\psi}},\mu^{\tilde{\psi}})\), respectively. We further assume that, for each \(\phi\in\{\psi,\tilde{\psi}\}\), there exists a nondecreasing, continuous, and adapted process \((F_{t}^{\phi})_{t\in{\mathbb {R}}_{+}}\) with \(F_{0}^{\phi}=0\) and a predictable process \((c^{\phi}_{t})_{t \in{\mathbb {R}}_{+}}\) such that

$$ C^{\phi}_{t} = \int_{0}^{t} c^{\phi}_{s} \,dF_{s}^{\phi}\quad{\mathbb {P}}^{\phi}\textit{-a.s. for every }t\in{\mathbb {R}}_{+}. $$

This can be guaranteed by the condition

(B1):

\({\mathbb {P}}^{\phi}( \mu^{\phi}(\{t\}\times{\mathbb {R}})=0 )=1\) for each \(\phi\in\{\psi,\tilde{\psi}\}\).

Let \(\mathcal{P}\) be the predictable σ-algebra on \({\mathbb {D}}\times{\mathbb {R}}_{+}\). We also assume that there exist a \(\mathcal{P}\otimes\mathcal{B}({\mathbb {R}})\)-measurable function \(V^{\psi,\tilde{\psi}}: {\mathbb {D}}\times{\mathbb {R}}_{+}\times {\mathbb {R}}\to{\mathbb {R}}_{++}\) and a predictable \({\mathbb {R}}\)-valued process \(\beta^{\psi,\tilde{\psi}}\) satisfying
(B2):

\(\mu^{\psi}(dt,dz) = V^{\psi,\tilde{\psi}}(t,z) \mu^{\tilde{\psi }}(dt,dz)\),

(B3):

\(\int_{0}^{t}\int_{{\mathbb {R}}} ( \sqrt{V^{\psi,\tilde{\psi}}(s,z)} -1)^{2} \mu^{\tilde{\psi}}(ds,dz)< \infty\),

(B4):

\(B_{t}^{\psi}= B_{t}^{\tilde{\psi}} + \int_{0}^{t} c^{\psi}_{s} \beta_{s}^{\psi ,\tilde{\psi}} \,dF^{\psi}_{s} +\int_{0}^{t}\int_{|z|\leq1} z ( V^{\psi,\tilde{\psi}}(s,z) -1) \mu ^{\tilde{\psi}}(ds,dz)\),

(B5):

\(\int_{0}^{t} c^{\psi}_{s} (\beta_{s}^{\psi,\tilde{\psi}})^{2} \,dF^{\psi}_{s} < \infty\)

\({\mathbb {P}}^{\psi}\)-a.s. for every \(t \in{\mathbb {R}}_{+}\). Moreover, we assume that, for each \(\phi\in\{\psi,\tilde{\psi}\}\), local uniqueness holds for the martingale problem on the canonical space corresponding to the triple \((B^{\phi},C^{\phi},\mu^{\phi})\) with the given initial value \(x_{0}\), and \({\mathbb {P}}^{\phi}\) is the unique solution. Then, for any \(T \in{\mathbb {R}}_{+}\), \({\mathbb {P}}^{\psi}_{T}\) is absolutely continuous with respect to \({\mathbb {P}}^{\tilde{\psi}}_{T}\). The corresponding Radon–Nikodym derivative is

$$\begin{aligned} \frac{d{\mathbb {P}}_{T}^{\psi}}{d{\mathbb {P}}_{T}^{\tilde{\psi }}}(\eta) =& \exp\biggl( \int_{0}^{T} \beta_{s}^{\psi,\tilde{\psi}} \,d \eta_{s}^{\mathrm{cont}} -\frac{1}{2} \int_{0}^{T} c^{\psi}_{s} \bigl( \beta_{s}^{\psi,\tilde{\psi}}\bigr)^{2} \,dF^{\psi}_{s} \\ &{}- \int_{0}^{T} \int_{{\mathbb {R}}} \bigl( V^{\psi,\tilde{\psi}}(s,z) -1 \bigr) \mu^{\tilde{\psi}}(ds,dz) \\ &{}+ \int_{0}^{T} \int_{{\mathbb {R}}} \log\bigl( V^{\psi,\tilde{\psi}}(s,z) \bigr) N^{\eta}(ds,dz) \biggr), \end{aligned}$$

where \((\eta_{t}^{\mathrm{cont}})_{t\in{\mathbb {R}}_{+}}\) is a continuous martingale part of \((\eta_{t})_{t\in{\mathbb {R}}_{+}}\) under \({\mathbb {P}}^{\tilde{\psi }}\) and \(N^{\eta}\) is the random jump measure of process \((\eta_{t})_{t\in{\mathbb {R}}_{+}}\) defined as

$$ N^{\eta}(\omega; dt,dz):=\sum_{u} 1_{\{\Delta\eta_{u}(\omega) \neq0\}} \delta_{(u,\Delta\eta_{u}(\omega))}(dt,dz), $$

where \(\delta_{p}\) is the Dirac measure at p.

In the following, let \(\theta=(a,b)'\), \(\tilde{\theta}=(\tilde{a},\tilde{b})' \in{\mathbb {R}}_{++}^{2}\).

Proposition 3.2

Let assumption (A1) hold. Then, for all \(T \in{\mathbb {R}}_{++}\), we have

$$ {\mathbb {P}}_{T}^{\theta} \thicksim{\mathbb {P}}_{T}^{\tilde{\theta}}. $$

Moreover, under probability measure \({\mathbb {P}}^{\tilde{\theta}}\), we have

$$\begin{aligned} \log\biggl( \frac{d{\mathbb {P}}_{T}^{\theta}}{d{\mathbb {P}}_{T}^{\tilde{\theta }}}(\eta)\biggr) =& \frac{1}{\sigma^{2}} \int_{0}^{T} \biggl((a-\tilde{a}) \frac{1}{\eta _{s}}-(b-\tilde{b})\biggr) \,d\eta_{s}^{\mathrm{cont}} \\ &{}-\frac{1}{2\sigma^{2}} \int_{0}^{T} \bigl((a-\tilde{a})-(b-\tilde{b})\eta _{s}\bigr)^{2} \,ds, \end{aligned}$$

where \(\eta^{\mathrm{cont}}\) denotes the continuous martingale part of η under probability measure \({\mathbb {P}}^{\tilde{\theta}}\).

Proof

The main task is to check the conditions of Lemma 3.1; the result then follows by applying the lemma. First, it is clear that \(\mu^{\theta}\) and \(\mu^{\tilde{\theta}}\) do not depend on the unknown parameter. Hence

$$ {\mathbb {P}}^{\theta}\bigl(\mu^{\theta} \bigl(\{t\}\times{\mathbb {R}} \bigr)=0 \bigr)= {\mathbb {P}}^{\tilde{\theta}} \bigl(\mu^{\tilde{\theta}} \bigl(\{t\} \times {\mathbb {R}}\bigr)=0 \bigr)= {\mathbb {P}}^{\theta}\bigl( 0 \cdot\nu({ \mathbb {R}}_{++}) =0 \bigr)=1 $$

and \(V^{\theta, \tilde{\theta}} \equiv1\). Therefore, conditions (B1)–(B3) readily hold. From (3.1) and (3.2), we see that, for \(t \in{\mathbb {R}}_{+}\), \(c_{t}^{\theta}= \sigma^{2} \eta_{t}^{2} \) with \(F_{t}^{\theta}=t\) and

$$\begin{aligned} B_{t}^{\theta}- B_{t}^{\tilde{\theta}} &= \int_{0}^{t} \bigl((a-b\eta_{s}) \eta_{s} - (\tilde{a}-\tilde{b}\eta_{s}) \eta_{s} \bigr) \,ds \\ &= \int_{0}^{t} c_{s}^{\theta}\frac{1}{\sigma^{2}}\biggl(\frac{a-\tilde{a}}{\eta _{s}}- (b-\tilde{b})\biggr) \,ds. \end{aligned} $$

By choosing \(\beta_{t}^{\theta,\tilde{\theta}} = \frac{1}{\sigma ^{2}}(\frac{a-\tilde{a}}{\eta_{t}}- (b-\tilde{b})) \) for \(t \in {\mathbb {R}}_{+}\), we get (B4). Now we check (B5), that is, for \(t\in {\mathbb {R}}_{+}\)

$$ {\mathbb {P}}^{\theta}\biggl( \int_{0}^{t} c_{s}^{\theta}\bigl( \beta_{s}^{\theta,\tilde {\theta}}\bigr)^{2} \,ds < \infty\biggr)=1. $$

Note that

$$\begin{aligned} {\mathbb {E}}_{{\mathbb {P}}^{\theta}} \int_{0}^{t} c_{s}^{\theta}\bigl( \beta _{s}^{\theta,\tilde{\theta}}\bigr)^{2} \,ds =& {\mathbb {E}}_{{\mathbb {P}}^{\theta}} \int_{0}^{t} \frac{1}{\sigma^{2}}\bigl(a-\tilde{a}- (b-\tilde {b})\eta_{s}\bigr)^{2} \,ds \\ \leq& C\bigl(a,b,\tilde{a},\tilde{b},\sigma^{2}\bigr) {\mathbb {E}}_{{\mathbb {P}}^{\theta}} \int_{0}^{t} \bigl(1+\eta_{s}^{2}\bigr) \,ds \\ = & C\bigl(a,b,\tilde{a},\tilde{b},\sigma^{2}\bigr) {\mathbb {E}}_{{\mathbb {P}}} \int_{0}^{t} \bigl(1+X_{s}^{2}\bigr) \,ds. \end{aligned}$$

According to Proposition 2.8, we see that

$$ {\mathbb {E}}_{{\mathbb {P}}^{\theta}} \int_{0}^{t} c_{s}^{\theta}\bigl( \beta _{s}^{\theta,\tilde{\theta}}\bigr)^{2} \,ds < \infty $$

for \(t\in{\mathbb {R}}_{+}\), which implies that (B5) holds. Finally, local uniqueness for the corresponding martingale problem follows from the fact that our equation has a unique strong solution. Therefore, all the conditions of Lemma 3.1 are satisfied. For \(T \in{\mathbb {R}}_{++}\), by exchanging the roles of θ and θ̃, we obtain

$$ {\mathbb {P}}_{T}^{\theta} \thicksim{\mathbb {P}}_{T}^{\tilde{\theta}}. $$

The proof is complete. □

In the following, our aim is to estimate the parameter based on the continuous time observations of \(X^{T}:=(X_{t})_{0 \leq t \leq T}\). Now, we set \({\mathbb {P}}^{\tilde{\theta}}\) as a fixed reference measure. Since

$$\begin{aligned} d\bigl(X^{\tilde{\theta}}\bigr)^{\mathrm{cont}}_{s} =& \sigma X^{\tilde{\theta}}_{s} \,dW_{s} \\ =& dX^{\tilde{\theta}}_{s} - \bigl(\tilde{a}-\tilde{b}X^{\tilde{\theta }}_{s} \bigr)X^{\tilde{\theta}}_{s} \,ds -r \,d J_{s}, \end{aligned}$$

then under \({\mathbb {P}}\) we have

$$\begin{aligned} \log\biggl( \frac{d{\mathbb {P}}_{T}^{\theta}}{d{\mathbb {P}}_{T}^{\tilde{\theta }}}\bigl(X^{\tilde{\theta}}\bigr)\biggr) &= \frac{1}{\sigma^{2}} \int_{0}^{T} \biggl((a-\tilde{a})\frac{1}{X^{\tilde {\theta}}_{s}} -(b-\tilde{b})\biggr) \bigl[ dX^{\tilde{\theta}}_{s} - \bigl(\tilde{a}- \tilde{b}X^{\tilde{\theta }}_{s}\bigr)X^{\tilde{\theta}}_{s} \,ds -r \,d J_{s}{\bigr]} \\ &\quad {}-\frac{1}{2\sigma^{2}} \int_{0}^{T} \bigl((a-\tilde{a})-(b-\tilde {b})X^{\tilde{\theta}}_{s} \bigr)^{2} \,ds \\ &= \frac{1}{\sigma^{2}} \biggl[ (a-\tilde{a}) \int_{0}^{T} \frac {1}{X^{\tilde{\theta}}_{s}} \bigl(dX^{\tilde{\theta}}_{s} -r \,dJ_{s} \bigr) - (b-\tilde{b}) \int_{0}^{T} \bigl(dX^{\tilde{\theta}}_{s} -r \,dJ_{s} \bigr) \\ &\quad {}- \frac{1}{2}\bigl(a^{2}-\tilde{a}^{2}\bigr) T - \frac{1}{2}\bigl(b^{2}-\tilde{b}^{2}\bigr) \int_{0}^{T} {X^{\tilde{\theta}}_{s}}^{2} \,ds + (ab-\tilde{a}\tilde{b}) \int_{0}^{T} X^{\tilde{\theta}}_{s} \,ds \biggr]. \end{aligned} $$
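For completeness, the coefficients of the ds-integrals in the last display can be verified by expanding the product and collecting the constant, \(X^{\tilde\theta}_{s}\), and \((X^{\tilde\theta}_{s})^{2}\) terms:

```latex
\begin{aligned}
&\text{constant term:} &
-(a-\tilde a)\tilde a-\tfrac{1}{2}(a-\tilde a)^{2}
&=-\tfrac{1}{2}\bigl(a^{2}-\tilde a^{2}\bigr),\\
&\text{coefficient of } X^{\tilde\theta}_{s}: &
(a-\tilde a)\tilde b+(b-\tilde b)\tilde a+(a-\tilde a)(b-\tilde b)
&= ab-\tilde a\tilde b,\\
&\text{coefficient of } \bigl(X^{\tilde\theta}_{s}\bigr)^{2}: &
-(b-\tilde b)\tilde b-\tfrac{1}{2}(b-\tilde b)^{2}
&=-\tfrac{1}{2}\bigl(b^{2}-\tilde b^{2}\bigr).
\end{aligned}
```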

Next, we can define the log-likelihood function with respect to the dominating measure \({\mathbb {P}}^{\tilde{\theta}}\) as

$$\begin{aligned} l_{T}\bigl(\theta;X^{T}\bigr) =& \sigma^{2} \log\frac{d{\mathbb {P}}_{T}^{\theta} }{d{\mathbb {P}}_{T}^{\tilde{\theta}}}\bigl(X^{T}\bigr). \end{aligned}$$

Then the maximum likelihood estimator (MLE) \(\hat{\theta}_{T}\) of the unknown parameter θ is defined as

$$ \hat{\theta}_{T} := \mathop{\arg\max}_{\theta\in{\mathbb {R}}^{2}_{++}} l_{T} \bigl(\theta;X^{T}\bigr). $$

Proposition 3.3

If assumption (A1) holds, then for every \(T \in{\mathbb {R}}_{++}\), there exists a unique MLE \(\hat{\theta}_{T}\), given by

$$ \hat{\theta}_{T} = \begin{pmatrix} \dfrac{\int_{0}^{T} X_{s}^{2} \,ds \int_{0}^{T} \frac{1}{X_{s}} (dX_{s}- r \,dJ_{s}) - \int_{0}^{T} X_{s} \,ds \int_{0}^{T} (dX_{s}- r \,dJ_{s})}{T \int_{0}^{T} X_{s}^{2} \,ds - (\int_{0}^{T} X_{s} \,ds )^{2}} \\[3ex] \dfrac{\int_{0}^{T} X_{s} \,ds \int_{0}^{T} \frac{1}{X_{s}} (dX_{s}- r \,dJ_{s}) - T \int_{0}^{T} (dX_{s}- r \,dJ_{s})}{T \int_{0}^{T} X_{s}^{2} \,ds - (\int_{0}^{T} X_{s} \,ds )^{2}} \end{pmatrix} $$
(3.3)

almost surely.

Proof

By Hölder’s inequality, we have

$$ \biggl( \int_{0}^{T} X_{s} \,ds \biggr)^{2} \leq\biggl( \biggl( \int_{0}^{T} ds\biggr)^{1/2} \biggl( \int_{0}^{T} X_{s}^{2} \,ds \biggr)^{1/2}\biggr)^{2} = T \int_{0}^{T} X_{s}^{2} \,ds $$

and

$$ {\mathbb {P}}\biggl( T \int_{0}^{T} X_{s}^{2} \,ds- \biggl( \int_{0}^{T} X_{s} \,ds \biggr)^{2}=0 \biggr) = {\mathbb {P}}\bigl( X_{s} \equiv k, s \in[0,T],\text{ for some number }k \bigr). $$

From equation (1.1), we see that a constant solution is impossible. Hence,

$$ T \int_{0}^{T} X_{s}^{2} \,ds- \biggl( \int_{0}^{T} X_{s} \,ds \biggr)^{2} >0 \quad\text{a.s.} $$

It follows that (3.3) is well defined almost surely. Note that

$$ J_{t} = \sum_{0 \leq s \leq t} \Delta J_{s} = \sum_{0 \leq s \leq t} \Delta X_{s}/r. $$

Hence, for \(t \in[0,T]\), \(J_{t}\) is a measurable function of \(X^{T}\), which implies that (3.3) is a genuine statistic. Next, we have

$$\begin{aligned}& \frac{\partial}{\partial a}l_{T}\bigl(\theta;X^{T}\bigr) = \int_{0}^{T} \frac {1}{X_{s}} (dX_{s} - r \,dJ_{s} ) -a T + b \int_{0}^{T} X_{s} \,ds, \\& \frac{\partial}{\partial b}l_{T}\bigl(\theta;X^{T}\bigr) = - \int_{0}^{T} (dX_{s} -r \,dJ_{s} ) + a \int_{0}^{T} X_{s} \,ds -b \int_{0}^{T} X_{s}^{2} \,ds. \end{aligned}$$

Setting these partial derivatives equal to zero and solving the resulting linear system, we obtain the desired result. □
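On a fine observation grid, the integrals in (3.3) can be approximated by left-point sums. The following sketch is ours (function and variable names are not from the paper) and assumes that the path X and the jump component J are both recorded on the grid:

```python
import numpy as np

def mle_theta(t, X, J, r):
    """Discretized version of the joint MLE (3.3): the integrals
    int 1/X (dX - r dJ), int (dX - r dJ), int X ds, int X^2 ds
    are approximated by left-point sums on the observation grid t."""
    dt = np.diff(t)
    dM = np.diff(X) - r * np.diff(J)   # increments of dX_s - r dJ_s
    Xl = X[:-1]                        # left endpoints of the grid cells
    T = t[-1] - t[0]
    I1 = np.sum(dM / Xl)               # int 1/X (dX - r dJ)
    I2 = np.sum(dM)                    # int (dX - r dJ)
    Sx = np.sum(Xl * dt)               # int X ds
    Sxx = np.sum(Xl ** 2 * dt)         # int X^2 ds
    D = T * Sxx - Sx ** 2              # a.s. positive (Proposition 3.3)
    a_hat = (Sxx * I1 - Sx * I2) / D
    b_hat = (Sx * I1 - T * I2) / D
    return a_hat, b_hat
```

By construction, the returned pair solves the discretized score equations \(\hat{a} T - \hat{b} \int_{0}^{T} X_{s}\,ds = \int_{0}^{T} \frac{1}{X_{s}}(dX_{s}-r\,dJ_{s})\) and \(\hat{a} \int_{0}^{T} X_{s}\,ds - \hat{b} \int_{0}^{T} X_{s}^{2}\,ds = \int_{0}^{T} (dX_{s}-r\,dJ_{s})\).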

Asymptotic properties

In order to get the asymptotic properties of our estimator, we need the following result.

Proposition 4.1

Let assumptions (A1)–(A2) hold. Then, for any \(x_{0} \in{\mathbb {R}}_{++}\), there exists a positive constant C such that

$$ \limsup_{t\to\infty} \frac{1}{t} \int_{0}^{t} X_{s}^{2} \,ds \leq C \quad\textit{a.s.} $$

Proof

We follow the approach used in Lemma 4.1 of [4]. By the exponential martingale inequality, we get

$$ {\mathbb {P}}\biggl(\sup_{0 \leq t \leq k}\biggl( \int_{0}^{t}\sigma X_{s} \,dW_{s} - \frac{\alpha}{2} \int_{0}^{t} \sigma^{2} X^{2}_{s} \,ds\biggr)> \frac{2}{\alpha} \log k\biggr) \leq \frac{1}{k^{2}}, $$
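The factor \(1/k^{2}\) comes from the standard exponential martingale inequality, recalled here for completeness:

```latex
% Exponential martingale inequality: for a continuous local martingale
% (M_t) with M_0 = 0, quadratic variation <M>, and constants \alpha,\beta > 0,
{\mathbb P}\Bigl(\sup_{0\le t\le k}\Bigl(M_{t}-\frac{\alpha}{2}\langle M\rangle_{t}\Bigr)>\beta\Bigr)
\le e^{-\alpha\beta}.
% Above, M_t = \int_0^t \sigma X_s\,dW_s, so <M>_t = \int_0^t \sigma^2 X_s^2\,ds;
% the choice \beta = (2/\alpha)\log k gives e^{-\alpha\beta} = k^{-2}.
```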

where we choose \(\alpha= b/(2\sigma^{2})\). The well-known Borel–Cantelli lemma implies that for almost all \(\omega\in\Omega\), there is a random integer \(k_{0}=k_{0}(\omega)\) such that

$$ \int_{0}^{t}\sigma X_{s} \,dW_{s} \leq\frac{2}{\alpha} \log k + \frac {\alpha}{2} \int_{0}^{t} \sigma^{2} X^{2}_{s} \,ds $$

for all \(t \in[0,k]\), \(k\geq k_{0}\), almost surely. Substituting this into our equation (2.3), we have

$$\begin{aligned} X_{t} \leq& x_{0} + \frac{2}{\alpha} \log k + \int_{0}^{t} \biggl(a X_{s} - \frac {3}{4}bX_{s}^{2}+ r \int_{0}^{\infty}z \nu(dz)\biggr) \,ds \\ &{}+ \int_{0}^{t} \int_{0}^{\infty}r z \tilde{N}(ds,dz) \end{aligned}$$

for all \(t \in[0,k]\), \(k\geq k_{0}\), almost surely. Hence

$$\begin{aligned} \frac{b}{2} \int_{0}^{t} X_{s}^{2} \,ds \leq& x_{0} + \frac{2}{\alpha} \log k + \int_{0}^{t} \biggl(a X_{s} - \frac{b}{4}X_{s}^{2} + r \int_{0}^{\infty}z \nu (dz)\biggr) \,ds \\ &{}+ \int_{0}^{t} \int_{0}^{\infty}r z \tilde{N}(ds,dz) \\ \leq& x_{0} + \frac{2}{\alpha} \log k + C_{1} t+ \int_{0}^{t} \int _{0}^{\infty}r z \tilde{N}(ds,dz) \end{aligned}$$

for all \(t \in[0,k]\), \(k\geq k_{0}\), almost surely. Now, for almost all \(\omega\in\Omega\), let \(k \geq k_{0}\) and \(k-1 \leq t \leq k\); then

$$\begin{aligned} \frac{1}{t} \int_{0}^{t} X_{s}^{2} \,ds \leq& \frac{2}{(k-1)b}\biggl( x_{0} + \frac{2}{\alpha} \log k + C_{1} k + \int_{0}^{k} \int_{0}^{\infty}r z \tilde{N}(ds,dz) \biggr). \end{aligned}$$

Letting \(t \to\infty\) and hence \(k \to\infty\), we obtain

$$\begin{aligned} \limsup_{t\to\infty}\frac{1}{t} \int_{0}^{t} X_{s}^{2} \,ds \leq& \frac {2}{b}\biggl( x_{0} +C_{2} + \limsup _{k \to\infty}\frac{1}{k} \int_{0}^{k} \int _{0}^{\infty}r z \tilde{N}(ds,dz) \biggr). \end{aligned}$$
(4.1)

Under assumption (A2), note that \((\int_{0}^{t}\int_{0}^{\infty}r z \tilde {N}(ds,dz))_{t\in{\mathbb {R}}_{+}}\) is a local martingale with Meyer’s angle bracket process \((\int_{0}^{t}\int_{0}^{\infty}r^{2} z^{2}\nu (dz)\,ds)_{t\in{\mathbb {R}}_{+}}\) and

$$ \lim_{t\to\infty} \int_{0}^{t} \frac{\int_{0}^{\infty}r^{2} z^{2}\nu (dz)}{(1+s)^{2}} \,ds < \infty. $$

By using the strong law of large numbers for local martingales (Lemma A.1), we get

$$ \lim_{t\to\infty} \frac{1}{t} \int_{0}^{t} \int_{0}^{\infty}r z \tilde {N}(ds,dz) = 0 $$

almost surely. Hence, there exists a constant \(C_{2}\) such that

$$ \limsup_{k \to\infty}\frac{1}{k} \int_{0}^{k} \int_{0}^{\infty}r z \tilde{N}(ds,dz) \leq C_{2} $$

almost surely. Combining this with (4.1), we complete the proof. □

Corollary 4.2

Suppose that assumptions (A1)–(A2) hold. Then the invariant measure π has a finite second moment; moreover,

$$ \lim_{t\to\infty} \frac{1}{t} \int_{0}^{t} X_{s} \,ds = \int _{0}^{\infty}y \pi(dy) \quad\textit{a.s.} $$

and

$$ \lim_{t\to\infty} \frac{1}{t} \int_{0}^{t} X_{s}^{2} \,ds = \int _{0}^{\infty}y^{2} \pi(dy) \quad \textit{a.s.} $$

Proof

The proof of the first result is essentially the same as the proof of Theorem 4.2 in [4], and the second is the same as the proof in [20]. So, we omit them. □

In the following, we present the weak and strong consistency of our estimator.

Theorem 4.3

Under assumption (A1), the estimator \(\hat{\theta}_{T}=(\hat{a}_{T}, \hat{b}_{T})'\) of \(\theta=(a,b)'\) is weakly consistent, i.e.,

$$ \hat{\theta}_{T} \xrightarrow{\mathbb {P}} \theta\quad\textit{as }T \to \infty, $$

where \(\xrightarrow{{\mathbb {P}}}\) denotes convergence in probability. Under assumptions (A1)–(A2), the estimator \(\hat {\theta}_{T}=(\hat{a}_{T}, \hat{b}_{T})'\) of \(\theta=(a,b)'\) is strongly consistent, i.e.,

$$ \hat{\theta}_{T} \to\theta\quad\textit{a.s. as }T \to\infty. $$

Proof

We have

$$\begin{aligned}& \hat{a}_{T} = a + \frac{\int_{0}^{T} X^{2}_{s} \,ds \int_{0}^{T} \,dW_{s} -\int_{0}^{T} X_{s} \,ds \int_{0}^{T} X_{s} \,dW_{s}}{ T \int_{0}^{T} X_{s}^{2} \,ds-(\int_{0}^{T} X_{s} \,ds)^{2}} , \\& \hat{b} _{T} = b + \frac{\int_{0}^{T} X_{s} \,ds \int_{0}^{T} \,dW_{s} -T\int_{0}^{T} X_{s} \,dW_{s}}{ T \int_{0}^{T} X_{s}^{2} \,ds-(\int_{0}^{T} X_{s} \,ds)^{2}} . \end{aligned}$$

Note that

$$\begin{aligned} \hat{a}_{T} - a =& \frac{\int_{0}^{T} X^{2}_{s} \,ds \int_{0}^{T} \,dW_{s} -\int_{0}^{T} X_{s} \,ds \int_{0}^{T} X_{s} \,dW_{s}}{ T \int_{0}^{T} X_{s}^{2} \,ds-(\int_{0}^{T} X_{s} \,ds)^{2}} \\ =& \frac{\int_{0}^{T} X^{2}_{s} \,ds \int_{0}^{T} \,dW_{s}}{ T \int_{0}^{T} X_{s}^{2} \,ds-(\int_{0}^{T} X_{s} \,ds)^{2}} - \frac{\int_{0}^{T} X_{s} \,ds \int_{0}^{T} X_{s} \,dW_{s}}{ T \int_{0}^{T} X_{s}^{2} \,ds-(\int_{0}^{T} X_{s} \,ds)^{2}} \\ :=& I_{1} -I_{2}. \end{aligned}$$

Case 1: Under assumption (A1), for \(I_{1}\), we have

$$ \vert I_{1} \vert \leq \biggl\vert \int_{0}^{T} \,dW_{s}/T \biggr\vert . $$

According to the strong law of large numbers for continuous local martingales (Lemma A.2), we have

$$ \lim_{T \to\infty} \int_{0}^{T} \,dW_{s}/T =0 \quad \text{a.s.} $$

Then we obtain \(\lim_{T \to\infty} I_{1}=0\), a.s. For \(I_{2}\), we have

$$ \vert I_{2} \vert \leq \biggl\vert \frac{\int_{0}^{T} X_{s} \,ds}{ T} \frac{\int_{0}^{T} X_{s} \,dW_{s}}{\int_{0}^{T} X^{2}_{s} \,ds} \biggr\vert . $$

Note that the family \((\frac{\int_{0}^{T} X_{s} \,ds}{ T})_{T>0}\) is tight. Indeed, by Lemma 2.3, for any \(M>0\), we have

$$ {\mathbb {P}}\biggl(\frac{\int_{0}^{T} X_{s} \,ds}{ T} > M\biggr) \leq \frac{\int_{0}^{T} {\mathbb {E}}X_{s} \,ds}{ M T} \leq\frac{C}{ M} . $$
(4.2)

On the other hand, by Proposition 2.5 and Proposition 2.6, we have

$$ \lim_{T \to\infty} \frac{\int_{0}^{T} X^{2}_{s} \,ds}{T} = \int_{0}^{\infty}y^{2} \pi(dy) >0, $$

where π is the unique invariant measure. It follows that

$$ \lim_{T \to\infty} \int_{0}^{T} X^{2}_{s} \,ds = \infty\quad\text{a.s.} $$

By Proposition 2.8, we also have

$$ \int_{0}^{T} X^{2}_{s} \,ds < \infty\quad\text{a.s.} $$

for each \(T >0\). Then, again by Lemma A.2, we get

$$ \lim_{T \to\infty}\frac{\int_{0}^{T} X_{s} \,dW_{s}}{\int_{0}^{T} X^{2}_{s} \,ds} = 0 \quad \text{a.s.} $$
(4.3)

From (4.2) and (4.3), we get \(\lim_{T \to\infty} I_{2}=0\) in probability. Therefore, we obtain \(\lim_{T \to\infty} \hat{a}_{T} =a\) in probability. Similarly, we can prove \(\lim_{T \to\infty} \hat{b}_{T} =b\) in probability.

Case 2: Suppose that assumptions (A1)–(A2) hold. For \(I_{1}\), we have

$$ I_{1}= \frac{\int_{0}^{T} X^{2}_{s} \,ds /T}{ \int_{0}^{T} X_{s}^{2} \,ds/T-(\int_{0}^{T} X_{s} \,ds/T)^{2}} \int_{0}^{T} \,dW_{s}/T. $$

According to Corollary 4.2 and Lemma A.2, we have \(\lim_{T \to\infty} I_{1}=0\), a.s. For \(I_{2}\), we have

$$ I_{2}= \frac{\int_{0}^{T} X_{s} \,ds \int_{0}^{T} X^{2}_{s} \,ds /T^{2}}{ \int_{0}^{T} X_{s}^{2} \,ds/T-(\int_{0}^{T} X_{s} \,ds/T)^{2}} \frac{\int_{0}^{T} X_{s} \,dW_{s}}{\int_{0}^{T} X^{2}_{s} \,ds}. $$

Again by Corollary 4.2 and Lemma A.2, we immediately get \(\lim_{T \to\infty} I_{2}=0\) a.s. Therefore, we obtain \(\lim_{T \to\infty} \hat{a}_{T} =a\) a.s.; similarly, we can prove \(\lim_{T \to\infty} \hat{b}_{T} =b\) a.s. This completes the proof. □

For simplicity of our notations, we denote \(\mu_{1}:= \int_{0}^{\infty}y \pi(dy)\) and \(\mu_{2}:=\int_{0}^{\infty}y^{2} \pi(dy)\). Now we present the following asymptotic normality.

Theorem 4.4

Under assumptions (A1)–(A2), the estimator \(\hat{\theta}_{T}\) of θ is asymptotically normal, i.e.,

$$ \sqrt{T} (\hat{\theta}_{T}- \theta) \xrightarrow{\mathcal{D}} N(0, \Sigma) $$

as \(T \to\infty\), where \(\xrightarrow{\mathcal{D}}\) denotes the convergence in distribution, \(\Sigma=AA'\) and

$$ A= \frac{1}{\sqrt{\mu_{2}-\mu_{1}^{2}}} \begin{pmatrix} \sqrt{\mu_{2}-\mu_{1}^{2}} & -\mu_{1} \\ 0 & -1 \end{pmatrix}, \quad\text{so that } \Sigma=AA'=\frac{1}{\mu_{2}-\mu_{1}^{2}} \begin{pmatrix} \mu_{2} & \mu_{1} \\ \mu_{1} & 1 \end{pmatrix}. $$

By a random scaling, we also have

$$ \sqrt{T} \begin{pmatrix} 1 & -\frac{1}{T}\int_{0}^{T} X_{s} \,ds \\ 0 & -\sqrt{\frac{1}{T}\int_{0}^{T} X_{s}^{2} \,ds-(\frac{1}{T}\int_{0}^{T} X_{s} \,ds)^{2}} \end{pmatrix} (\hat{\theta}_{T}-\theta) \xrightarrow{\mathcal{D}} N(0,I) $$

as \(T \to\infty\), where I is the identity matrix.

Proof

We write our estimator in matrix form:

$$ \hat{\theta}_{T}-\theta= \begin{pmatrix} \frac{\int_{0}^{T} X_{s}^{2} \,ds}{T\int_{0}^{T} X_{s}^{2} \,ds-(\int_{0}^{T} X_{s} \,ds)^{2}} & -\frac{\int_{0}^{T} X_{s} \,ds}{T\int_{0}^{T} X_{s}^{2} \,ds-(\int_{0}^{T} X_{s} \,ds)^{2}} \\ \frac{\int_{0}^{T} X_{s} \,ds}{T\int_{0}^{T} X_{s}^{2} \,ds-(\int_{0}^{T} X_{s} \,ds)^{2}} & -\frac{T}{T\int_{0}^{T} X_{s}^{2} \,ds-(\int_{0}^{T} X_{s} \,ds)^{2}} \end{pmatrix} \begin{pmatrix} \int_{0}^{T} \,dW_{s} \\ \int_{0}^{T} X_{s} \,dW_{s} \end{pmatrix}. $$

Let

$$ M_{t}:= \begin{pmatrix} \int_{0}^{t} \,dW_{s} \\ \int_{0}^{t} X_{s} \,dW_{s} \end{pmatrix}, $$

then \((M_{t})_{t\in{\mathbb {R}}_{+}}\) is a 2-dimensional continuous local martingale with \(M_{0}=0\) a.s. and with quadratic variation process

$$ [M]_{t}= \begin{pmatrix} t & \int_{0}^{t} X_{s} \,ds \\ \int_{0}^{t} X_{s} \,ds & \int_{0}^{t} X_{s}^{2} \,ds \end{pmatrix}. $$

Let

$$ Q(t):= \begin{pmatrix} 1/\sqrt{t} & 0 \\ 0 & 1/\sqrt{t} \end{pmatrix}. $$

Then, by Corollary 4.2, we have

$$ Q(t)[M]_{t}Q(t)' \to \begin{pmatrix} 1 & \mu_{1} \\ \mu_{1} & \mu_{2} \end{pmatrix} =\zeta\zeta' \quad\text{a.s. as } t \to\infty, $$

where

$$ \zeta:= \begin{pmatrix} 1 & 0 \\ \mu_{1} & \sqrt{\mu_{2}-\mu_{1}^{2}} \end{pmatrix}. $$

By applying Lemma A.3, we get

$$ 1/\sqrt{T} M_{T} \xrightarrow{\mathcal{D}} \zeta Z \quad \text{as $T \to\infty$,} $$
(4.4)

where Z is a 2-dimensional standard normal random vector. Note that, again by Corollary 4.2,

$$ T \begin{pmatrix} \frac{\int_{0}^{T} X_{s}^{2} \,ds}{T\int_{0}^{T} X_{s}^{2} \,ds-(\int_{0}^{T} X_{s} \,ds)^{2}} & -\frac{\int_{0}^{T} X_{s} \,ds}{T\int_{0}^{T} X_{s}^{2} \,ds-(\int_{0}^{T} X_{s} \,ds)^{2}} \\ \frac{\int_{0}^{T} X_{s} \,ds}{T\int_{0}^{T} X_{s}^{2} \,ds-(\int_{0}^{T} X_{s} \,ds)^{2}} & -\frac{T}{T\int_{0}^{T} X_{s}^{2} \,ds-(\int_{0}^{T} X_{s} \,ds)^{2}} \end{pmatrix} \to \frac{1}{\mu_{2}-\mu_{1}^{2}} \begin{pmatrix} \mu_{2} & -\mu_{1} \\ \mu_{1} & -1 \end{pmatrix} \quad\text{a.s. as } T \to\infty.
(4.5)

Combining (4.4) with (4.5), by using Slutsky’s lemma, we have

$$\begin{aligned} \sqrt{T}(\hat{\theta}_{T}-\theta) &= T \begin{pmatrix} \frac{\int_{0}^{T} X_{s}^{2} \,ds}{T\int_{0}^{T} X_{s}^{2} \,ds-(\int_{0}^{T} X_{s} \,ds)^{2}} & -\frac{\int_{0}^{T} X_{s} \,ds}{T\int_{0}^{T} X_{s}^{2} \,ds-(\int_{0}^{T} X_{s} \,ds)^{2}} \\ \frac{\int_{0}^{T} X_{s} \,ds}{T\int_{0}^{T} X_{s}^{2} \,ds-(\int_{0}^{T} X_{s} \,ds)^{2}} & -\frac{T}{T\int_{0}^{T} X_{s}^{2} \,ds-(\int_{0}^{T} X_{s} \,ds)^{2}} \end{pmatrix} \frac{1}{\sqrt{T}} M_{T} \\ &\xrightarrow{\mathcal{D}} \frac{1}{\mu_{2}-\mu_{1}^{2}} \begin{pmatrix} \mu_{2} & -\mu_{1} \\ \mu_{1} & -1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ \mu_{1} & \sqrt{\mu_{2}-\mu_{1}^{2}} \end{pmatrix} Z = \frac{1}{\sqrt{\mu_{2}-\mu_{1}^{2}}} \begin{pmatrix} \sqrt{\mu_{2}-\mu_{1}^{2}} & -\mu_{1} \\ 0 & -1 \end{pmatrix} Z = AZ \end{aligned}$$

as \(T \to\infty\). We have proved the first result. Next, it is easy to see that

$$ \begin{pmatrix} 1 & -\frac{1}{T}\int_{0}^{T} X_{s} \,ds \\ 0 & -\sqrt{\frac{1}{T}\int_{0}^{T} X_{s}^{2} \,ds-(\frac{1}{T}\int_{0}^{T} X_{s} \,ds)^{2}} \end{pmatrix} \to \begin{pmatrix} 1 & -\mu_{1} \\ 0 & -\sqrt{\mu_{2}-\mu_{1}^{2}} \end{pmatrix} = A^{-1} $$

almost surely as \(T \to\infty\). Again by Slutsky’s lemma, we have

$$ \sqrt{T} \begin{pmatrix} 1 & -\frac{1}{T}\int_{0}^{T} X_{s} \,ds \\ 0 & -\sqrt{\frac{1}{T}\int_{0}^{T} X_{s}^{2} \,ds-(\frac{1}{T}\int_{0}^{T} X_{s} \,ds)^{2}} \end{pmatrix} (\hat{\theta}_{T}-\theta) \xrightarrow{\mathcal{D}} A^{-1}AZ = Z \quad\text{as } T \to\infty. $$

This finishes the proof. □

Simulation results

In this section, we present some computer simulations. First, we apply the Euler–Maruyama method to illustrate the stationary distribution of equation (1.1) under assumption (A1). We consider the following two examples.

Example 5.1

Let \(a=5\), \(b=1\), \(\sigma=1\), \(r=1\), and \(x_{0}=10\) for equation (1.1). Let \((J_{t})_{t\geq0}\) be a Poisson process with intensity 1. Note that the Poisson process with intensity 1 is a subordinator with Lévy measure \(\nu(dz)=\delta_{1}(dz)\). It follows from Proposition 2.5 that there is a unique stationary distribution. We apply the Euler–Maruyama method to simulate a single path of \(X_{t}\) over 30,000 iterations with initial value \(x_{0}=10\), \(T=30\), and step size \(\Delta=0.001\), which is shown in Fig. 1.
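For readers who wish to reproduce this experiment, the scheme can be sketched as follows (an illustrative sketch of ours, assuming NumPy; variable names are our own):

```python
import numpy as np

rng = np.random.default_rng(2024)

# Parameters of Example 5.1.
a, b, sigma, r = 5.0, 1.0, 1.0, 1.0
x0, T, dt = 10.0, 30.0, 0.001
n = int(round(T / dt))  # 30,000 Euler-Maruyama steps

X = np.empty(n + 1)
X[0] = x0
for k in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))  # Brownian increment over [t, t + dt]
    dJ = rng.poisson(dt)               # increment of a Poisson process with intensity 1
    X[k + 1] = X[k] + X[k] * (a - b * X[k]) * dt + sigma * X[k] * dW + r * dJ
```

A histogram of such a path gives the approximation of the stationary distribution shown in the right panel of Fig. 1.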

Figure 1
figure1

(Left) Computer simulation of 30,000 iterations of a single path \(X_{t}\) of Example 5.1. (Right) The histogram of the path

Example 5.2

Let \(a=5\), \(b=1\), \(\sigma=1\), \(r=1\), and \(x_{0}=10\) for equation (1.1). Let \((J_{t})_{t\geq0}\) be a compound Poisson process with exponentially distributed jump sizes, namely with Lévy measure

$$ \nu(dz)=c \lambda e^{- \lambda z} I_{(0,\infty)}(z) \,dz. $$

We set \(c=1\) and \(\lambda=10\). It is easy to see that ν satisfies assumption (A1). Again, by Proposition 2.5, there is a unique stationary distribution. We apply the Euler–Maruyama method to simulate a single path of \(X_{t}\) over 2000 iterations with initial value \(x_{0}=10\), \(T=20\), and step size \(\Delta=0.01\), which is shown in Fig. 2.
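The increments of this compound Poisson subordinator can be sampled directly (again an illustrative sketch of ours, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(7)

# Levy measure nu(dz) = c * lam * exp(-lam * z) dz of Example 5.2.
c, lam = 1.0, 10.0
dt, n = 0.01, 2000

# Over a step of length dt the subordinator makes Poisson(c * dt) jumps,
# each with an Exp(lam) size (mean 1 / lam).
counts = rng.poisson(c * dt, size=n)
dJ = np.array([rng.exponential(1.0 / lam, size=k).sum() for k in counts])

# Since E[J_t] = c * t / lam, the increments average about c * dt / lam.
```

These increments `dJ` replace the Poisson increments in the Euler–Maruyama loop of Example 5.1.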

Figure 2
figure2

(Left) Computer simulation of 2000 iterations of a single path \(X_{t}\) of Example 5.2. (Right) The histogram of the path

The simulated paths in Fig. 1 and Fig. 2 exhibit stationary behavior, and the distributions implied by their histograms approximate the corresponding stationary distributions.

Next, we exhibit the consistency of the MLE. It follows from Theorem 3.3 that our MLE is

$$ \hat{\theta}_{T}= \begin{pmatrix} \frac{\int_{0}^{T} X_{s}^{2} \,ds \int_{0}^{T} \frac{1}{X_{s}}(dX_{s}-r\,dJ_{s}) - \int_{0}^{T} X_{s} \,ds \int_{0}^{T} (dX_{s}-r\,dJ_{s})}{T\int_{0}^{T} X_{s}^{2} \,ds - (\int_{0}^{T} X_{s} \,ds)^{2}} \\ \frac{\int_{0}^{T} X_{s} \,ds \int_{0}^{T} \frac{1}{X_{s}}(dX_{s}-r\,dJ_{s}) - T\int_{0}^{T} (dX_{s}-r\,dJ_{s})}{T\int_{0}^{T} X_{s}^{2} \,ds - (\int_{0}^{T} X_{s} \,ds)^{2}} \end{pmatrix}. $$
(5.1)

We perform 1000 Monte Carlo simulations of the sample paths generated by Example 5.1 and Example 5.2. The results are presented in Table 1. The estimation errors become smaller as the observation time increases, which is consistent with our theoretical results.

Table 1 Mean and standard deviation of the estimators
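To make the discretization concrete, here is a sketch (ours, assuming NumPy) that approximates the integrals in (5.1) by Euler sums along a simulated path of Example 5.1; for large T the estimates should be close to the true \((a,b)\):

```python
import numpy as np

rng = np.random.default_rng(3)

# True parameters of Example 5.1 and a long observation window.
a, b, sigma, r = 5.0, 1.0, 1.0, 1.0
x0, T, dt = 10.0, 100.0, 0.001
n = int(round(T / dt))

# Simulate a path, keeping the jump increments (Poisson process, intensity 1).
dW = rng.normal(0.0, np.sqrt(dt), size=n)
dJ = rng.poisson(dt, size=n)
X = np.empty(n + 1)
X[0] = x0
for k in range(n):
    X[k + 1] = X[k] + X[k] * (a - b * X[k]) * dt + sigma * X[k] * dW[k] + r * dJ[k]

# Euler approximations of the integrals appearing in the estimator (5.1).
dX = np.diff(X)
I1 = np.sum(X[:-1] ** 2) * dt        # int_0^T X_s^2 ds
I2 = np.sum(X[:-1]) * dt             # int_0^T X_s ds
S1 = np.sum((dX - r * dJ) / X[:-1])  # int_0^T (1/X_s)(dX_s - r dJ_s)
S2 = np.sum(dX - r * dJ)             # int_0^T (dX_s - r dJ_s)

den = T * I1 - I2 ** 2
a_hat = (I1 * S1 - I2 * S2) / den
b_hat = (I2 * S1 - T * S2) / den     # (a_hat, b_hat) should be close to (5, 1)
```

Repeating this over many seeds gives Monte Carlo tables in the spirit of Table 1.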

Finally, we investigate the asymptotic distribution of the MLE in (5.1). That is, we will focus on the distribution of the following statistic:

$$ \beta_{T}:= \sqrt{T} \begin{pmatrix} 1 & -\frac{1}{T}\int_{0}^{T} X_{s} \,ds \\ 0 & -\sqrt{\frac{1}{T}\int_{0}^{T} X_{s}^{2} \,ds-(\frac{1}{T}\int_{0}^{T} X_{s} \,ds)^{2}} \end{pmatrix} (\hat{\theta}_{T}-\theta). $$

We perform 1000 Monte Carlo simulations for Example 5.1 with \(a=1\), \(b=7\), \(\sigma=1\), \(r=1\), \(T=10\), \(\Delta=0.01\), and \(x_{0}=10\). The 3D histogram of the 1000 simulations is presented in Fig. 3. Comparing it to the 3D histogram of the standard normal distribution (Fig. 3), we can see the tendency toward joint normality. The trend toward normality of each element of \(\beta_{T}\) can be seen from Fig. 4, where the histogram of each element is given.
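A single Monte Carlo draw of \(\beta_{T}\) can be computed in the same way (an illustrative sketch of ours, assuming NumPy; we start from \(x_{0}=1\) and use a finer step than in the figure to shorten the initial transient):

```python
import numpy as np

rng = np.random.default_rng(5)

# Parameters used for Fig. 3 (except x0 and the step size, changed here).
a, b, sigma, r = 1.0, 7.0, 1.0, 1.0
x0, T, dt = 1.0, 10.0, 0.001
n = int(round(T / dt))

dW = rng.normal(0.0, np.sqrt(dt), size=n)
dJ = rng.poisson(dt, size=n)  # Poisson jumps with intensity 1
X = np.empty(n + 1)
X[0] = x0
for k in range(n):
    X[k + 1] = X[k] + X[k] * (a - b * X[k]) * dt + sigma * X[k] * dW[k] + r * dJ[k]

# MLE via Euler sums, then the random scaling of Theorem 4.4.
dX = np.diff(X)
I1, I2 = np.sum(X[:-1] ** 2) * dt, np.sum(X[:-1]) * dt
S1 = np.sum((dX - r * dJ) / X[:-1])
S2 = np.sum(dX - r * dJ)
den = T * I1 - I2 ** 2
theta_hat = np.array([(I1 * S1 - I2 * S2) / den, (I2 * S1 - T * S2) / den])

D = np.array([[1.0, -I2 / T],
              [0.0, -np.sqrt(I1 / T - (I2 / T) ** 2)]])
beta = np.sqrt(T) * D @ (theta_hat - np.array([a, b]))
```

Collecting `beta` over 1000 seeds produces a 2D sample whose histogram can be compared with the standard normal, as in Figs. 3 and 4.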

Figure 3
figure3

(Left) 3D histogram of 1000 Monte Carlo simulations of \(\beta _{T}\) of Example 5.1 with \(a=1\), \(b=7\), \(\sigma=1\), \(r=1\), \(T=10\), \(\Delta=0.01\), and \(x_{0}=10\). (Right) The 3D histogram of 1000 random vectors from the 2-dimensional standard normal distribution

Figure 4
figure4

1000 Monte Carlo simulations of Example 5.1 with \(a=1\), \(b=7\), \(\sigma=1\), \(r=1\), \(T=10\), \(\Delta=0.01\), and \(x_{0}=10\). (Left) The histogram of the first element of \(\beta_{T}\). (Right) The histogram of the second element of \(\beta_{T}\)

Conclusions

In this paper, we consider a stochastic Lotka–Volterra model with both multiplicative Brownian noises and additive jump noises. Some desired properties of the solution, such as existence and uniqueness of positive strong solution, unique stationary distribution, and exponential ergodicity, are proved. We also investigate the maximum likelihood estimation for the drift coefficients based on continuous time observations. The likelihood function and explicit estimator are derived by using semimartingale theory, and then consistency and asymptotic normality of the estimator are proved. Finally, we give some computer simulations, which are consistent with our theoretical results. The case with multiplicative jump noises will be the subject of future investigation.

References

  1. Bahar, A., Mao, X.: Stochastic delay Lotka–Volterra model. J. Math. Anal. Appl. 292(2), 364–380 (2004)

  2. Bahar, A., Mao, X.: Stochastic delay population dynamics. Int. J. Pure Appl. Math. 11, 377–400 (2004)

  3. Mao, X., Marion, G., Renshaw, E.: Environmental Brownian noise suppresses explosions in population dynamics. Stoch. Process. Appl. 97(1), 95–110 (2002)

  4. Mao, X.: Stationary distribution of stochastic population systems. Syst. Control Lett. 60(6), 398–405 (2011)

  5. Bao, J., Mao, X., Yin, G., Yuan, C.: Competitive Lotka–Volterra population dynamics with jumps. Nonlinear Anal., Theory Methods Appl. 74(17), 6601–6616 (2011)

  6. Bao, J., Yuan, C.: Stochastic population dynamics driven by Lévy noise. J. Math. Anal. Appl. 391(2), 363–375 (2012)

  7. Tong, J., Zhang, Z., Bao, J.: The stationary distribution of the facultative population model with a degenerate noise. Stat. Probab. Lett. 83(2), 655–664 (2013)

  8. Zhang, Z., Zhang, X., Tong, J.: Exponential ergodicity for population dynamics driven by α-stable processes. Stat. Probab. Lett. 125, 149–159 (2017)

  9. Spagnolo, B., Valenti, D., Fiasconaro, A.: Noise in ecosystems: a short review. Math. Biosci. Eng. 1(1), 185–211 (2004)

  10. Valenti, D., Fiasconaro, A., Spagnolo, B.: Stochastic resonance and noise delayed extinction in a model of two competing species. Phys. A, Stat. Mech. Appl. 331(3–4), 477–486 (2004)

  11. La Cognata, A., Valenti, D., Dubkov, A.A., Spagnolo, B.: Dynamics of two competing species in the presence of Lévy noise sources. Phys. Rev. E 82(1), 011121 (2010)

  12. Aït-Sahalia, Y.: Transition densities for interest rate and other nonlinear diffusions. J. Finance 54(4), 1361–1395 (1999)

  13. Li, C.: Maximum-likelihood estimation for diffusion processes via closed-form density expansions. Ann. Stat. 41(3), 1350–1380 (2013)

  14. Li, C., Chen, D.: Estimating jump-diffusions using closed-form likelihood expansions. J. Econom. 195(1), 51–70 (2016)

  15. Barczy, M., Alaya, M.B., Kebaier, A., Pap, G.: Asymptotic properties of maximum likelihood estimator for the growth rate for a jump-type CIR process based on continuous time observations. arXiv preprint. arXiv:1609.05865 (2016)

  16. Li, Z., Ma, C.: Asymptotic properties of estimators in a stable Cox–Ingersoll–Ross model. Stoch. Process. Appl. 125(8), 3196–3233 (2015)

  17. Kutoyants, Y.A.: Statistical Inference for Ergodic Diffusion Processes. Springer, Berlin (2010)

  18. Applebaum, D.: Lévy Processes and Stochastic Calculus, 2nd edn. Cambridge University Press, Cambridge (2009)

  19. Valenti, D., Denaro, G., Spagnolo, B., Mazzola, S., Basilone, G., Conversano, F., Bonanno, A.: Stochastic models for phytoplankton dynamics in Mediterranean Sea. Ecol. Complex. 27, 84–103 (2016)

  20. Khasminskii, R.: Stochastic Stability of Differential Equations, vol. 66. Springer, Berlin (2011)

  21. Meyn, S.P., Tweedie, R.L.: Stability of Markovian processes III: Foster–Lyapunov criteria for continuous-time processes. Adv. Appl. Probab. 25(3), 518–548 (1993)

  22. Jacod, J., Protter, P.: Discretization of Processes. Stochastic Modelling and Applied Probability, vol. 67. Springer, Berlin (2011)

  23. Jacod, J., Shiryaev, A.N.: Limit Theorems for Stochastic Processes, vol. 288. Springer, Berlin (2013)

  24. Sorensen, M.: Likelihood methods for diffusions with jumps. In: Statistical Inference in Stochastic Processes, pp. 67–105 (1991)

  25. Liptser, R.S.: A strong law of large numbers for local martingales. Stochastics 3(1–4), 217–228 (1980)

  26. Liptser, R.S., Shiryayev, A.N.: Statistics of Random Processes II. Applications, 2nd edn. Springer, Berlin (2001)

  27. van Zanten, H.: A multivariate central limit theorem for continuous local martingales. Stat. Probab. Lett. 50(3), 229–235 (2000)


Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (11401029), Teacher Research Capacity Promotion Program of Beijing Normal University Zhuhai, the National Natural Science Foundation of China (11671104), the National Natural Science Foundation of China (71761019), and Jiangxi Provincial Natural Science Foundation (20171ACB21022). The authors appreciate the anonymous referees for their valuable suggestions and questions.

Author information

The first author and the corresponding author contributed to Sects. 1, 2, 3, and 4. The third author contributed to Sect. 5. All authors read and approved the final manuscript.

Correspondence to Chongqi Zhang.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix:  Limit theorems for local martingales

In this section, we recall some limit theorems for local martingales. The first one is a strong law of large numbers for local martingales, e.g., [25].

Lemma A.1

Let \((M_{t})_{t \in{\mathbb {R}}_{+}}\) be a one-dimensional local martingale vanishing at time \(t=0\). For \(t \in{\mathbb {R}}_{+}\), we define

$$ \rho_{M}(t):= \int_{0}^{t} \frac{1}{(1+s)^{2}}\, d{\langle}M{ \rangle}_{s}, $$

where \(({\langle}M{\rangle}_{t})_{t\in{\mathbb {R}}_{+}}\) is Meyer’s angle bracket process. Then

$$ \lim_{t\to\infty} \rho_{M}(t) < \infty\quad\textit{a.s.} $$

implies

$$ \lim_{t\to\infty} \frac{M_{t}}{t}=0 \quad\textit{a.s.} $$

The next result is a strong law of large numbers for continuous local martingales, see, e.g., Lemma 17.4 of [26].

Lemma A.2

Let \((M_{t})_{t \in{\mathbb {R}}_{+}}\) be a one-dimensional square-integrable continuous local martingale vanishing at time \(t=0\). Let \(([M]_{t})_{t\in{\mathbb {R}}_{+}}\) be the quadratic variation process of M such that, for \(t \in{\mathbb {R}}_{+}\),

$$ [M]_{t} < \infty \quad\textit{a.s.} $$

and

$$ [M]_{t} \to\infty\quad\textit{a.s. as }t\to\infty. $$

Then

$$ \lim_{t \to\infty} \frac{M_{t}}{[M]_{t}} =0 \quad\textit{a.s.} $$

The last one is about the asymptotic behavior of continuous multivariate local martingales, see Theorem 4.1 of [27].

Lemma A.3

Let \((M_{t})_{t \in{\mathbb {R}}_{+}}\) be a d-dimensional square-integrable continuous local martingale vanishing at time \(t=0\). Suppose that there exists a function \(Q: {\mathbb {R}}_{+} \to{\mathbb {R}}^{d\times d}\) such that \(Q(t)\) is an invertible (non-random) matrix for all \(t \in{\mathbb {R}}_{+}\), \(\lim_{t \to\infty} \Vert Q(t) \Vert=0\) and

$$ Q(t)[M]_{t} Q(t)^{T} \xrightarrow{{\mathbb {P}}}\zeta \zeta^{T} \quad\textit{as }t\to\infty, $$

where \(\Vert Q(t) \Vert:= \sup\{|Q(t)x| : x\in{\mathbb {R}}^{d}, |x|=1 \}\), \([M]_{t}\) is the quadratic variation process of M and ζ is a \(d\times d\) random matrix. Then

$$ Q(t)M_{t} \xrightarrow{\mathcal{D}} \zeta Z \quad\textit{as }t\to \infty, $$

where Z is a d-dimensional standard normally distributed random vector independent of ζ.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Keywords

  • Stochastic Lotka–Volterra model
  • Subordinator
  • Maximum likelihood estimation
  • Stationary distribution